I'm always torn when I see anything mentioning running an init system in a container. On one hand, I guess it's good that it's designed with that use case in mind. Mainly, though, I've just seen too many overly complicated things attempted (on greenfield even) inside a single container when they should have instead been designed for kubernetes/cloud/whatever-they-run-on directly and more properly decoupled.
It's probably just one of those "people are going to do it anyway" things. But I'm not sure if it's better to "do it better" and risk spreading the problem, or leave people with older solutions that fail harder.
I've used several hosting providers that charge by the container: Fly.io, Render, and Google Cloud Run.
I often find myself wanting to run more than one process in a container for pricing reasons.
Yes, application containers should stick to the Unix philosophy of, "do one thing and do it well." But if the thing in your docker container forks for _any_ reason, you should have a real init on PID 1.
Is there any issue besides the potential zombies? Also, why can't the real PID 1 do it? It sees all the processes, after all.
Mostly just zombies and signal handlers.
And your software can do it, if it's written with the assumption that it will be PID 1, but most non-init software isn't written that way. Rather than rewrite yours to handle it, it's easier to just reach for something like tini, which does it already with very little overhead.
I'd recommend reading the tini readme[0] and its linked discussion for full detail.
[0]: https://github.com/krallin/tini
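To make that concrete, here's a minimal sketch (the app name is just a placeholder): Docker can inject tini for you with the --init flag, or you can vendor it yourself as the entrypoint, as the tini readme shows.

    # let Docker run tini (docker-init) as PID 1; it reaps zombies
    # and forwards signals to the actual workload
    docker run --init --rm my-forking-app

    # roughly equivalent if you ship tini in the image yourself:
    #   ENTRYPOINT ["/tini", "--"]
    #   CMD ["my-forking-app"]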
From my experience in the robotics space, a lot of containers start life as "this used to be a bare metal thing and then we moved it into a container", and with a lot of unstructured RPC going on between processes, there's little benefit in breaking up the processes into separate containers.
Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
My experience in the robotics space is that containers are a way to avoid knowing how to put a system together properly. It's the quick equivalent of "I install it on my Ubuntu, then I clone my whole system into a .iso and I call that a distribution". Most of the time it's distributed without any consideration for the open source licences that are part of it.
I've always advocated against containers as a means of deploying software to robots simply because to my mind it doesn't make sense— robots are full of bare-metal concerns, whether it's udev rules, device drivers, network config, special kernel or bootloader setup, never mind managing the container runtime itself including startup, updating, credentials, and all the rest of it. It's always felt to me like by the time you put in place mechanisms to handle all that crap outside the container, you might as well just be building a custom bare metal image and shipping that— have A/B partitions so you copy an update from the network to the other partition, use grub chainloading, wipe hands on pants.
The concern regarding license-adherence is orthogonal to all that but certainly valid. I think with the ROS ecosystem in particular there is a lot of "lol everything is BSD/Apache2 so we don't even have to think about it", without understanding that these licenses still have an attribution requirement.
> Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
Did docker+systemd get fixed at some point? I would be surprised to hear that it was popular, given the hoops you had to jump through the last time I looked at it.
It's only really fixed in podman, with the special `--systemd=always` flag. Docker afaik still requires manually disabling certain services that will conflict with the host and then running the whole thing as privileged— basically, a mess.
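For what it's worth, a rough sketch of the podman side (image name hypothetical): --systemd defaults to true, which only kicks in when the container command is systemd/init, while "always" forces the special setup regardless of the command.

    # run systemd as PID 1 in the container; systemd mode makes podman
    # mount /run, /tmp and the cgroup hierarchy the way systemd expects
    podman run -d --name ctr --systemd=always my-systemd-image /sbin/init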
tmux?! Please share your war stories.
Not my favoured approach, but for early stage systems where proper off-board observability/alerting is not yet in place, tmux can function as a kind of ssh-accessible dashboard displaying the stdout of key running processes, and also allowing some measure of inline recovery— like if a process has crashed, you can up-arrow and relaunch it in the same environment it crashed out of.
Obviously not an approach that scales, but I think it can also work decently well as a dev environment, where you want to run "stock" for most of the components in the system, and just be syncing in an updated workspace and restarting the one bit being actively developed on. Being able to do this without having to reason about a whole tree of interlinked startup units or whatever does lower the barrier to entry somewhat.
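As a rough sketch of that kind of setup (session, window, and script names are made up for illustration):

    # detached session acting as a poor man's dashboard; attach over
    # ssh later with: tmux attach -t robot
    tmux new-session -d -s robot -n driver './run_driver.sh'
    tmux new-window -t robot -n planner './run_planner.sh'
    tmux new-window -t robot -n logs 'tail -F /var/log/robot.log'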
I would like a comparison with runit, which is a very minimal but almost full-fledged init system. I see many similarities: control directories, no declarative dependencies, a similar set of scripts, the same approach to logging. The page mentions runit in passing, and even suggests using the chpst utility from it.
One contrasting feature is parametrized services: several similar processes (like agetty) can be controlled by one service directory; I find it neat.
Another difference is the ability to initiate reboot or shutdown as an action of the same binary (nitroctl).
Also, it's a single binary; runit has several.
Leah Neukirchen is an active member of the Void Linux community, so I expect a lot of cross-pollination here. It would be really great if she could write up something on how to use it for Void.
I've gotten used to runit via Void Linux, and while it does the job of an init system, its UI and documentation leave something to be desired. The way logging is configured in particular was an exercise in frustration the last time I tried to set it up for a service.
I wouldn't mind trying something else that is as simple, but has sane defaults, better documentation, and a more intuitive UI.
Logging in runit seems simple (I don't remember running into problems), but indeed, the documentation leaves much to be desired. Could be a good thing to contribute to the Void Handbook.
It will be interesting to compare this to dinit[1], which is used by chimera-linux.
Giving the readme a brief scan, it doesn't look like it currently handles service dependencies?
[1]: https://github.com/davmac314/dinit
Nitro does not handle service dependencies declaratively; you cannot get a neat graph of them in one command.
You can still request other services to start from your setup script and expect nitro to wait, retrying your service until the dependency is running. To get a nice graph, you can write a simple script using grep. OTOH it's easy to forget to require the shutdown of dependent services when your service goes down, and there's no way to discover that with a nitro utility.
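A hedged sketch of what that can look like (service names, paths, and the exact nitroctl verb are assumptions on my part; check the nitro README for the real interface):

    #!/bin/sh
    # hypothetical /etc/nitro/web/setup: ask for the database and fail
    # until it is actually up, so nitro keeps retrying this service
    nitroctl start db        # assumed subcommand, verify against the docs
    pg_isready -q || exit 1

And the crude "dependency graph" is just grepping every setup script for those calls:

    grep -R nitroctl /etc/nitro/*/setup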
Thanks for the info!
If I may plug my friend and colleague's work: https://nixos.org/manual/nixos/unstable/#modular-services has just landed in Nixpkgs.
This will be a game changer for porting NixOS to new init systems, and even new kernels.
So it's a good time to be experimenting with things like Nitro here!
How does this compare to s6? I recently used it to setup an init system in docker containers & was wondering if nitro would be a good alternative (there's a lot of files I had to setup via s6-overlay that wasn't as intuitive as I would've hoped).
S6 is way more complex and rich. Nitro or runit would be simpler alternatives; maybe even https://github.com/krallin/tini.
At Distrust, we wrote a dead simple init system in rust that is used by a few clients in production with security critical enclave use cases.
<500 lines and uses only the rust standard library to make auditing easy.
https://git.distrust.co/public/nit
Likely neat (33% larger than nit), but the readme only explains how to build it, not its interface or functioning.
Yeah, we only recently broke it out as a standalone repo/binary, as everyone historically vendored it, so docs will get love soon. It will be part of the next stagex release, built deterministically and signed by multiple parties, as stagex/user-nit.
To run it, all you need to do is put it in your filesystem as "/init" and then add this to your kernel command line, pointing at the binary you want nit to pivot to after bringing the system up:
nit.target=/path/to/binary
That's it. Minimum viable init for single application appliance/embedded linux use cases.
nit and your target binary are the only things you actually need to have in your CPIO root filesystem. Can be empty otherwise.
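As a sketch of how small that image can be (file names and paths are placeholders, not from the nit docs):

    # root filesystem containing only nit (as /init) and the target binary
    mkdir -p rootfs
    cp nit rootfs/init
    cp myapp rootfs/myapp
    (cd rootfs && find . | cpio -o -H newc) | gzip > initramfs.cpio.gz
    # boot the kernel with this initramfs and: nit.target=/myapp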
So it's basically like tini (keep a single executable running), but in Rust?
The name & function overlap with AWS Nitro is severe:
https://docs.aws.amazon.com/whitepapers/latest/security-desi...
I'd recommend changing names; nitro is already a semi-popular server engine for Node.js: https://nitro.build/
Any well-known generic word is very likely to already have been used by a bunch of projects, some of them already prominent. By now, the best project name is a pronounceable but unique string, for ease of search engine use. Ironically, "systemd" is a good name in this regard, as are "runit" or even "s6".
I use tiny init systems regularly in AWS Nitro Enclaves. Having the enclave and init system both named nitro is not ideal.
Dinit, runit, tini -- all avoid the name clash :)
Love to see new init projects. How does it stack up against runit (the last one I really familiarized myself with, on Void Linux)?
She credits runit and daemontools as inspiration, and it looks extremely similar. I hope that at some point she writes a comparison explaining what Nitro does differently from runit and why.