Projects like this and Docker make me seriously wonder where software engineering is going. Don't get me wrong, I don't mean to criticize Docker or Toro in particular. It's the increasing dependency on such approaches that bothers me.
Docker was conceived to solve the problem of things "working on my machine", and not anywhere else. This was generally caused by the differences in the configuration and versions of dependencies. Its approach was simple: bundle both of these together with the application in unified images, and deploy these images as atomic units.
Somewhere along the line, however, the problem has mutated into "works on my container host". How is that possible? It turns out that with larger modular applications, the configuration and dependencies naturally demand separation. This results in them moving up a layer, creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
Now, hardware virtualization. I like how AArch64 generalizes this: there are four levels of privilege baked into the architecture. Each has control over the levels below it and can call into the one immediately above to request a service. Simple. Let's narrow our focus to the lowest three: EL0 (classically the user space), EL1 (the kernel), and EL2 (the hypervisor). EL0, in most operating systems, isn't capable of doing much on its own; its sole purpose is to do raw computation and request I/O from EL1. EL1, on the other hand, has the power to talk to the hardware directly.
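To make that EL0-to-EL1 "request a service" step concrete, here is a minimal sketch (my own illustration, not something from the thread or from Toro) of a raw Linux syscall on AArch64: the syscall number goes in x8, arguments in x0..x5, and svc #0 traps from EL0 into the kernel at EL1. The hvc instruction plays the same role one level up, trapping from EL1 into a hypervisor at EL2.

    /* Hedged sketch: an EL0 program asking EL1 for I/O via the Linux/AArch64
       syscall convention. write() is syscall number 64 on arm64. */
    static long raw_write(int fd, const void *buf, unsigned long len)
    {
        register long x8 __asm__("x8") = 64;        /* __NR_write on arm64 */
        register long x0 __asm__("x0") = fd;        /* arg0: file descriptor */
        register long x1 __asm__("x1") = (long)buf; /* arg1: buffer */
        register long x2 __asm__("x2") = len;       /* arg2: length */

        __asm__ volatile("svc #0"                   /* trap from EL0 into EL1 */
                         : "+r"(x0)
                         : "r"(x8), "r"(x1), "r"(x2)
                         : "memory", "cc");
        return x0;                                  /* result, or -errno */
    }

    int main(void)
    {
        const char msg[] = "hello from EL0\n";
        raw_write(1, msg, sizeof msg - 1);          /* fd 1 = stdout */
        return 0;
    }

Every one of these trap instructions is one of the "holes" discussed below, which is exactly why their number matters.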
Everyone is happy, until the complexity of EL1 grows out of control and becomes a huge attack surface, difficult to secure and easy to exploit from EL0. Not good. The naive solution? Go a level above and create a layer that constrains EL1, or rather, run multiple, per-application EL1s and punch some holes through for them to still be able to do their job; that is, create a hypervisor. But then, as those vaguely defined "holes", also known as system calls and hypercalls, multiply, won't the attack surface grow right along with them?
Or in other words, with the user space shifting to EL1, will our hypervisor become the operating system, just like docker-compose became a dynamic linker?
Containers got popular at a time when an increasing number of people were finding it hard to install software locally on their systems - especially if you were, for instance, having to juggle multiple versions of Ruby or Python, each linked against different major versions of C libraries.
Unfortunately containers have always had an absolutely horrendous security story and they degrade performance by quite a lot.
The hypervisor is not going away anytime soon - it is what the entire public cloud is built on.
While you are correct that containers do add more layers - unikernels go the opposite direction and actively remove those layers. Also, imo the "attack surface" is by far the smallest security benefit - other architectural properties, such as the complete lack of an interactive userland, are far more beneficial when you consider what an attacker actually wants to do after landing on your box (e.g. run their software).
When you deploy to AWS you have two layers of Linux - one that AWS runs and one that you run - but you don't really need that second layer, and you can have much faster/safer software without it.
Linux containers you mean.
The story is quite different in HP-UX, AIX, Solaris, BSD, Windows, IBM i, z/OS, ...
> This results in them moving up a layer, creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
I think you're over-egging the pudding. In reality, you're unlikely to use more than two types of container host (local dev and normal deployment, maybe), so I think we've moved well beyond square one. Config is normally very similar, just expressed differently, and being able to encapsulate dependencies removes a ton of headaches.
Bryan Cantrill, "Unikernels are unfit for production". [0]
[0]: https://www.tritondatacenter.com/blog/unikernels-are-unfit-f...
Toro provides a GDB stub so there has been a little progress since that time.
There's a man who hasn't tried running qubes-mirage-firewall.
Unikernels don't work for him; there are many of us who are very thankful for them.
> there are many of us who are very thankful for them.
Why? Can you explain, in light of the article, for those of us who may not be familiar with qubes-mirage-firewall?
I use LXD + LXC; wondering if this is worth trying, or if the access overhead (network, etc.) would be too much to deal with / care about.
Also, I'm always a little wary of projects that have bad typos or grammar problems in a README - in particular on one of the headings (though it's possible these are on purpose?). But that's just me :\
I don't want the observability of my applications to be bound to the applications themselves; that's a real pain. I'm all for microvm images without excess dependencies, but coupling the kernel and diagnostic tools to rapidly evolving application code can become a real nightmare as soon as the sun stops shining.
I've been using unikraft (https://unikraft.org/) unikernels for a while and the startup times are quite impressive (easily sub-second for our Rust application).
What drove you to choose that over something like containers?
Shorter cold-boot times.
(2020) currently; the project seems to have been around since 2011 (https://news.ycombinator.com/item?id=3288786), although at a few different domains (torokernel.org, torokernel.io).
I wonder how it compares to https://mirage.io/
Isn't Mirage OCaml only?
It's written in... Pascal...
Neat.
My last name is finally on the front page of HN as a project name, look mah!
I was not expecting Pascal; that's an interesting choice. One thing I do like is that Free Pascal has one of the better ways of making GUIs, while seemingly every other language has decided that just letting JavaScript build UIs is the way.
Oh holy crap that's actually super cool. One of the first languages I (tried to) learn ... at 13. And failed.
Now I write JavaScript and SQL.
:)
What's the use case for this rather than containers? Separation from the hypervisor kernel?
Containers (Docker/Podman) are still not as secure as virtualization (QEMU, KVM, Proxmox).
Plus, these might be smaller and run faster than containers too.
Yeah, it's a fairy tale.
Presumably to avoid the cost of context switches or copying between kernel/user address spaces? Looks to be the opposite of userspace networking like DPDK: kernel space application programming.
It can be much faster, and much smaller surface area for attacks than using a full Linux kernel.
It is using QEMU's network stack; I'd like to know how performant it is.
Reminds me of actors: they share messages between kernels over a bus.
File sharing seems complex too.
It would be good to see a benchmark or something showing where it shines.
I think one way unikernels can be different is that they allow more isolation, or can run user-generated code inside the unikernel with proper isolation, whereas I don't think actors can do that.