Gentoo is the best! Once you get the hang of creating a bootable system and feel comfortable painting outside the lines, it feels like Linux From Scratch, just without needing to manually build everything. I automated building system images with just podman (to build the rootfs) and qemu (test-boot & write the rootfs, foreign-arch emulation) and basically just build new system images once a week w/ CI for all my hardware + rsync to update. Probably one of the coolest things I’ve ever built; at this point I’m effectively building my own Linux distro from source and it’s all defined in Containerfiles! I have such affection for the Gentoo team for enabling this project. It was shocking to discover how little they operate on; I’m definitely setting up a recurring donation.
I think it is a great learning opportunity, but after using Gentoo for a decade or so, I prefer Arch these days. So if you want to learn more about Linux and its ecosystems, go for it, do it for a few months or years.
That said, I haven't tried Gentoo with binaries from official repositories yet. Maybe that makes it less time-consuming to keep your system up to date.
Been happily and very successfully using the official binpkgs, it works really well, sometimes there's a slight delay for the binary versions of the source packages to appear in the repositories, but that's about it. I guess it's kind of running Arch, but with portage <3! And the occasional compilation because your use flags didn't really match the binaries
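For anyone curious, opting into the official binhost is a small config change. A sketch from memory (check the Gentoo binary package guide for the current sync-uri for your arch and profile):

```shell
# /etc/portage/make.conf
FEATURES="getbinpkg binpkg-request-signature"

# /etc/portage/binrepos.conf/gentoobinhost.conf
# (recent stage3 tarballs already ship a version of this file)
[binhost]
priority = 9999
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/23.0/x86_64/
```

After that, emerge pulls a binpkg whenever its USE flags match yours and falls back to compiling from source otherwise, which matches the parent's experience.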
For me, the most underrated takeaway here is the state of RISC-V support.
While other distributions are struggling to bootstrap their package repositories for new ISAs and waiting for build farms to catch up, Gentoo's source-based nature makes it architecture-agnostic by definition. I applaud the riscv team for having achieved parity with amd64 for the @system set. This proves that the meta-distribution model is the only scalable way to handle the explosion of hardware diversity we are seeing post-2025. If you are building an embedded platform or working on custom silicon, Gentoo is a top-tier choice. You cross-compile the stage1 and portage handles the rest.
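The cross-compile path described here is roughly crossdev plus its cross-emerge wrapper. A sketch (the target tuple is an example; details are in the Gentoo Embedded/crossdev docs):

```shell
# on the amd64 build host
emerge sys-devel/crossdev
crossdev --target riscv64-unknown-linux-gnu    # builds the binutils/gcc/libc cross toolchain

# crossdev installs a cross-emerge wrapper that targets the sysroot
# under /usr/riscv64-unknown-linux-gnu by default
riscv64-unknown-linux-gnu-emerge @system
```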
While I was always a source-based/personalized-distribution personality type, this is also a big part of why I moved to Gentoo in early 2004 (for amd64, not RISC-V / other embedded per your example). While the Pentium IV's very deep pipelines and compiler-flag sensitivities (and the name itself for the fastest Penguin) drove the for-speed perception of the compile-just-for-my-system style, it really plays well to all customization/configuration hacker mindsets.
That is a fantastic historical parallel. The early amd64 days were arguably Gentoo's killer app moment. While the binary distributions were wrestling with the logistical nightmare of splitting repositories and figuring out the /lib64 vs /lib standard, Gentoo users just changed their CHOST, bootstrapped and were running 64-bit native. You nailed the psychology of it, too. The speed marketing was always a bit of a red herring. The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool. It respects the user's intelligence rather than abstracting it away.
Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
> Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
I'm another one on it since the same era :)
In general stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is limited nowadays compared to 10y ago - pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any. Maybe once a month I get package failures, but I run an llvm+libcxx system and also package tests, so I likely get more issues than the average user on GCC.
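Spelled out, that kind of routine is roughly the following (a sketch; the depclean/dispatch-conf steps are common practice rather than part of the quoted command):

```shell
emerge --sync                              # refresh the ebuild tree
emerge -uDN @world --quiet --keep-going    # deep update, honor changed USE flags, skip past failures
emerge --depclean --ask                    # remove dependencies nothing needs anymore
dispatch-conf                              # review pending config-file updates
```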
For me these days it's not about the speed anymore of course, but really the customization options and the ability to build pretty much anything I need locally. I also really like the fact that ebuilds are basically bash scripts, and if I need to further customize or reproduce something I can literally copy-paste commands from the package manager in my local folder.
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's odd to run into internal compiler errors, weird dependency issues, whole-world rebuilds, etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools, etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like stuff compiled with clang 19 vs clang 21), but it's really normally not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE for a new package I'm installing.
> The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool.
I tried Gentoo around the time that OP started using it, and I also really liked that aspect of it. Most package managers really struggle with this, and when there is configuration, the default is usually "all features enabled". So, when you want to install, say, ffmpeg on Debian, it pulls in a tree of over 250 (!!) dependency packages. Even if you just wanted to use it once to convert a .mp4 container into .mkv.
To be fair, it was not that difficult to create a pure 64-bit binary distro, and there were a few of them. The real issue was figuring out how to do mixed 32/64-bit, and this is where the fight about /lib directories originated. In a pure 64-bit distro the only way to run 32-bit binaries was to create a chroot with a full 32-bit installation. It took a while before better solutions were agreed on. This was the era of Flash and Acrobat Reader - all proprietary and all 32-bit only, so people really cared about 32-bit.
gcc 3.3 to 3.4 was a big thing, and could cause some issues if people didn't follow the upgrade procedures, and also many C++ codebases would need minor adjustments. This has been much, much less of a problem since.
Additionally, Gentoo has become way more strict with USE flag dependencies, and it also checks whether binaries depend on old libs and doesn't remove those when updating a package, so the "app depends on old libstdc++" problem doesn't happen anymore. It then automatically removes the old libs once nothing needs them.
I have been running Gentoo since before '04, continuously, and things pretty much just work. I would be willing to bet money that I spend less time "managing my OS" than most who run other systems such as macOS, Windows, Debian, etc. Sure, my CPU gets to compile a lot, but that's about it.
And yes, the "--omg-optimize" was never really the selling point, but rather the USE flags, where there's complete control. Pretty much nothing else comes close, and it is why Gentoo is awesome.
All distributions are source-based and bootstrapped from source. They just default to binary packages (while offering source packages), whereas Gentoo defaults to source packages (but still has binary packages). There's literally no advantage to Gentoo here. What you're saying doesn't even make logical sense.
Other distros don't support Risc-V because nobody has taken the time to bother with it because the hardware base is almost nonexistent.
> The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.
It's crazy how projects this large and influential can get by on so little cash. Of course a lot of people are donating their very valuable labour to the project, but the ROI from Gentoo is incredible compared to what it costs to do anything in commercial software.
This is, in a way, why it's nice that we have companies like Red Hat, SUSE and so on. Even if you might not like their specific distros for one reason or another, they've found a way to make money in a way where they contribute back for everything they've received. Most companies don't do that.
Red Hat contributes to a broad spectrum of Linux packages, drivers, and of course the kernel itself [1].
One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
Why should Red Hat be expected to contribute to Gentoo? A distro is funded by its own users. What distro directly contributes to another distro if it’s not a derivative or something?
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
As others mentioned, Red Hat (and SUSE) has been amazing for the overall Linux community. They give back far more than what the GPL requires them to. Nearly every one of their paid "enterprise" products has a completely free and open source version.
For example:
- Red Hat Identity Management -> FreeIPA (i.e. Active Directory for Linux)
- Red Hat Satellite -> The Foreman + Katello
- Ansible ... Ansible.
- Red Hat OpenShift -> OKD
- And more I'm not going to list.
It looks like they're second to Intel, at least by the LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
Intel has long been a big contributor--mostly driver stuff as I understand it. (Intel does a lot more software work than most people realize.) Samsung was pretty high on the list at one point as well. My grad school roommate (now mostly retired though he keeps his hand in) was in the top 10 individual list at one point--mostly for networking-related stuff.
SUSE/openSUSE is innovating plenty of stuff which other distros find worth imitating; e.g. CachyOS and Omarchy, as Arch derivatives, felt that openSUSE-style btrfs snapshots were pretty cool.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled, i.e. they have rolling release (Tumbleweed), delayed rolling release (Slowroll) which is pretty unique in and of itself, point release (Leap), and then both Tumbleweed and Leap are available in immutable form as well (MicroOS, and Leap Micro respectively), and all of the aforementioned with a broad choice of desktops or as server-focused minimal environments with an impressively small footprint without making unreasonable tradeoffs. ...if you multiply out all of those choices it gives you, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so, not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, and it was between Red Hat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has a lot of ex-Red Hatters at high levels these days. Their CEO ran Asia-Pacific for a long time and North America commercial sales for a shorter period.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
Red hat certainly burns a lot of money in service of horrifyingly bad people. It's nice we get good software out of it, but this is not a funding model to glorify. And of course american businesses not producing open source is the single most malignant force on the planet.
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We better make the switch; we don't want our ::checks notes:: competitor mad at us.
I don't know that Red Hat is a positive force. They seem to be on a crusade to make the Linux desktop incomprehensible to the casual user, which I suppose makes sense when their bread and butter depends on people paying them to fix stuff, instead of fixing it themselves.
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly is it that they’re doing to the Linux desktop that make it so that people can’t fix their own problems? Isn’t the whole selling point of Rocky and Alma by most integrators is that it’s so easy you don’t need red hat to support it?
I think it's fair to say that Red Hat simply doesn't care about the desktop--at least beyond internal systems. You could argue the Fedora folks do to some degree but it's just not a priority and really isn't something that matters from a business perspective at all.
Can you name a company which does care about the linux desktop? Over the years i’m pretty sure redhat contributed a great deal to various desktop projects, can’t think of anyone who contributed more.
Well Red Hat did make a go at a supported enterprise desktop distro for a time and, as I wrote, Fedora--which Red Hat supports in a variety of ways for various purposes--is pretty much my default Linux distro.
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
It's not just systemd, though. You have to look at the whole picture, like the design of GNOME or how GTK is now basically a GNOME-only toolkit (and if you dare point this out on Reddit, ebassi may go ballistic). They take more and more control over the ecosystem and singularize it for their own control. This is also why I see "Wayland is the future", in part, as a means to leverage away even more control. The situation is not the same, as xorg-server is indeed mostly just maintenance work by a few heroes such as alanc, but Wayland is primarily, IMO, an IBM/Red Hat project. Lo and behold, GNOME was the first to mandate Wayland and abandon Xorg, just as it was the first to slap systemd down into the ecosystem too.
The usual semi conspiratorial nonsense. GNOME is only unusable to clickers that are uncomfortable with any UI other than what was perfected by windows 95. And Wayland? Really? Still yelling at that cloud?
I expect people will stop yelling about Wayland when it works as reliably as X, which is probably a decade away. I await your "works for me!" response.
I don't get your point. People regularly complain that Wayland has lots of remaining issues and there are always tedious "you're wrong because it works perfectly for me!" replies, as if the fact that it works perfectly for some people means that it works perfectly for everyone.
These days Wayland is MUCH smoother than X11 even with an Nvidia graphics cards. With X11, I occasionally had tearing issues or other weird behavior. Wayland fixed all of that on my gaming PC.
NixOS is anything but a light abstraction (I say this as a NixOS user).
Tbh it feels like NixOS is convenient in a large part because of systemd and all the other crap you have to wire together for a usable (read compatible) Linux desktop. Better to have a fat programming language, runtime and collection of packages which exposes one declarative interface.
Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Redhat seems to be really helping with amplifying the downsides by providing the money to make a few mediocre tools absurdly big tho.
I'm sorry but this is just completely disconnected from reality. Wayland is being successfully used every single day. Just because you don't like something doesn't mean it's inherently bad.
This was exactly what I was going to comment on. Why are they not spending more money?? I don't even know what they should spend it on, but like.. it's Gentoo! I would have thought they'd pay the core devs something?
It would be interesting to have a more accurate estimate of the effective cost of maintaining Gentoo. Say 100 core developers spend 10h/week, and 380 external contributors 2h/week; that's well over 40 FTE, and at $150K per FTE that's $6 million a year.
...is Gentoo large and influential these days? As far as I'm aware, its current cultural status is that of a punchline, but I'm open to being corrected.
The issue is that gentoo isn’t very popular in the industry. If it catches on with a few well funded tech companies, then it’s easy to get $10k or so from each one in sponsorships at conferences.
2025 I switched to nixos and will probably stay. I used gentoo for like 20 years. Its the distro of my heart.
On some notebooks, which were getting on in years, updating was simply too resource-intensive. GHC alone, for example, often took 12+ hours to compile on the older machines.
I tried to list available packages on NixOS and nix-env consumed more than 6 GB of RAM. Everyone told me not to use nix-env; everyone except the NixOS manual. Trying to understand the NixOS environment is a deep rabbit hole.
The Nix documentation is what drove me away from it years ago when I tried. I ended up landing on GNU Guix, where I have been for about 5 years now. I found the OS documentation to be much nicer (info pages!) and the decades of Scheme documentation makes the language easier to pick up too.
Been using Gentoo since 2004 on all my machines. They won me over after I started playing around with their Unreal Tournament demo ISO.
The game changer for me was using my NAS as a build host for all my machines. It has enough memory and cores to compile on 32 threads. But a full install from a stage3 on my ageing Thinkpad X13 or SBCs would fry the poor things and just isn't feasible to maintain.
I have systemd-nspawn containers for the different microarchitectures and mount their /var/cache/binpkgs and /etc/portage dirs over NFS on the target machines. The Thinkpad can now do an empty tree emerge in like an hour and leaving out the bdeps cuts down on about 150 packages.
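A minimal sketch of that client-side wiring (the hostname and export paths are made up; the build host's containers would set FEATURES="buildpkg" so packages land in /var/cache/binpkgs):

```shell
# /etc/fstab on the Thinkpad (example host/export names)
nas:/exports/x13/binpkgs   /var/cache/binpkgs  nfs  ro,noatime  0 0
nas:/exports/x13/portage   /etc/portage        nfs  ro,noatime  0 0

# then on the client, prefer the prebuilt packages
emerge -uDN @world --usepkg=y
```

Sharing /etc/portage over NFS keeps make.conf, USE flags, and keywords identical between the build container and the target, so the binpkgs always match.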
Despite being focused on OpenRC, I have had the most pleasant experience with systemd on Gentoo over all the other distros I've tried.
I'm so interested to learn more about this. Do you still run all your emerge commands on the thinkpad? What's the benefit of mounting /etc/portage over nfs?
I have this dream of moving all my ubuntu servers to gentoo but I don't have a clear enough picture of how to centralize management of a fleet of gentoo machines
Gentoo has many smart people. Having said that, I can't help but feel that ever since the rise of Arch, Gentoo has lost a lot of ground. This may not be primarily due to Arch, but it kind of felt that way to me. I feel the Gentoo devs should really look at its main competitors such as Void or Arch, IMO. These seem to be more like a modern Gentoo, even if they are different and have a different focus too.
Neither Void nor Arch is a "modern Gentoo". Gentoo is its own thing. If anything, Gentoo's closest "competitors" in terms of OS customisation would be NixOS or Guix, not Void or Arch, but Gentoo is forging its own path; it doesn't need to follow any other distro.
Arch is the reason I didn't choose Gentoo for my latest build. It's convenient and "good enough" for all my use-cases. Gentoo gives you the feeling of being fully connected to the computer like no other OS - the kind that leaves you nostalgic - but it also requires a time commitment.
I used Gentoo for ten years (2005–2015), and I was very happy with it! Stable was not the word I would use, in that updating frequently broke and required manual intervention. But it was so flexible! The easily accessible options one has for choosing everything about the system is unparalleled in any system I have tried since. I would still use it if I had more tinkering time. These days I am on NixOS, mostly to have the same setup on every machine I use.
I think Gentoo is very stable, but you have to make use of revdep-rebuild and know what you are doing (meaning: it is easy to shoot yourself in the foot).
Hah, same! NixOS is perfect for me; I love the declarative aspect. But Portage is far-and-away the best traditional package manager I've ever used. It's truly phenomenal.
What Gentoo really needs is an official immutability mechanism like ostree used by Fedora Silverblue or ZFS/btrfs snapshots of the root/boot volumes. This way the ever-experimental nature of the distro would be compensated by having an easy mechanism to rollback to previous known-good builds.
I haven't used it in years, but when I was first using Linux I used Gentoo for a long time. Building Gentoo from scratch really helped me learn a lot and probably more quickly than dual-booting a system like I had been. I'll always have a soft spot for Gentoo.
I used to run Gentoo like 14 years ago! It remains one of the fastest distros I've seen for the specific hardware it was running on (high-core-count 4-socket AMD Opteron servers), and I mostly attributed that to the fact it was compiling everything (even the base OS in this case!) for that specific CPU at install time... emerge would build/compile, and if you set your USE flags correctly it produced heavily tailored and optimized binaries. I feel like a staged/graduated approach (downloading/running precompiled initially while a flag-optimized compile runs in the background) would be a good way to get around some of the downsides here (namely that it takes 45 minutes to install Firefox with emerge vs. pacman, and that builds fail more often than packages fail to install).
Very cool to see that it's still going strong - I remember managing many machines at scale was a bit of a challenge, especially keeping ahead of vulnerabilities.
I saw a comment in a "I moved from Windows to Linux" thread implying Windows has more configuration potential than Linux. I wonder what that commenter would make of Gentoo.
I wish I had more time I could dedicate to maintaining my system, I'm marooned on Arch due to lack of time, such a shame.
Impressive recap! The work on RISC-V images, Gentoo for WSL, and EAPI 9 really shows how adaptable Gentoo is.
I’m curious about the trend of fewer commits and bug reports—do you think it’s just natural stabilization, or are contributors slowing down? Also, the move from GitHub to Codeberg is bold; how is the community reacting to that change so far?
Would love to hear more about how new contributors are finding the transition and onboarding with these updates.
I used Gentoo back in 2003. It’s nice to see that it’s still going strong. I don’t have as much free time now, so it’s not the distro for me, but perhaps when I retire I will come back to it.
I have not directly used Gentoo in years. It was chosen so I could learn, maximize system performance, and have proper AMD64 support before the other distros supported the new CPU specs. Gentoo also had the best documentation in those years.
Id Software provided a Doom 3 Linux client when the game was first released. I found Doom 3 ran better on a custom built Gentoo Linux system compared to Windows XP.
Are you looking at Gentoo to maximize performance by compiling everything with custom build parameters and kernel configuration, versus pre-built binaries and a generic kernel loaded with modules?
Custom Gentoo just adds more time from having to wait on software upgrades to compile. It is like having all your Arch packages provided only by the AUR. There is also a chance the build will fail and the parameters might need to be changed. The majority of the time everything compiles without issue once the build parameters are figured out; it was rare when something did not.
Technically, just a kernel optimized for your CPU, realtime patches, NTSync and a custom Mesa build (with -O2 and -march set to your CPU) would give you a good boost, instead of trying to recompile everything.
In my experience (this was about 5 years ago mind you) it was no more complex than an arch installation, but with a smaller community and less documentation.
General administration is similar to Arch or any other regular distro. Package updates necessarily take longer because of recompiling but that's just CPU time. There are precompiled versions of big popular binaries (open office, Firefox, etc) that allow you to save a lot of time if you want.
Where you lose time is in trying to optimize your system and packages using the multiple switches that Gentoo provides. If you're the OCD twiddler type, Gentoo can be both extremely satisfying and major time sink.
I don't understand the time sink. Isn't spending time knowing intricate details about your system a good thing? You know better than most if you've gone that deep.
TLDR: Installation is a pain, initial configuration is a pain and there's always something more to tweak, update is a lesser pain, but still a pain. But it's fun, BDSM-style...
Installation is done by booting a liveCD, manually partitioning your storage, unpacking a Gentoo STAGE3 archive, chrooting in it, doing basic configuration such as network, timezone, portage (package manager) base profile and servers, etc., compiling and installing a kernel and then rebooting into the new system.
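Condensed to commands, that flow looks something like this (heavily abbreviated from the Handbook; device names and the kernel choice are examples):

```shell
# from the live environment
cfdisk /dev/sda                            # partition the disk
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt/gentoo
tar xpf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner -C /mnt/gentoo
mount --rbind /dev /mnt/gentoo/dev         # plus /proc, /sys, /run per the Handbook
chroot /mnt/gentoo /bin/bash
emerge-webrsync                            # fetch a portage tree snapshot
emerge sys-kernel/gentoo-kernel-bin        # or configure and compile your own
# ...then fstab, bootloader, root password, reboot
```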
Then you get to play with /etc/portage/make.conf which is the root configuration of the package manager. You get to set CPU instruction sets (CPU_FLAGS), gcc CFLAGS flags, MAKE flags, video card targets, acceptable package licenses, global USE flags (those are simplified ./configure arguments that usually apply to several packages), which Apache modules get built, which qemu targets get built, etc. These are all env vars that portage (the package manager) uses to build packages for your system.
The more you use Gentoo, the more features of make.conf you discover. Never ending fun.
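An illustrative make.conf touching the knobs mentioned above (values are examples, not recommendations; CPU_FLAGS_X86 is normally generated with app-portage/cpuid2cpuflags):

```shell
# /etc/portage/make.conf
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"                               # parallel build jobs
CPU_FLAGS_X86="aes avx avx2 sse4_1 sse4_2"   # from cpuid2cpuflags
VIDEO_CARDS="amdgpu radeonsi"
ACCEPT_LICENSE="@FREE"
USE="wayland pipewire -ldap -telemetry"      # global USE defaults
QEMU_SOFTMMU_TARGETS="x86_64 aarch64"        # which qemu targets get built
```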
Then, you start installing packages and updates (same procedure):
1) You start the update by reviewing USE flags for each added/updated package - several screens of dense text.
For example, PHP has these USE flags: https://packages.gentoo.org/packages/dev-lang/php - mouse hover to see what they do. You get to play with them in /etc/portage/package.use and there's no end to tweaking them.
If you have any form of OCD, stay away from Gentoo or this will be your poison forever!
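Per-package tweaks then go into package.use; for example, for the PHP flags linked above (the flag choices here are just an illustration):

```shell
# /etc/portage/package.use/php
dev-lang/php  mysqli pdo sockets -ldap -snmp
```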
2) Then the compilation begins and that takes hours or days depending on what you install and uses a lot of CPU and either storage I/O or memory (if you have lots of memory, you can compile in a tmpfs a lot faster).
I'm not sure it is OK to compile the updates on a live server, especially during busy hours, but Gentoo has alternatives, including binary packages (recently added, but must match your USE flags with theirs), building packages remotely on another system (distcc), even on a different arch (crossdev). You could run an ARM server and build packages for it on a x86 workstation. I didn't use "steve", so I can't tell you what wonderful things that tool can do, yet.
3) Depending on architecture, some less used packages may fail to compile. You get to manually debug that and submit bug reports. You can also add patches to /etc/portage/patches/<package> that will automatically be applied when the package is built, and that includes the kernel.
I recommend running emerge with --keep-going so the package manager continues with the remaining packages after an error.
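The user-patch mechanism from step 3 is just a directory convention (the package name here is hypothetical):

```shell
# drop the patch where portage looks for it
mkdir -p /etc/portage/patches/app-misc/example-tool
cp fix-gcc15-build.patch /etc/portage/patches/app-misc/example-tool/
emerge -1 app-misc/example-tool    # patch is applied automatically during src_prepare
```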
4) When each package is done compiling, it's installed automatically. There are no automatic reboots or anything. The files are replaced live, both executables and libraries. Running services continue to use old files from memory until you restart them or reboot manually - they will appear red/yellow in htop until you do.
There were a few times, very very few, when I had crashes in new packages that were successfully built. It only happened on armv7, which is a practically abandoned platform everywhere. In those cases you can revert to the old version and mask the bugged one to prevent it from being pulled in next time.
5) Last step is to review the config changes. dispatch-conf will present a diff of all proposed changes to .ini and .cfg files for all updated packages. You get to review, accept, reject the changes or manually edit the files.
"Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg."
Reading this while doing emerge @world on my personal workstation, and preparing a fresh annual portage cut for our IT infrastructure (some 600+ VMs, 400+ bare metal servers), running Gentoo.
I used Gentoo from 2006 for a decade or more and loved it. Later I got more into embedded systems and low compute hardware and flirted with other distros. Gentoo is still running on my server but desktop and notebook are now on more conventional distros.
Did you document this somewhere? I'm interested to know more
Nah, first time I’ve mentioned it anywhere. Happy to answer questions, if there’s interest maybe this could be my reason for a first blog post.
I would also be very interested in reading that blog post!
For me, the most underrated takeaway here is the state of RISC-V support.
While other distributions are struggling to bootstrap their package repositories for new ISAs and waiting for build farms to catch up, Gentoo's source-based nature makes it architecture-agnostic by definition. I applaud the riscv team for having achieved parity with amd64 for the @system set. This proves that the meta-distribution model is the only scalable way to handle the explosion of hardware diversity we are seeing post 2025. If you are building an embedded platform or working on custom silicon, Gentoo is a top tier choice. You cross-compile the stage1 and portage handles the rest.
Fedora and Debian have been shipping RISC-V versions of stable releases for a while. I don't think anyone is really struggling.
While I was always a source-based/personalized distribution personality type, this is also a big part of why I moved to Gentoo in early 2004 (for amd64, not RISC-V / other embedded per your example). While the Pentium IV's very deep pipelines and compiler flag sensitivities (and the name itself: the gentoo is the fastest penguin) drove the for-speed perception of the compile-just-for-my-system style, it really plays well to all customization/configuration hacker mindsets.
That is a fantastic historical parallel. The early amd64 days were arguably Gentoo's killer app moment. While the binary distributions were wrestling with the logistical nightmare of splitting repositories and figuring out the /lib64 vs /lib standard, Gentoo users just changed their CHOST, bootstrapped and were running 64-bit native. You nailed the psychology of it, too. The speed marketing was always a bit of a red herring. The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool. It respects the user's intelligence rather than abstracting it away.
Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
> Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
I'm another one on it since the same era :)
In general stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is limited nowadays compared to 10y ago - pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any; maybe once a month I get package failures, but I run an llvm+libcxx system and also enable package tests, so likely I get more issues than the average user on GCC.
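Spelled out, that monthly routine looks roughly like the following sketch (all flags are real emerge options; `--keep-going` is what lets the run continue past individual package failures so they can be fixed afterwards):

```shell
# sync the tree, then rebuild anything with new versions or changed USE flags
emerge --sync
emerge -uDN @world --quiet --keep-going

# rebuild consumers of upgraded libraries, then remove orphaned packages
emerge @preserved-rebuild
emerge --depclean --ask
```

The last two steps are what keep "app depends on old libstdc++"-style breakage from accumulating between updates.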
For me these days it's not about the speed anymore of course, but really the customization options and the ability to build pretty much anything I need locally. I also really like the fact that ebuilds are basically bash scripts, and if I need to further customize or reproduce something I can literally copy-paste commands from the package manager in my local folder.
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's odd to run into internal compiler errors, weird dependency issues, whole-world rebuilds, etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools, etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like stuff compiled with clang 19 vs clang 21), but it's really normally not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE for a new package I'm installing.
> so likely I get more issues than the average user on GCC.
It's been years since I had a build failure, and I even accept several packages on ~amd64 (with GCC).
> The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool.
I tried Gentoo around the time that OP started using it, and I also really liked that aspect of it. Most package managers really struggle with this, and when there is configuration, the default is usually "all features enabled". So, when you want to install, say, ffmpeg on Debian, it pulls in a tree of over 250 (!!) dependency packages. Even if you just wanted to use it once to convert a .mp4 container into .mkv.
To be fair, it was not that difficult to create a pure 64-bit binary distro, and there were a few of them. The real issue was figuring out how to do mixed 32/64-bit, and this is where the fight about /lib directories originated. In a pure 64-bit distro the only way to run 32-bit binaries was to create a chroot with a full 32-bit installation. It took a while before better solutions were agreed upon. This was the era of Flash and Acrobat Reader - all proprietary and all 32-bit only, so people really cared about 32-bit.
gcc 3.3 to 3.4 was a big thing, and could cause some issues if people didn't follow the upgrade procedures; many C++ codebases also needed minor adjustments. This has been much less of a problem since.
Additionally, Gentoo has become way more strict with USE flag dependencies, and it also checks whether binaries depend on old libraries and doesn't remove those when updating a package, so the "app depends on old libstdc++" problem doesn't happen anymore. It then automatically removes the old libraries once nothing needs them.
I have been running Gentoo since before '04, continuously, and things pretty much just work. I would be willing to put money on spending less time "managing my OS" than most who run other systems such as macOS, Windows, Debian, etc. Sure, my CPU gets to compile a lot, but that's about it.
And yes, the "--omg-optimize" angle was never really the selling point, but rather the USE flags, where there's complete control. Pretty much nothing else comes close, and it is why Gentoo is awesome.
All distributions are source-based and bootstrapped from source. Most default to binary packages (while offering source packages) whereas Gentoo defaults to source packages (but still has binary packages). There's literally no advantage to Gentoo here. What you're saying doesn't even make logical sense.
Other distros don't support RISC-V because nobody has taken the time to bother with it, because the hardware base is almost nonexistent.
> The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.
It's crazy how projects this large and influential can get by on so little cash. Of course a lot of people are donating their very valuable labour to the project, but the ROI from Gentoo is incredible compared to what it costs to do anything in commercial software.
This is, in a way, why it's nice that we have companies like Red Hat, SUSE and so on. Even if you might not like their specific distros for one reason or another, they've found a way to make money in a way where they contribute back for everything they've received. Most companies don't do that.
Contribute back how and where? Definitely not to Gentoo if we look at the meagre numbers here.
Red Hat contributes to a broad spectrum of Linux packages, drivers, and of course the kernel itself [1].
One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
[1] https://lwn.net/Articles/915435/
Why should Red Hat be expected to contribute to Gentoo? A distro is funded by its own users. What distro directly contributes to another distro if it’s not a derivative or something?
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
As others mentioned, Red Hat (and SUSE) has been amazing for the overall Linux community. They give back far more than what the GPL requires them to. Nearly every one of their paid "enterprise" products has a completely free and open source version.
For example:
Presumably, contribute to the entire ecosystem in terms of package maintenance and other non-monetary forms.
Red Hat employs a significant number of GCC core devs.
Red Hat contributes a huge amount to the open source ecosystem. They're one of the biggest contributors to the Linux kernel (maybe the biggest).
https://insights.linuxfoundation.org/project/korg/contributo...
It looks like they're second to Intel, at least by LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
Intel has long been a big contributor--mostly driver stuff as I understand it. (Intel does a lot more software work than most people realize.) Samsung was pretty high on the list at one point as well. My grad school roommate (now mostly retired though he keeps his hand in) was in the top 10 individual list at one point--mostly for networking-related stuff.
Yes, that would be nice, but when I look at their GRUB src.rpm for instance, some of the patches look original but actually came from Debian.
Back in the day when the boxes were on display in brick-and-mortar stores, SuSE was a great way to get up and running with Linux.
SuSE/openSUSE innovates plenty of stuff which other distros find worth imitating; e.g. CachyOS and omarchy as Arch derivatives felt that openSUSE-style btrfs snapshots were pretty cool.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled, i.e. they have rolling release (Tumbleweed), delayed rolling release (Slowroll) which is pretty unique in and of itself, point release (Leap), and then both Tumbleweed and Leap are available in immutable form as well (MicroOS, and Leap Micro respectively), and all of the aforementioned with a broad choice of desktops or as server-focused minimal environments with an impressively small footprint without making unreasonable tradeoffs. ...if you multiply out all of those choices it gives you, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so, not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, and it was between RedHat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has a lot of ex-Red Hatters at high levels these days. Their CEO ran Asia-Pacific for a long time and North America commercial sales for a shorter period.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
I've found openSUSE MicroOS to be a great homelab server OS.
SuSE slowroll is news to me, thanks.
The OpenSUSE Tumbleweed installation on my desktop PC is nearing 2 years now and still rolling. It is a great and somewhat underrated distribution.
Red Hat certainly burns a lot of money in service of horrifyingly bad people. It's nice we get good software out of it, but this is not a funding model to glorify. And of course American businesses not producing open source are the single most malignant force on the planet.
> Red hat certainly burns a lot of money in service of horrifyingly bad people.
Red Hat also has a nasty habit of pushing their decisions onto the other distributions; e.g.
- systemd
- pulseaudio (this one was more Fedora IIRC)
- Wayland
- Pipewire (which, to be fair, wasn't terrible by the time I tried it)
Pushing their decisions? This is comical.
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We better make the switch, we don't want our ::checks notes:: competitor mad at us.
systemd and friends go around absorbing other projects by (poorly) implementing a replacement and then convincing the official project to give up.
I don’t know where they come from, but I try to avoid all in that list. To be fair, audio is a train wreck anyway.
Eh, pulseaudio got a lot better, and pipewire "just works" at this point (at least for me). Even Bluetooth audio works OOTB most of the time.
Pipewire rocks. Wayland is half-baked and a disaster on legacy systems. systemd... OpenRC is good enough, and it never fails at shutdown.
It's difficult to infer what kind of nuts is going on here.
If we're going to socialize production, let's do it properly.
I don't know that Red Hat is a positive force. They seem to be on a crusade to make the Linux desktop incomprehensible to the casual user, which I suppose makes sense when their bread and butter depends on people paying them to fix stuff, instead of fixing it themselves.
You don’t know they are a positive force?
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly is it that they’re doing to the Linux desktop that make it so that people can’t fix their own problems? Isn’t the whole selling point of Rocky and Alma by most integrators is that it’s so easy you don’t need red hat to support it?
Just a note: Rocky and Alma came out of CentOS
I think it's fair to say that Red Hat simply doesn't care about the desktop--at least beyond internal systems. You could argue the Fedora folks do to some degree but it's just not a priority and really isn't something that matters from a business perspective at all.
Can you name a company which does care about the Linux desktop? Over the years I'm pretty sure Red Hat contributed a great deal to various desktop projects; I can't think of anyone who contributed more.
Well Red Hat did make a go at a supported enterprise desktop distro for a time and, as I wrote, Fedora--which Red Hat supports in a variety of ways for various purposes--is pretty much my default Linux distro.
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
Off the top of my head System76 jumps to mind with their hardware and Pop!_OS.
> Can you name a company which does care about the linux desktop?
To some extent Valve. They have to, since the Steam Deck's desktop experience depends on the "Linux desktop" being a good experience.
Fedora is probably the best out-of-the-box desktop experience. Red Hat does great things, even if the IBM acquisition has screwed things up.
I find systemd pleasant for scheduling and running services but enraging in how much it has taken over every other thing in an IMO subpar way.
It's not just systemd, though. You have to look at the whole picture, like the design of GNOME or how GTK is now basically a GNOME-only toolkit (and if you dare point this out on reddit, ebassi may go ballistic). They keep taking more and more control over the ecosystem and singularizing it for their own benefit. This is also why I see the "wayland is the future" push, in part, as a means to leverage away even more control. The situation is not the same, as xorg-server is indeed mostly just in maintenance mode under a few heroes such as Alanc, but wayland is primarily, IMO, an IBM/Red Hat project. Lo and behold, GNOME was the first to mandate wayland and abandon xorg, just as it was the first to slap systemd into the ecosystem too.
The usual semi conspiratorial nonsense. GNOME is only unusable to clickers that are uncomfortable with any UI other than what was perfected by windows 95. And Wayland? Really? Still yelling at that cloud?
I expect people will stop yelling about Wayland when it works as reliably as X, which is probably a decade away. I await your "works for me!" response.
It’s very fair you can say “X works for me” but everyone saying otherwise is in the wrong.
I don't get your point. People regularly complain that Wayland has lots of remaining issues and there are always tedious "you're wrong because it works perfectly for me!" replies, as if the fact that it works perfectly for some people means that it works perfectly for everyone.
These days Wayland is MUCH smoother than X11 even with an Nvidia graphics cards. With X11, I occasionally had tearing issues or other weird behavior. Wayland fixed all of that on my gaming PC.
It’s even more pleasant when you use a distro that natively uses systemd and provides light abstractions on top. One such example is NixOS.
NixOS is anything but a light abstraction (I say this as a NixOS user).
Tbh it feels like NixOS is convenient in a large part because of systemd and all the other crap you have to wire together for a usable (read compatible) Linux desktop. Better to have a fat programming language, runtime and collection of packages which exposes one declarative interface.
Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Red Hat seems to be really amplifying the downsides, though, by providing the money to make a few mediocre tools absurdly big.
Red Hat pushing for the disaster that is Wayland has set the Linux Desktop back decades.
It is the Microsoft of the Linux world.
Why is Wayland a disaster? Most of the Linux community is strongly in favor of it.
I'm sorry but this is just completely disconnected from reality. Wayland is being successfully used every single day. Just because you don't like something doesn't mean it's inherently bad.
OTOH, not having money also comes with upsides, like not having overpaid CEOs, managers, marketing people, or distracting side projects.
That’s a 20 million dollar problem, but plenty of projects would be better with a few hundred thousand to pay staff and infra.
Our society at its current state will not allow that, however, as it is seen as more important to do stock buybacks and increasing executive pay.
This was exactly what I was going to comment on. Why are they not spending more money?? I don't even know what they should spend it on, but like.. it's Gentoo! I would have thought they'd pay the core devs something?
What money? Doesn't sound like they have anything extra?
Yeah, especially when a CSS library makes $1M a year. I guess they have no incentive to improve funding.
It would be interesting to have a more accurate estimate of the effective cost of maintaining Gentoo. Say 100 core developers spend 10h/week, and 380 external contributors 2h/week; that's well over 40 FTE, and at $150K per FTE that's $6 million a year.
...is Gentoo large and influential these days? As far as I'm aware, its current cultural status is that of a punchline, but I'm open to being corrected.
Gentoo's Portage build system is (or at least was?) part of Google's ChromeOS
Gentoo also runs the backend infra of Sony's Playstation Cloud gaming service
Anecdotal evidence claims it used to also run the NASDAQ
Highly unlikely that PSN runs Gentoo. They're using AWS.
I've no idea if Sony uses Gentoo or not, but you can definitely run Gentoo on AWS
ChromeOS is based on Gentoo.
Yes, Gentoo is like NixOS, sort of a meta-distribution.
Being the base of ChromeOS makes it highly influential.
ChromeOS market share is >5% in many countries, sometimes around double digits.
Also curious of Gentoo's influence in 2026.
The issue is that gentoo isn’t very popular in the industry. If it catches on with a few well funded tech companies, then it’s easy to get $10k or so from each one in sponsorships at conferences.
ChromeOS uses Gentoo as a base. That doesn't seem to have helped get them any Google money.
This is a remarkably small number given that Gentoo Portage is load bearing infrastructure under ChromeOS.
And the NASDAQ[0]
[0] https://www.pcworld.com/article/481872/how_linux_mastered_wa...
just typical corporate open source bloodsucking
In 2025 I switched to NixOS and will probably stay. I used Gentoo for like 20 years. It's the distro of my heart.
With some notebooks, some of which were getting on in years, it was simply too resource-intensive to keep up to date. GHC alone, for example, often took 12+ hours to compile on the older ones.
I tried to list available packages on NixOS and nix-env consumed more than 6 GB of RAM. Everyone told me not to use nix-env; everyone except the NixOS manual. Trying to understand the NixOS environment is a deep rabbit hole.
The Nix documentation is what drove me away from it years ago when I tried. I ended up landing on GNU Guix, where I have been for about 5 years now. I found the OS documentation to be much nicer (info pages!) and the decades of Scheme documentation makes the language easier to pick up too.
Been using Gentoo since 2004 on all my machines. They won me over after I started playing around with their Unreal Tournament demo ISO.
The game changer for me was using my NAS as a build host for all my machines. It has enough memory and cores to compile on 32 threads. But a full install from a stage3 on my ageing Thinkpad X13 or SBCs would fry the poor things and just isn't feasible to maintain.
I have systemd-nspawn containers for the different microarchitectures and mount their /var/cache/binpkgs and /etc/portage dirs over NFS on the target machines. The Thinkpad can now do an empty tree emerge in like an hour and leaving out the bdeps cuts down on about 150 packages.
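A sketch of what the client side of such a binhost setup can look like (hostnames and paths here are hypothetical; the build containers on the NAS would write packages with `FEATURES="buildpkg"` into the exported directory):

```shell
# /etc/fstab on the target machine — mount the NAS's package cache read-only
# nas:/export/x13/binpkgs   /var/cache/binpkgs   nfs   ro,defaults   0 0

# /etc/portage/make.conf on the target machine:
PKGDIR="/var/cache/binpkgs"
# prefer prebuilt packages from PKGDIR, compile locally only as a fallback
EMERGE_DEFAULT_OPTS="--usepkg"
```

Sharing /etc/portage the same way keeps USE flags identical on both sides, so the prebuilt packages actually match what the client would have built itself.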
Despite being focused on OpenRC, I have had the most pleasant experience with systemd on Gentoo over all the other distros I've tried.
I'm so interested to learn more about this. Do you still run all your emerge commands on the thinkpad? What's the benefit of mounting /etc/portage over nfs?
I have this dream of moving all my ubuntu servers to gentoo but I don't have a clear enough picture of how to centralize management of a fleet of gentoo machines
Gentoo has many smart people. Having said that, I can't help but feel that ever since the rise of Arch, Gentoo lost a lot of ground. This may not be primarily due to Arch, but it kind of felt that way to me. I feel that the Gentoo devs should really look at its main competitors such as Void or Arch, IMO. These seem to be more like a modern Gentoo, even if they are different and have a different focus too.
Neither Void nor Arch is a "modern Gentoo". Gentoo is its own thing. If anything, Gentoo's closest "competitors" in terms of OS customisation would be NixOS or Guix, not Void or Arch; but Gentoo is forging its own path, and it doesn't need to follow any other distro.
I have heard rumors that at one point Gentoo lost its forums, basically a catastrophic blow comparable to the Arch Linux wiki being deleted.
Arch is the reason I didn't choose Gentoo for my latest build. It's convenient and "good enough" for all my use-cases. Gentoo gives you the feeling of being fully connected to the computer like no other OS - the kind that leaves you nostalgic - but it also requires a time commitment.
Really hope I can return to Gentoo soon. It was just the most stable and most hacker friendly distro Ive ever used. Hats off to all the contributors!
I used Gentoo for ten years (2005–2015), and I was very happy with it! Stable was not the word I would use, in that updating frequently broke and required manual intervention. But it was so flexible! The easily accessible options one has for choosing everything about the system is unparalleled in any system I have tried since. I would still use it if I had more tinkering time. These days I am on NixOS, mostly to have the same setup on every machine I use.
I think Gentoo is very stable, but you have to make use of revdep-rebuild and know what you are doing (meaning: it is easy to shoot yourself in the foot).
I've been on Gentoo for my gaming desktop for like 2-3 years now and I don't think I've ever had an update break anything.
I will say though that my valgrind is broken due to -march=native. :)
Hah, same! NixOS is perfect for me; I love the declarative aspect. But Portage is far-and-away the best traditional package manager I've ever used. It's truly phenomenal.
What Gentoo really needs is an official immutability mechanism like ostree used by Fedora Silverblue or ZFS/btrfs snapshots of the root/boot volumes. This way the ever-experimental nature of the distro would be compensated by having an easy mechanism to rollback to previous known-good builds.
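Until something official exists, one can approximate this by hand on a btrfs root (subvolume names and layout here are purely illustrative):

```shell
# snapshot the root subvolume before running an update
btrfs subvolume snapshot -r / /.snapshots/pre-update-2026-01-01

# if the update breaks, boot rescue media, mount the toplevel volume,
# and replace the root subvolume with a writable copy of the snapshot:
#   btrfs subvolume snapshot /mnt/.snapshots/pre-update-2026-01-01 /mnt/@
```

An official mechanism would presumably hook this into emerge itself, the way openSUSE ties snapper to zypper transactions.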
I haven't used it in years, but when I was first using Linux I used Gentoo for a long time. Building Gentoo from scratch really helped me learn a lot and probably more quickly than dual-booting a system like I had been. I'll always have a soft spot for Gentoo.
I used to run gentoo like 14 years ago! It remains one of the fastest distros I've seen for the specific hardware it was running on (high core count 4-socket AMD Opteron servers) and I mostly attributed that to the fact it was compiling everything (even the base OS in this case!) for that specific CPU at install time... emerge would build/compile, and if you set your USE flags correctly it produced heavily tailored and optimized binaries. I feel like a staged/graduated approach (downloading/running precompiled binaries initially while a flag-optimized compile runs in the background) would be a good way to get around some of the downsides here (namely that it takes 45 minutes to install Firefox with emerge versus seconds with pacman, and that builds fail more often than packages fail to install).
Very cool to see that it's still going strong - I remember managing many machines at scale was a bit of a challenge, especially keeping ahead of vulnerabilities.
45 minutes hah, it used to take us three days to build kde ;)
Interesting name for the job server: "steve"
Thanks, steve!
I saw a comment in a "I moved from Windows to Linux" thread implying Windows has more configuration potential than Linux. I wonder what that commenter would make of Gentoo.
I wish I had more time I could dedicate to maintaining my system, I'm marooned on Arch due to lack of time, such a shame.
Impressive recap! The work on RISC-V images, Gentoo for WSL, and EAPI 9 really shows how adaptable Gentoo is. I’m curious about the trend of fewer commits and bug reports—do you think it’s just natural stabilization, or are contributors slowing down? Also, the move from GitHub to Codeberg is bold; how is the community reacting to that change so far? Would love to hear more about how new contributors are finding the transition and onboarding with these updates.
I used Gentoo back in 2003. It’s nice to see that it’s still going strong. I don’t have as much free time now it’s not the distro for me, but perhaps when I retire I will come back to it.
How easy is it to administer gentoo servers? Is it on-par with nix/arch or harder?
I have not directly used Gentoo in years. It was chosen so I could learn, maximize system performance, and have proper AMD64 support before the other distros supported the new CPU specs. Gentoo also had the best documentation in those years.
Id Software provided a Doom 3 Linux client when the game was first released. I found Doom 3 ran better on a custom built Gentoo Linux system compared to Windows XP.
Are you looking at Gentoo to maximize performance by compiling everything with custom build parameters and kernel configuration, versus pre-built binaries and a generic kernel loaded with modules?
Custom Gentoo just adds more time in having to wait for software upgrades to build. It is like having all your Arch packages provided only by the AUR. There is also a chance the build will fail and the parameters might need to be changed. The majority of the time everything compiles without issue once the build parameters are figured out; it was rare when something did not.
Technically, just a kernel optimized for your CPU, realtime patches, NTSYNC, and a custom Mesa build (with -O2 and -march set to your CPU) would give a good boost instead of trying to recompile everything.
In my experience (this was about 5 years ago mind you) it was no more complex than an arch installation, but with a smaller community and less documentation.
General administration is similar to Arch or any other regular distro. Package updates necessarily take longer because of recompiling but that's just CPU time. There are precompiled versions of big popular binaries (open office, Firefox, etc) that allow you to save a lot of time if you want.
Where you lose time is in trying to optimize your system and packages using the multiple switches that Gentoo provides. If you're the OCD twiddler type, Gentoo can be both extremely satisfying and major time sink.
I don't understand the time sink. Isn't spending time knowing intricate details about your system a good thing? You know better than most if you've gone that deep.
TLDR: Installation is a pain, initial configuration is a pain and there's always something more to tweak, update is a lesser pain, but still a pain. But it's fun, BDSM-style...
Installation is done by booting a liveCD, manually partitioning your storage, unpacking a Gentoo STAGE3 archive, chrooting in it, doing basic configuration such as network, timezone, portage (package manager) base profile and servers, etc., compiling and installing a kernel and then rebooting into the new system.
Then you get to play with /etc/portage/make.conf, which is the root configuration of the package manager. You get to set CPU instruction sets (CPU_FLAGS_X86 and friends), GCC CFLAGS, MAKEOPTS, video card targets, acceptable package licenses, global USE flags (those are simplified ./configure arguments that usually apply to several packages), which Apache modules get built, which qemu targets get built, etc. These are all variables that portage (the package manager) uses to build packages for your system.
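To give a flavor of it, here is an illustrative subset of a make.conf — every value is an example, not a recommendation:

```shell
# /etc/portage/make.conf — illustrative subset only
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"
CPU_FLAGS_X86="aes avx avx2 sse4_1 sse4_2"   # see app-portage/cpuid2cpuflags
VIDEO_CARDS="amdgpu radeonsi"
ACCEPT_LICENSE="@FREE"
USE="wayland pipewire -systemd"
APACHE2_MODULES="authz_core dir mime rewrite"
QEMU_SOFTMMU_TARGETS="x86_64 aarch64"
```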
The more you use Gentoo, the more features of make.conf you discover. Never ending fun.
Then, you start installing packages and updates (same procedure):
1) You start the update by reviewing USE flags for each added/updated package - several screens of dense text.
For example, PHP has these USE flags: https://packages.gentoo.org/packages/dev-lang/php - mouse hover to see what they do. You get to play with them in /etc/portage/package.use and there's no end to tweaking them.
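A per-package entry in that file looks like this (the flag choices below are just examples):

```shell
# /etc/portage/package.use/php — package.use can be a file or a directory
dev-lang/php mysqli pdo curl -ldap
```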
If you have any form of OCD, stay away from Gentoo or this will be your poison forever!
2) Then the compilation begins and that takes hours or days depending on what you install and uses a lot of CPU and either storage I/O or memory (if you have lots of memory, you can compile in a tmpfs a lot faster).
I'm not sure it is OK to compile updates on a live server, especially during busy hours, but Gentoo has alternatives, including binary packages (recently added; they must match your USE flags with theirs), building packages remotely on another system (distcc), even for a different arch (crossdev). You could run an ARM server and build packages for it on an x86 workstation. I didn't use "steve", so I can't tell you what wonderful things that tool can do, yet.
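Opting into the official binary packages is roughly this — the mirror URL and release path below are examples, so check the Gentoo binary package guide for current values:

```shell
# /etc/portage/make.conf
FEATURES="getbinpkg binpkg-request-signature"

# /etc/portage/binrepos.conf — URL is an example, pick a mirror near you
[gentoobinhost]
priority = 1
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/23.0/x86_64/
```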
3) Depending on architecture, some less-used packages may fail to compile. You get to manually debug that and submit bug reports. You can also add patches to /etc/portage/patches/&lt;category&gt;/&lt;package&gt; that will automatically be applied when the package is built, and that includes the kernel.
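The user-patch layout is just a directory tree; portage picks up any *.patch files it finds there (paths below are made-up examples):

```shell
# /etc/portage/patches/<category>/<package>/*.patch, e.g.:
/etc/portage/patches/sys-kernel/gentoo-sources/my-fix.patch
# a version-specific directory also works:
/etc/portage/patches/net-misc/curl-8.5.0/example.patch
```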
I recommend running emerge with --keep-going so that after an error the package manager continues with the remaining packages.
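A typical world update with that flag might look like this (flag combination is one common choice, not the only one):

```shell
# continue past individual build failures instead of aborting the whole run
emerge --ask --verbose --update --deep --newuse --keep-going @world
```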
4) When each package is done compiling, it's installed automatically. There are no automatic reboots or anything. The files are replaced live, both executables and libraries. Running services continue to use old files from memory until you restart them or reboot manually - they will appear red/yellow in htop until you do.
There were a few times, very very few, when I had crashes in new packages that were successfully built. It only happened on armv7, which is a practically abandoned platform everywhere. In those cases you can revert to the old version and mask the bugged one to prevent it from being installed again.
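Masking a broken version is a one-line entry — the package atom below is made up for illustration:

```shell
# /etc/portage/package.mask/workarounds — file name is arbitrary
=media-libs/libfoo-1.2.3
# then revert with something like: emerge --oneshot '<media-libs/libfoo-1.2.3'
```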
5) The last step is to review the config changes. dispatch-conf will present a diff of all proposed changes to protected configuration files (mostly under /etc) for all updated packages. You get to review, accept, or reject the changes, or manually edit the files.
That's all. Simple. :)
That's a very well painted picture for what to expect. I am gonna try it soon, since it's been on my task list for too long. Thanks :)
Been running it for over 20 years:
https://blog.nawaz.org/posts/2023/May/20-years-of-gentoo/
Prior HN discussion: https://news.ycombinator.com/item?id=35989311
Edit: Curious, why the downvote?
I'm more amazed you've run the same machine for 20 years. I'm another 20+ year user, but I've reinstalled 5-6 times when changing laptops.
No, I changed the machine, but just installed Gentoo every time. I merely kept the emerge.logs from each machine.
> why the downvote?
I can see no reason for it.
Odd this came up as I am considering revisiting Gentoo. I might have to take this as a sign.
"Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg."
Reading this while doing emerge @world on my personal workstation, and preparing a fresh annual portage cut for our IT infrastructure (some 600+ VMs, 400+ bare metal servers), running Gentoo.
Any more information on the Github move (away)? While the AI features of github are annoying, I've so far been able to completely ignore them.
I still send PRs for ::gentoo to their GitHub mirror; I would be surprised if they shut this off.
I used Gentoo from 2006 for a decade or more and loved it. Later I got more into embedded systems and low compute hardware and flirted with other distros. Gentoo is still running on my server but desktop and notebook are now on more conventional distros.
From the announcement, it looks like a lot of unnecessary philosophical moves and not much innovation. I like innovative Linux, but that's just my opinion.