The last release to support 32-bit x86 hardware for popular distros was:
Distro | Release | Support | Extended Support
-------------|---------|---------|------------------
SLES 11 | 2009-03 | 2019-03 | 2022-03 | 2028-03
RHEL 6 | 2010-11 | 2019-08 | 2024-06 | 2029-05
Arch | 2017-11 | *Ongoing releases via unofficial community project
Ubuntu 18.04 | 2018-04 | 2023-05 | 2028-04 | 2030-04
Fedora 31 | 2019-10 | 2020-11 | N/A
Slackware 15 | 2022-02 | Ongoing, this is the most recent release
Debian 12 | 2023-06 | 2026-06 | 2028-06
Gentoo | Ongoing
By the time Firefox 32-bit is dropped, all the versioned distros will be past their general support date and into extended support, leaving Gentoo, Arch32, and a handful of smaller distros. Of course, there are also folks running a 64-bit kernel with 32-bit Firefox to save memory.
Seems reasonable by Mozilla, to me, given precedents like the new Debian release not doing 32-bit release builds.
And doing security updates on ESR for a year is decent. (Though people using non-ESR stream builds of Firefox will much sooner have to downgrade to ESR, or be running with known vulnerabilities.)
If it turns out there's a significant number of people who really want Firefox on 32-bit x86, would it be viable for non-Mozilla volunteers to fork the current ESR or main stream, do bugfixes, backport security fixes, and distribute that unofficial or rebranded build?
What about volunteers trying to keep the main stream development backported? Or is that likely to become prohibitively hard at some point? (And if likely to become too hard, is it better to use that as a baseline going forward with maintenance, or to use the ESR as that baseline?)
>[Updated on 2025-09-09 to clarify the affected builds are 32-bit x86]
That's nice... When this was originally posted on 09-05 it just mentioned "32-bit support", so I'd been worried this would be the end of me using FF on a Microsoft Surface RT (armv7, running Linux).
Does this mean they are deleting a bunch of code, or just that people will have to compile it manually? I'd imagine there is a lot of 32-bit specific code, but how much of that is 32-bit-Linux specific code?
I'm honestly surprised just about anything supports 32-bit these days.
It's fine to keep hosting the older versions for download, and pointing users to it if they need it. But other than that, I see 0 reason to be putting in literally any effort at all to support 32-bit. It's ancient and people moved on like what, well over a decade and a half ago?
If I were in charge I'd have dropped active development for it probably 10 years ago.
I don't know if you can draw any good conclusions from that on a Linux install.
Seems just as likely that after your switch to a 64-bit install, some 32-bit libraries it depended on went missing.
Linux distros don't tend to be as obsessive about maintaining full 32-bit support as the Windows world.
A better test would be to fire up a 32 bit VM and see if Firefox 32 bit crashed there...
If they made it as far as being able to lose something they were working on, then it's less likely to have been a missing library problem. But I don't know what it was; people do successfully run Firefox on 32-bit Linux.
Firefox does have problems with old profiles, though. I could easily see crud building up there. I don't think Firefox is very good about clearing it out (unless you do the refresh profile thing). You could maybe diagnose it with about:memory, if you were still running that configuration.
If the libraries were missing entirely, I'm not sure 32-bit Firefox would even start. But if they were present and nothing was keeping them updated (quite likely on an otherwise 64-bit system), they'd probably fall out of date -- which could certainly explain spurious crashes.
Fair point, although Firefox also launches subprocesses, and I don't know if those use same libraries as the main process. And I also don't know if it dynamically loads supporting libs after launch.
It's Debian, it can handle 32 bit and 64 bit applications at the same time, and the package manager makes sure you have all the dependencies.
I didn't change libraries, it was a gradual switch where you convert applications to 64 bit - and I didn't think to do Firefox, but it wasn't missing 32 bit libraries.
It was simply a profile that I'd been continuously using since approximately 2004, and it was probably too large to fit in a 32-bit address space anymore, or maybe Firefox itself needed more memory and couldn't map it. (The system had a 64-bit kernel, so it wasn't low on RAM, but 32-bit apps are limited to 2/3/4GB.)
Possible I suppose. You can restrict Firefox memory usage in the config. Perhaps its dynamic allocation was getting confused by what was available on the 64-bit machine? Still, why would any 32-bit app even try to allocate more than it could actually handle?
I dunno. I'm still inclined to think missing libs (or out of date libs) - but hard to say without a bit more detail on the crash. Did anything show up in .xsession-errors / stderr ? Were you able to launch it in a clean profile? Were the crashes visible in about:crashes for the profile when launched in 64 bit? I suppose it doesn't matter too much at this point...
Did you attach the debugger and see what it was crashing on?
From when I used to work on performance and reliability at Mozilla, these types of user-specific crashes were often caused by a faulty system library or anti-virus-like software doing unstable injections/hooks. Any kind of frequent crash was also easier to reproduce and, as a result, fix.
I understand this might seem unlikely given that it's working fine as 64-bit, but that crash dump makes me want to suggest running a memory tester. It has "Possible bit flips max confidence" of 25%. Ignore the exact percentage, I don't think it means much, but nonzero is atypical and concerning. (Definitely not a proof of anything!)
"It would crash in random spots" is another piece of evidence. Some legitimate problems really do show up in a variety of ways, but it's way more common for "real" problems to show up in recognizably similar ways.
And having a cluster in graphics stuff can sometimes mean it's triggered by heat or high bus traffic.
I'll admit I'm biased: I work at Mozilla, and I have looked at quite a few bugs now where there were good reasons on the reporter's side as to why it couldn't possibly be bad RAM, and yet the symptoms made no sense, and it ended up being bad RAM. Our hardware is surprisingly untrustworthy these days. But I also happen to be working on things where bad RAM problems are likely to show up, so my priors are not your priors.
OpenBSD i386 user here, Atom N270. Anyone who says it's useless...
Slashem, cdda:bn, mednafen, Bitlbee, catgirl, maxima+gnuplot, ecl with common lisp, offpunk, mutt, aria2c, mbsync, nchat, mpv+yt-dlp+streamlink, tut, dillo, mupdf, telescope plus gemini://gemi.dev and gopher://magical.fish ... work uber fast. And luakit does okish with a single tab.
Seems sensible. Only 2.6% of users (with telemetry enabled) are using 32-bit Windows, while 6.4% are using 32-bit Firefox on 64-bit Windows[0]. 32-bit Linux might see more use but isn't included in the stats; only 5.3% of users are running Linux[1], and I doubt many enable telemetry.
Maybe they could also drop support for older x86_64 CPUs, releasing more optimised builds. Most Linux distributions are raising their baseline to x86-64-v2 or higher, and most Firefox users (>90%)[0] seem to meet at least the x86-64-v2 requirements.
[0]: https://firefoxgraphics.github.io/telemetry/#view=system
[1]: https://firefoxgraphics.github.io/telemetry/#view=general
That seems like a lot of people to abandon! Perhaps the right financial decision, I don't know, but that seems like a significant number of users.
They aren't ending support for 32-bit Windows. If the ratio of 32/64 bit users on Linux matched those on Windows, then this would affect 0.5% of their users.
Does this mean that 32-bit Linux users will be able to run more up-to-date versions using Wine?
Abandon is too strong a word. I imagine most people who are still using 32 bit operating systems aren't too concerned about getting the very latest version of firefox either.
They might not be concerned, but websites using new standards will slowly start to break for them.
Polyfills are standard for websites to be compatible with older browsers.
It will all break in time.
These things that look like institutions, that look like bricks carved from granite, are just spinning plates that have been spinning for a few years.
When I fight glibc dependency hell across Ubuntu 22 and Ubuntu 24, I sympathize with Firefox choosing to spin the 64-bit plates and not the 32-bit plates.
If I were a product decision maker, I’d be ok with that. It’d have to be a very unusual niche to make it worth the engineering effort to support customers who only run decades-old hardware.
Employees: “We want to use new feature X.”
Boss: “Sorry, but that isn’t available for our wealthy customers who are stuck on Eee PCs.”
Nah.
within those numbers are people who don't really have a preference one way or another, and just didn't bother to upgrade. I have to imagine that the group of people that must use 32-bit and need modern features is vanishingly small.
I would bet a lot of those folks are running embedded linux environments. Kiosks, industrial control, signage, NUCs, etc. I know that as of about 6 years ago I was still working with brand-new 32-bit celeron systems for embedded applications. (Though those CPUs were EOL'd and we were transitioning to 64-bit)
6 years ago was 2019. You were working in 2019 with "brand new 32-bit-only Celerons" which had no 64 bit support?!
Nah mate, something doesn't add up. I can't buy this. Even the cheapest Atoms had 64-bit support much earlier than that, and Atoms were lower-tier silicon than Celeron, so you can't tell me Intel had brand new 32-bit-only Celerons in 2019.
My Google-fu found the last 32-bit-only chips Intel shipped were the Intel Quark embedded SoCs, EOL'd in 2015. So what you're saying doesn't pass the smell test.
May have been 2018. Definitely not that long before covid. Suppliers in the embedded space will stockpile EOL parts for desperate integrators such as ourselves, and can continue to supply new units for years after they're discontinued. The product needed a custom linux kernel compile and it took a while to get that working on 64-bit and we had to ship new units. Yes the COGS get ridiculous.
Sure, but in that case it probably wasn't a Celeron, and there's industrial players still keeping 386 systems alive for one reason or another, but it feels in bad faith to call it "brand new" when it's actually "~10 year old, new old stock". Do you know what I mean?
I don't understand what the distinction/problem is. It's a new-in-box a la "brand new". You're really getting tripped up over semantics?
My point is this stuff is still in play in a lot of places.
Maybe it's the language barrier since I'm not a native English speaker, but where I'm from the phrase "brand new" means something different: something that just came onto the market very recently, not something that came onto the market 10+ years ago but was never opened from the packaging. That no longer means "brand new"; it means "old but never used/opened". Very different things.
So when you tell me "brand new 32 bit Celeron" it is understood as "just came onto the market".
Am I right or wrong with this understanding?
>My point is this stuff is still in play in a lot of places.
I spent ~15 years in embedded and can't concur on the "still in play in a lot of places" part, but I'm not denying some users might still exist out there. However, I'm sure we can probably count them on very few fingers, since Intel's 32-bit embedded chips never had much traction to begin with.
I've never understood 'brand new' to imply anything about freshness. But according to Merriam-Webster it means both, in different but very similar contexts.
https://www.merriam-webster.com/dictionary/brand-new
The distinction in English might be more in "new" versus "used". And yes, that is inconsistent, you would think "new" versus "old" and "used" versus "unused". But alas :)
The term in this case is "new old stock:"
As in, a product that was manufactured, kept in its original packaging, and "unopened and unused".
(Although there's some allowances for the vendor to test because you don't want to buy something DOA.)
(Although I won't get too angry for someone saying "brand new." "New old stock" is kind of an obscure term that you don't come across unless you're the kind of person who cares about that kind of thing.)
I think that’s the right way to look at it. If you want a 32 bit system to play with as a hobby, you know you’re going to bump into roadblocks. And if you’re using a 20 year old system for non-hobby stuff, you already know there’s a lot of things that don’t work anymore.
>And if you’re using a 20 year old system for non-hobby stuff, you already know there’s a lot of things that don’t work anymore.
Mate, a 20-year-old system means a Pentium 4 Prescott or an Athlon 64, both of which had 64-bit support. And a year after that we already had dual-core 64-bit CPUs.
So if you're stuck on 32 bit CPUs then your system is even older than 20 years.
That was a transitional time. Intel Core CPUs launched as 32 bit in 2006, and the first Intel Macs around then used them. OS X Lion dropped support for 32 bit Intel in 2011.
So you could very well have bought a decent quality 32 bit system after 2005, although the writing was on the wall long before then.
Maybe more relevantly, the first Atom CPUs were 32-bit only and were used in the popular netbooks (eeepc etc) during 2008-2010ish era.
>So you could very well have bought a decent quality 32 bit system after 2005, although the writing was on the wall long before then.
Not really. With the launch of the Athlon 64, AMD basically replaced all their 32-bit CPU lineups with the new arch, rather than keeping them around much longer as lower-tier parts. By 2005 I expect 90% of new PCs sold were already 64-bit ready.
> By 2005 I expect 90% of new PCs sold were already 64 bit ready.
You're several years off:
"The FIRST processor to implement Intel 64 was the multi-socket processor Xeon code-named Nocona in June 2004. In contrast, the initial Prescott chips (February 2004) did not enable this feature."
"The first Intel mobile processor implementing Intel 64 is the Merom version of the Core 2 processor, which was released on July 27, 2006. None of Intel's earlier notebook CPUs (Core Duo, Pentium M, Celeron M, Mobile Pentium 4) implement Intel 64."
https://en.wikipedia.org/wiki/X86-64#Intel_64
"2012: Intel themselves are limiting the functionality of the Cedar-Trail Atom CPUs to 32bit only"
https://forums.tomshardware.com/threads/no-emt64-on-intel-at...
Intel had 80% of the CPU market at the time.
Intel didn’t do that by 2005, though. MacBooks weren’t the single most popular product line, but they weren’t exactly eMachines.
MacBook market share around 2005 was too small for it to matter in the PC CPU statistics.
"Modern features" are one thing; "security updates" are another. According to the blog post, security updates are guaranteed for 1 year.
It's an actual migration to a new platform, more than just not bothering to upgrade, though.
Some people use older tech precisely because it is physically incapable of facilitating some unpalatable tech that they don't require.
Mozilla is in extremely dire straits right now, so unless this "lot of people" make a concerted donation effort to keep the lights on I would be hardly shocked by the sunsetting.
Dire straits? They had $826.6M in revenue in 2024.
They will be in dire straits if the Google money goes away for some reason, but right now they have plenty of money.
(not that I think it makes any sense for them to maintain support for 32-bit cpus)
> Mozilla is in extremely dire straits right now, so unless this "lot of people" make a concerted donation effort
Last I checked, Mozilla was an ad company with Google as the main "donor".
I’d have to agree. I doubt there are that many (in relative terms) people browsing the web on 32-bit CPUs and expecting modern experiences. I’ve gotta imagine it would be pretty miserable, what with the inherent RAM limitations on those older systems, and I’m sure JavaScript engines aren’t setting speed records on Pentium 4s.
Yeah consumer CPUs have been 64-bit since what, the PowerPC G5 (2003), Athlon 64 (2003), and Core 2 (2006)? There were a few 32-bit x86 CPUs that were released afterward, but they were things like Atoms which were quite weak even for the time and would be practically useless on the modern internet.
More generally I feel that Core 2 serves as a pretty good line in the sand across the board. It’s not too hard to make machines of that vintage useful, but becomes progressively challenging with anything older.
For what it's worth, people may have been running 64-bit CPUs, but many were still on 32-bit OSes. I was on 32-bit XP until I upgraded to 64-bit Win7.
I have to imagine that group to be pretty small by now, though. Most PCs with specs good enough to still be useful now run W7 64-bit about as well as they do XP SP3 (an old C2D box of mine feels great under 7, as an anecdote), and for those who’ve been running Linux on these boxes there’s not really much reason to go for a 32-bit build over a 64-bit one.
I mentioned elsewhere, but Apple started selling 32-bit Core (not Core 2) MacBook Pros in 2006. Those seemed dated even at the time. I'd call them basically the last of the 32-bit premium computers from a major vendor.
Frankly, anything older than that sucks so much power per unit of work that I wouldn’t want to use them for anything other than a space heater.
Intel Prescott, so like 2004.
32 bit Atom netbook. I use offpunk, gopher://magical.fish among tons of services (and HN work straight there) and gemini://gemi.dev over the gemini protocol with Telescope as the client. Mpv+yt-dlp+streamlink complement the video support. Miserable?
Go try browsing the web without UBlock Origin today under an i3.
I haven’t tried it, but as bloated as the web is, I don’t think it’s so bad that you need gigabytes of memory or a blazing fast CPU to e.g. read a news website.
As long as you don’t open a million tabs and aren’t expecting to edit complex Figma projects, I’d expect browsing the web with a Pentium + a lightweight distro to be mostly fine.
Idk, I think this is sad. Reviving old hardware has long been one thing Linux is really great at.
Try it and come back to let us know. The modern web is incredibly heavy. Videos everywhere, tons of JavaScript, etc.
My wife had an HP Stream thing with an Intel N3060 CPU and 4GB of RAM. I warned her but it was cheap enough it almost got the job done.
Gmail's web interface would take almost a minute to load. It uses about 500MB of RAM by itself running Chrome.
Does browsing the web include checking your email? Not if you need web mail, apparently.
Check out the memory usage for yourself one of these days on the things you use daily. Could you still do them?
Huh. The reason I'm surprised is that I'm able to comfortably browse the web in virtual machines with only 1 cpu core and 2 gb of memory, even when the VM is simultaneously compiling software. This is only opening 1-2 tabs at a time, mind you.
A lightweight Linux desktop still works fine for very basic tasks with 4GB RAM, and that's without even setting up compressed RAM swap. The older 2GB netbooks and tablets might be at the end of the road, though.
I don’t think web browsing is a very basic task anymore, though. A substantial portion of new sites are React SPAs with a lot of client processing demands.
The other issue AIUI and a more pressing one, is that the browser itself is getting a lot heavier wrt. RAM usage, even when simply showing a blank page. That's what ends up being limiting for very-low-RAM setups.
Chromium has a --light switch. And, for the rest, git-clone gopher://bitreich.org/privacy-haters, look up the Unix script for the Chrome variables, and copy the --arguments stuff for the Chromium launcher under Windows. Add uBlock Origin too.
4GB of RAM should be more than enough.
I have a ThinkPad X1 Gen3 Tablet (20KK) here for my Windows needs, my daily driver is an M2 MBA, and my work machine is a 2019 16-inch MBP (although admittedly, that beast got an i9...).
Got the ThinkPad for half the eBay value at a hamfest. Made in 2018-ish, i5-8350U CPU... It's a nice thing, the form factor is awesome and so is the built-in LTE modem. The problem is, more than a dozen Chrome tabs and it slows to a crawl. Even the prior work machine, a 2015 MBP, performed better.
And yes you absolutely need a beefy CPU for a news site. Just look at Süddeutsche Zeitung, a reputable newspaper. 178 requests, 1.9 MB, 33 seconds load time. And almost all of that crap is some sort of advertising - and that despite me being an actually subscribed customer with adblock enabled on top of that.
- Install Chromium
- uBlock Origin instead of AdBlock
- git clone git://bitreich.org/privacy-haters
- Either copy the chromium file to /etc/profile.d/chromium.sh under GNU/Linux/BSD and chmod +x it, or copy the --arguments array into the desktop launcher path under Windows, such as "C:\foo\bar\chrome.exe" --huge-ass-list-of-arguments-copied-there.
Although this is HN, I would just suggest disabling JS under the uBlock Origin settings and enabling its advanced settings. Then click the uBlock Origin icon, mark the 3rd-party scripts and such in red, and leave the 1st-party images/requests enabled. Start accepting newspapers' domains and CDNs until the site works; the CPU usage will plummet.
> Maybe they could also drop support for older x86_64 CPUs, releasing more optimised builds
Question: Don't optimizers support multiple ISA versions, similar to web polyfill, and run the appropriate instructions at runtime? I suppose the runtime checks have some cost. At least I don't think I've ever run anything that errored out due to specific missing instructions.
There was a recent story about F-Droid running ancient x86-64 build servers and having issues due to lacking ISA extensions
https://news.ycombinator.com/item?id=44884709
But generally it is rare to see anything higher than x86-64-v3 as a requirement, and that works with almost all CPUs sold in the past 10+ years (Atoms being a prominent exception).
A CMPXCHG16B instruction is going to be faster than a function call; and if the function is inlined instead, there's still a binary size cost.
The last processor without the CMPXCHG16B instruction was released in 2006 so far as I can tell. Windows 8.1 64-bit had a hard requirement on the CMPXCHG16B instruction, and that was released in 2013 (and is no longer supported as of 2023). At minimum Firefox should be building with -mcx16 for the Windows builds - it's a hard requirement for the underlying operating system anyway.
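To make the cost concrete, here's a minimal sketch (my own illustration, not anything from Firefox) of the kind of 16-byte atomic in question. Depending on the compiler and on flags like -mcx16, the compare-exchange below either inlines to a LOCK CMPXCHG16B or becomes a call out to libatomic (-latomic), which is exactly the function-call overhead being discussed:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* A 16-byte value on x86-64: a pointer plus an ABA-avoidance tag. */
    typedef struct { void *ptr; unsigned long tag; } tagged_ptr;

    /* With -mcx16 this can compile down to an inline CMPXCHG16B; without it,
     * compilers typically emit a call into libatomic instead. */
    bool cas_tagged(_Atomic tagged_ptr *slot, tagged_ptr expected, tagged_ptr desired) {
        return atomic_compare_exchange_strong(slot, &expected, desired);
    }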
Let me play devil's advocate: for some reason, functions such as strcpy in glibc have multiple runtime implementations and are selected by the dynamic linker at load time.
And there's a performance cost to that. If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster. The downside would be that my compiled program would only work on CPUs with the relevant instructions.
You could also have only one implementation of strcpy and use no exotic instructions. That would also be faster for small inputs, for the same reasons.
Having multiple implementations of strcpy selected at runtime optimizes for a combination of binary portability between different CPUs and for performance on long input, at the cost of performance for short inputs. Maybe this makes sense for strcpy, but it doesn't make sense for all functions.
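For anyone unfamiliar with how that load-time selection works mechanically: glibc publishes strcpy as a GNU IFUNC, and the dynamic linker runs a resolver once to decide which implementation the symbol gets bound to. A minimal sketch of the pattern (illustrative names, not glibc's actual code):

    /* Two implementations; a real "avx2" one would use wide loads/stores,
     * here it just reuses the byte loop so the sketch stays self-contained. */
    static char *my_strcpy_generic(char *dst, const char *src) {
        char *ret = dst;
        while ((*dst++ = *src++) != '\0')
            ;
        return ret;
    }

    __attribute__((target("avx2")))
    static char *my_strcpy_avx2(char *dst, const char *src) {
        return my_strcpy_generic(dst, src);
    }

    /* The resolver runs once at load time, before ordinary relocations finish. */
    static char *(*resolve_my_strcpy(void))(char *, const char *) {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx2") ? my_strcpy_avx2 : my_strcpy_generic;
    }

    /* The exported symbol is bound to whichever pointer the resolver returned. */
    char *my_strcpy(char *dst, const char *src)
        __attribute__((ifunc("resolve_my_strcpy")));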
> my programs would execute faster
You can't really state this with any degree of certainty when talking about whole-program optimization and function inlining. Even with LTO today you're talking 2-3% overall improvement in execution time, without getting into the tradeoffs.
Typically, making it possible for the compiler to decide whether or not to inline a function is going to make code faster compared to disallowing inlining. Especially for functions like strcpy which have a fairly small function body and therefore may be good inlining targets. You're right that there could be cases where the inliner gets it wrong. Or even cases where the inliner got it right but inlining ended up shifting around some other parts of the executable which happened to cause a slow-down. But inliners are good enough that, in aggregate, they will increase performance rather than hurt it.
> Even with LTO today you're talking 2-3% overall improvement in execution time
Is this comparing inlining vs no inlining or LTO vs no LTO?
In any case, I didn't mean to imply that the difference is large. We're literally talking about a couple clock cycles at most per call to strcpy.
What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because it's not going to be defined in a header-only library; it's going to be in a different translation unit that gets either dynamically or statically linked. The only way to optimize the function call is with LTO - and in practice, LTO only accounts for 2-3% of performance improvements.
And at runtime, there is no meaningful difference between strcpy being linked at runtime or ahead of time. libc symbols get loaded first by the loader and after relocation the instruction sequence is identical to the statically linked binary. There is a tiny difference in startup time but it's negligible.
Essentially the C compilation and linkage model makes it impossible for functions like strcpy to be optimized beyond the point of a function call. The compiler often has exceptions for hot stdlib functions (like memcpy, strcpy, and friends) where it will emit an optimized sequence for the target but this is the exception that proves the rule. In practice, the benefits of statically linking in dependencies (like you're talking about) does not have a meaningful performance benefit in my experience.
(*) strcpy is weird; like many libc functions it's accessible via __builtin_strcpy in GCC, which may (but probably won't) emit a different sequence of instructions than the call to libc. I say "probably" because there are semantics undefined by the C standard that the compiler cannot reason about but the linker must support, like preloads and injection. In these cases symbols cannot be inlined, because it would break the ability of someone to inject a replacement for the symbol at runtime.
> What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because its not going to be defined in a header-only library, it's going to be in a different translation unit that gets either dynamically or statically linked.
Repeating the part of my post that you took issue with:
> If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster.
So no, I'm not talking about LTO. I'm talking about a hypothetical alternate reality where strcpy is in a glibc header so that the compiler can inline it.
There are reasons why strcpy can't be in a header, and the primary technical one is that glibc wants the linker to pick between many different implementations of strcpy based on processor capabilities. I'm discussing the loss of inlining as a cost of having many different implementations picked at dynamic link time.
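In other words, the hypothetical is something like the sketch below: a single implementation living in a header, which every call site can inline and specialize, at the cost of giving up per-CPU selection at load time. (Illustrative code, not a proposal for glibc.)

    /* A header-only strcpy: the compiler can inline this and specialize it at
     * each call site -- but the binary is then committed to this one
     * implementation on every CPU it runs on. */
    static inline char *inline_strcpy(char *restrict dst, const char *restrict src) {
        char *ret = dst;
        while ((*dst++ = *src++) != '\0')
            ;
        return ret;
    }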
Afaik runtime linkers can't convert a function call into a single non-call instruction.
Linux kernel has an interesting optimization using the ALTERNATIVE macro, where you can directly specify one of two instructions and it will be patched at runtime depending on cpu flags. No function calls needed (although you can have a function call as one of the instructions). It's a bit more messy in userspace where you have to respect platform page flags, etc. but it should be possible.
They could always just make the updater/installer install a version optimized for the CPU it's going to be installed on.
It's not that uncommon to run one system on multiple CPUs. People swap out the CPU in their desktops, people move a drive from one laptop to another, people make bootable USB sticks, people set up a system in a chroot on a host machine and then flash a target machine with the resulting image.
Detect that on launch and use the updater to reinstall.
As far as I can tell, GCC supports compiling multiple versions of a function, but can't automatically decide which functions to do that for, or how many versions to build targeting different instruction set extensions. The programmer needs to explicitly annotate each function, meaning it's not practical to do this for anything other than obvious hot spots.
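Concretely, the explicit annotation looks something like this (GCC/clang target_clones; the function and the list of targets are just an example):

    /* The compiler emits a baseline clone, an SSE4.2 clone, and an AVX2 clone
     * of this one function, plus an IFUNC resolver that picks one at load time. */
    __attribute__((target_clones("default", "sse4.2", "avx2")))
    long dot(const int *a, const int *b, long n) {
        long acc = 0;
        for (long i = 0; i < n; i++)
            acc += (long)a[i] * (long)b[i];
        return acc;
    }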
You can do that to some limited degree, but not really.
There are more relevant modern examples, but one example that I really think illustrates the issue well is floating point instructions. The x87 instruction set is the first set of floating point instructions for x86 processors, first introduced in the late 80s. In the late 90s/early 2000s, Intel released CPUs with the new SSE and SSE2 extensions, with a new approach to floating point (x87 was really designed for use with a separate floating point coprocessor, with a design that's unfortunate now that CPUs have native floating point support).
So modern compilers generate SSE instructions rather than the (now considered obsolete) x87 instructions when working with floating point. Trying to run a program compiled with a modern compiler on a CPU without SSE support will just crash with an illegal instruction exception.
There are two main ways we could imagine supporting x87-only CPUs while using SSE instructions on CPUs with SSE:
Every time the compiler wants to generate a floating point instruction (or sequence of floating point instructions), it could generate the x87 instruction(s), the SSE instruction(s), and a conditional branch to the right place based on SSE support. This would tank performance. Any performance saving you get from using an SSE instruction instead of an x87 instruction is probably going to be outweighed by the branch.
The other option is: you could generate one x87 version and one SSE version of every function which uses floats, and let the dynamic linker sort out function calls and pick the x87 version on old CPUs and the SSE version on new CPUs. This would more or less leave performance unaffected, but it would, in the worst case, almost double your code size (since you may end up with two versions of almost every function). And in fact, it's worse: the original SSE only supports 32-bit floats, while SSE2 supports 64-bit floats; so you want one version of every function which uses x87 for everything (for the really old CPUs), one version of every function which uses x87 for 64-bit floats and SSE for 32-bit floats, and you want one function which uses SSE and SSE2 for all floats. Oh, and SSE3 added some useful functions; so you want a fourth version of some functions where you can use instructions from SSE3, and use a slower fallback on systems without SSE3. Suddenly you're generating 4 versions of most functions. And this is only from SSE, without considering other axes along which CPUs differ.
You have to actively make a choice here about what to support. It doesn't make sense to ship every possible permutation of every function; you'd end up with massive executables. You typically assume a baseline instruction set from some time in the past 20 years, so you're generally gonna let your compiler go wild with SSE/SSE2/SSE3/SSE4 instructions and let your program crash on the i486. For specific functions which get a particularly large speed-up from using something more exotic (say, AVX512), you can manually include one exotic version and one fallback version of that function.
But this causes the problem that most of your program is gonna get compiled against some baseline, and the more constrained that baseline is, the more CPUs you're gonna support, but the slower it's gonna run (though we're usually talking single-digit percents faster, not orders of magnitude faster).
I consider it unlikely, but perhaps there are some instructions that don't have a practical polyfill on x86?
The only thing that comes to mind is some form of atomic instruction that needs to interact with other code in well-defined ways. I don't see how you could polyfill cmpxchg16b, for example.
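For context, the usual fallback (roughly what libatomic does when the instruction isn't available) is a lock around the 16-byte compare-and-swap, along the lines of the sketch below; it's only correct if every thread touching that location goes through the same lock, which is exactly why it can't transparently stand in for the real instruction next to genuinely lock-free code:

    /* Hedged sketch of a lock-based 128-bit compare-and-swap fallback.
     * Correct only if ALL accesses to the location use this same path;
     * it cannot interoperate with code doing real lock-free accesses. */
    #include <pthread.h>
    #include <string.h>

    typedef struct { unsigned long long lo, hi; } u128;

    static pthread_mutex_t cas16_lock = PTHREAD_MUTEX_INITIALIZER;

    int cas16_emulated(u128 *addr, u128 *expected, const u128 *desired) {
        int ok;
        pthread_mutex_lock(&cas16_lock);
        ok = (memcmp(addr, expected, sizeof *addr) == 0);
        if (ok)
            *addr = *desired;    /* swap on success */
        else
            *expected = *addr;   /* report the observed value on failure */
        pthread_mutex_unlock(&cas16_lock);
        return ok;
    }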
> 32-bit Linux might see more use
Probably less, not more. Many distros either stopped supporting 32bit systems, or are planning to. As the announcement says, that's why they're stopping support now.
Less than 2.6% of browser users (with telemetry enabled) are using Firefox. Should the web drop support for Firefox? Seems sensible. (I would hope not)
Firefox shouldn't need special support from the web; the same can't be said of architecture-specific binaries.
It'd be ~0.1% of Firefox users that use 32-bit Linux, extrapolating from e2le's statistics, not 2.6%. Have to draw the line at some point if an old platform is becoming increasingly difficult to maintain - websites today aren't generally still expected to work in IE6.
I believe even the Raspberry Pi 4B and 400 still only have first-class drivers for 32-bit?
Kiosks and desktops and whatnot on Raspberry Pis are still on 32-bit and likely to run Firefox without telemetry.
They edited the article to clarify that they're only dropping support for 32-bit x86.
Good to see
What's the threshold for minority to be ignored?
That's surprising. Why is there such a comparatively large number using 32-bit Firefox on 64-bit Windows?
Some people are under the misapprehension that 32-bit programs need less ram, which might explain that, but that's still a large number regardless.
If they are like me, they simply never realized they needed to re-install Firefox after upgrading the OS.
Mozilla should try to automate this switch where the system supports it.
Why is it reasonable? I understand that it would be financially reasonable for a commercial endeavour, but isn't Firefox more like an open source project?
Debian still supports MIPS and SPARC. Last I checked, OpenSSL is kept buildable on OpenVMS. Surely there must be a handful of people out there who care about good old x86?
If your numbers are correct, there are millions if not tens of millions of Firefox users on 32-bit. If none of them are willing to keep Firefox buildable, there must be something more to it.
Debian stopped supporting 32-bit x86 recently; Chrome did so about 9 years ago.
We carefully ran some numbers before doing this, and it affects a few hundred to a few thousand people (hard to say, ballpark), and most of those people are on 64-bit CPUs but are using a 32-bit Firefox or a 32-bit userspace.
The comparatively high ratio of 32-bit users on Windows is not naively applicable to the Linux desktop population, which migrated ages ago.
To expand a bit on that, the i386 support that was recently downgraded to "partially supported" in Debian refers to project status: bugs on unsupported architectures can't be considered release blockers, for example. The packages are still being built, are published on the mirrors, and will be for the foreseeable future, as long as enough people care to keep the port alive.
That's the specific meaning of "support" I wanted to point out. Free software projects usually don't "support" software in the commercial sense; they consider a platform supported when there are enough people to keep the build alive and up to date with changing build requirements, etc. My expectation was that Firefox was more like a free software project than a commercial product, but perhaps that is not the case?
Commercial products have to be careful not to spread their resources thin, but for open source, cause and effect run the other way around: the resources available are usually the input that decides what it is possible to support. Hence my surprise that not enough people are willing to support a platform that has thousands of users and isn't particularly exotic, especially compared to what mainstream distributions like Debian already build.
GNUinos and any Devuan-based distro still support 32-bit.
An open source project should also use its resources effectively. There is always the possibility for a community fork.
The last release to support 32-bit x86 hardware for popular distros was:
By the time 32-bit Firefox is dropped, all the versioned distros will be past their general support date and into extended support, leaving Gentoo, Arch32, and a handful of smaller distros. Of course, there are also folks running a 64-bit kernel with 32-bit Firefox to save memory.
Seems reasonable of Mozilla, to me, given precedents like the new Debian release not doing 32-bit release builds.
And doing security updates on ESR for a year is decent. (Though people using non-ESR stream builds of Firefox will much sooner have to downgrade to ESR, or be running with known vulnerabilities.)
If it turns out there's a significant number of people who really want Firefox on 32-bit x86, would it be viable for non-Mozilla volunteers to fork the current ESR or main stream, do bugfixes, backport security fixes, and distribute that unofficial or rebranded build?
What about volunteers trying to keep mainline development backported to 32-bit? Or is that likely to become prohibitively hard at some point? (And if it's likely to become too hard, is it better to use that as the baseline for ongoing maintenance, or to use the ESR as that baseline?)
>[Updated on 2025-09-09 to clarify the affected builds are 32-bit x86]
That's nice... When this was originally posted on 09-05 it just mentioned "32-bit support", so I'd been worried this would be the end of me using FF on a Microsoft Surface RT (armv7, running Linux).
Does this mean they are deleting a bunch of code, or just that people will have to compile it manually? I'd imagine there is a lot of 32-bit specific code, but how much of that is 32-bit-Linux specific code?
I'm honestly surprised just about anything supports 32-bit these days.
It's fine to keep hosting the older versions for download, and pointing users to it if they need it. But other than that, I see 0 reason to be putting in literally any effort at all to support 32-bit. It's ancient and people moved on like what, well over a decade and a half ago?
If I were in charge I'd have dropped active development for it probably 10 years ago.
That's what Google did with Chrome.
Are you sure about that? There are Android Go targets that are 32-bit to reduce memory usage, even on 64-bit CPUs.
What is the memory usage difference like?
Here's an old comparison that shows a surprisingly big difference with 10 tabs: https://www.ghacks.net/2016/01/03/32-bit-vs-64-bit-browsers-...
32-bit Firefox doesn't work anyway. I had an old 32-bit Firefox and didn't change it when I switched to a 64-bit installation (I didn't realize).
It crashed NON-STOP. And it would not remember my profile when I shut down, which made the crashes even worse, since I lost anything I was working on.
I finally figured out the problem, switched to 64 bit and it was like magic: Firefox actually worked again.
I don't know if you can draw any good conclusions from that on a linux install. Seems just as likely that after your 64 bit install switch, 32 bit libraries it was depending on were missing.
Linux distros don't tend to be as obsessive about maintaining full 32-bit support as the Windows world.
A better test would be to fire up a 32 bit VM and see if Firefox 32 bit crashed there...
If they made it as far as being able to lose something they were working on, then it's less likely to have been a missing library problem. But I don't know what it was; people do successfully run Firefox on 32-bit Linux.
Firefox does have problems with old profiles, though. I could easily see crud building up there. I don't think Firefox is very good about clearing it out (unless you do the refresh profile thing). You could maybe diagnose it with about:memory, if you were still running that configuration.
I tried about:memory, I did not find anything useful. See also my reply here: https://news.ycombinator.com/item?id=45173661
(I'm no longer running it, I switched to 64 bit and was VERY happy to no longer have crashes.)
I technically could re-install the 32 bit one, and try it, but honestly, I don't really want to!
If the libraries were missing entirely, I'm not sure 32-bit Firefox would even start. But if they were present and nothing was keeping them updated (pretty likely on an otherwise 64-bit system), they'd pretty likely become out of date -- which could certainly explain spurious crashes.
Fair point, although Firefox also launches subprocesses, and I don't know if those use the same libraries as the main process. And I also don't know if it dynamically loads supporting libs after launch.
No, this is Debian; it keeps the 32-bit libraries just as up to date as the 64-bit ones. It can handle having both at the same time.
It's Debian, it can handle 32 bit and 64 bit applications at the same time, and the package manager makes sure you have all the dependencies.
I didn't change libraries, it was a gradual switch where you convert applications to 64 bit - and I didn't think to do Firefox, but it wasn't missing 32 bit libraries.
It was simply a profile that I'd been using continuously since approximately 2004, and it had probably grown too large to fit in a 32-bit address space, or maybe Firefox itself needed more memory and couldn't map it. The system had a 64-bit kernel, so it wasn't low on RAM, but 32-bit apps are limited to 2/3/4 GB of address space.
Possible I suppose. You can restrict firefox memory usage in the config. Perhaps their dynamic allocation was getting confused by what was available on the 64 bit machine? Still. Why would any 32 bit app even try to allocate more than it actually could handle. I dunno. I'm still inclined to think missing libs (or out of date libs) - but hard to say without a bit more detail on the crash. Did anything show up in .xsession-errors / stderr ? Were you able to launch it in a clean profile? Were the crashes visible in about:crashes for the profile when launched in 64 bit? I suppose it doesn't matter too much at this point...
See my reply here: https://news.ycombinator.com/item?id=45173661
Did you attach the debugger and see what it was crashing on?
From when I used to work on performance and reliability at Mozilla, these types of user specific crashes were often caused by faulty system library or anti-virus like software doing unstable injections/hooks. Any kind of frequent crashes were also easier to reproduce and as a result fix.
Here is one of the crash reports: https://crash-stats.mozilla.org/report/index/15999dc2-9465-4...
(I happened to have an un-submitted one which I just submitted; all the other ones I submitted are older than 6 months and have been purged.)
It would crash in random spots, but usually some kind of X, GLX, EGL or similar library.
But I don't think it was GLX, etc, because it also didn't save my profile except very very rarely, which was actually a much worse problem!!
(This crash is from a long time ago, hence the old Firefox version.)
I understand this might seem unlikely given that it's working fine as 64-bit, but that crash dump makes me want to suggest running a memory tester. It has "Possible bit flips max confidence" of 25%. Ignore the exact percentage, I don't think it means much, but nonzero is atypical and concerning. (Definitely not a proof of anything!)
"It would crash in random spots" is another piece of evidence. Some legitimate problems really do show up in a variety of ways, but it's way more common for "real" problems to show up in recognizably similar ways.
And having a cluster in graphics stuff can sometimes mean it's triggered by heat or high bus traffic.
I'll admit I'm biased: I work at Mozilla, and I have looked at quite a few bugs now where there were good reasons on the reporter's side as to why it couldn't possibly be bad RAM, and yet the symptoms made no sense, and it ended up being bad RAM. Our hardware is surprisingly untrustworthy these days. But I also happen to be working on things where bad RAM problems are likely to show up, so my priors are not your priors.
Is that proprietary nvidia driver stuff in the stack trace?
Yes, without it YouTube plays at like 5fps. If it matters, that proprietary nvidia stuff is still there with the 64 bit which is rock solid.
Also, I tried disabling EGL in the 32 bit with no effect on the crashing.
OpenBSD i386 user here, Atom N270. Anyone who says it's useless... Slashem, cdda:bn, mednafen, Bitlbee, catgirl, maxima+gnuplot, ecl with Common Lisp, offpunk, mutt, aria2c, mbsync, nchat, MPV+yt-dlp+streamlink, tut, dillo, mupdf, telescope plus gemini://gemini.dev and gopher://magical.fish ... all work uber fast. And luakit does OK-ish with a single tab.
I think that validates Mozilla’s decision. “See, retro enthusiasts know enough to select alternatives. We’re not stranding them without computers.”
Same hardware but I'm using NetBSD instead since Debian dropped 32-bit support.