This is a funny project. Too many people are taking it seriously; it's more of a 'can it be done?' exercise, because the physical interfaces suggest it should be possible, and then following through to prove that yes, it actually can be done. And tbh it works much better than I expected. I can imagine that for GPU-compute-heavy, interconnect-bandwidth-constrained applications a combination like this might actually be useful. You've essentially just added an Ethernet port to a 5090; there must be some value in that.
it's extra funny to me because the Raspberry Pi SoC is basically a little CPU riding on a big GPU (well, the earlier ones were. Maybe the latest ones shift the balance of power a bit). In fact, to this day the GPU is still the one driving the boot sequence.
So plugging a RasPi into a 5090 is "just" swapping the horse for one 10,000x bigger (someone correct my ratio of the RasPi5 GPU to the RTX5090)
I'm not very familiar with this layer of things; what does it mean for a GPU to drive a boot sequence? Is there something massively parallel that is well suited for the GPU?
The Raspberry Pi contains a Videocore processor (I wrote the original instruction set coding and assembler and simulator for this processor).
This is a general-purpose processor which includes 16-way SIMD instructions that can access data in a 64-by-64-byte register file as either rows or columns (and as 8-, 16-, or 32-bit data).
It also has superscalar instructions which access a separate set of 32-bit registers, but these are tightly integrated with the SIMD instructions (much like in ARM NEON cores or x86 AVX instructions).
This is what boots up originally.
Videocore was designed to be good at the actions needed for video codecs (e.g. motion estimation and DCTs).
I did write a 3D library that could render textured triangles using the SIMD instructions on this processor. This was enough to render simple graphics, and I wrote a demo that rendered Tomb Raider levels, but only at a small frame resolution.
The main application was video codecs, so for the original Apple Video iPod I wrote the MPEG-4 and H.264 decoding software using the Videocore processor, which could run at around QVGA resolution.
However, in later versions of the chip we wanted more video and graphics performance. I designed the hardware to accelerate video, while another team (including Eben) wrote the hardware to accelerate 3d graphics.
So in Raspberry Pis, there is both a Videocore processor (which boots up and handles some tasks), and a separate GPU (which handles 3d graphics, but not booting up).
It is possible to write code that runs on the Videocore processor - on older Pis I accelerated some software video codecs by using both the GPU and the Videocore to offload bits of transform, deblocking, and motion compensation, but on later Pis there is dedicated video decode hardware to do this instead.
Note that the ARMs on the later Pis are much faster and more capable than before, while the Videocore processor has not been developed further, so there is not really much use for the Videocore anymore. However, the separate GPU has been developed more and is quite capable.
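To make the row/column register file idea concrete, here is a purely conceptual sketch in Python/NumPy (my own illustration, not the actual Videocore instruction set or toolchain): it just shows what addressing a 64x64-byte register file as rows or columns, at 8/16/32-bit widths, looks like.

    import numpy as np

    # Model the register file as a 64x64 grid of bytes (4 KB total).
    rf = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)

    # A 16-way SIMD operation can pull its operands out of the grid either way:
    row_operand = rf[5, 0:16]   # 16 consecutive bytes along a row
    col_operand = rf[0:16, 5]   # 16 consecutive bytes down a column

    # The same storage can also be viewed as wider lanes:
    rf16 = rf.view(np.uint16)   # 64 rows of 32 x 16-bit values
    rf32 = rf.view(np.uint32)   # 64 rows of 16 x 32-bit values

    # A toy 16-lane add mixing a row operand and a column operand:
    acc = row_operand.astype(np.uint16) + col_operand.astype(np.uint16)
    print(acc)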
You have the most interesting job!
Thank you, I've used your work quite a number of times now.
> what does it mean for a GPU to drive a boot sequence
It's a quirk of the Broadcom chips that the RPi family uses; the GPU is the first bit of silicon to power up and do things. Using the GPU specifically is a bit unusual, but the general idea of "smaller thing does initial bring-up, then powers up $main_cpu" is not unusual once $main_cpu is roughly powerful enough to run Linux.
That's interesting, particularly since, as far as I can tell, nothing in userland really bothers to make use of its GPU. I would really like to understand why, since I have a whole bunch of Pis and it seems like their GPUs can't be used for much of anything (not really much for transcoding nor for AI).
> their GPUs can’t be used for much of anything (not really much for transcoding nor for AI)
It's both funny and sad to me that we're at the point where someone would (perhaps even reasonably) describe using the GPU only for the "G" in its name as not "much of anything".
One (obscure) example I know of is that the RTLSDR-Airband[1] project uses the GPU to do FFT computation on older, less powerful Pis, through the GPU_FFT library[2].
1: https://github.com/rtl-airband/RTLSDR-Airband
2: http://www.aholme.co.uk/GPU_FFT/Main.htm
The Raspberry Pi GPU has one of the better open-source GPU drivers as far as SBCs go. It's limited in performance, but it's definitely being used for rendering.
There is a Vulkan API, they can run some compute. At least the 4 and 5 can: https://github.com/jdonald/vulkan-compute-rpi . No idea if it's worth the bus latency though. I'd love to know the answer to that.
I'd also love to see the same done on the Zero 2, where the CPU is far less beefy and the trade-off might go a different way. It's an older generation of GPU though so the same code won't work.
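On the "worth the bus latency" question, the honest way to answer it is to time the whole round trip (upload, dispatch, readback) against just doing the work on the CPU. A minimal sketch of that methodology, with the GPU path left as an explicitly hypothetical placeholder to be wired to whatever Vulkan compute binding you end up using:

    import time
    import numpy as np

    def median_time(fn, *args, repeats=20):
        # Median wall-clock time of fn(*args), including transfer/dispatch overhead.
        samples = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn(*args)
            samples.append(time.perf_counter() - t0)
        return sorted(samples)[len(samples) // 2]

    def cpu_fft(x):
        return np.fft.fft(x)

    def gpu_fft_roundtrip(x):
        # Hypothetical placeholder: upload x, run a compute shader, read back.
        raise NotImplementedError("wire this to your Vulkan compute path")

    x = (np.random.rand(1 << 16) + 1j * np.random.rand(1 << 16)).astype(np.complex64)
    print("CPU:", median_time(cpu_fft, x))
    # print("GPU incl. transfers:", median_time(gpu_fft_roundtrip, x))

The point is simply that any offload timing has to include the dispatch and readback, or the comparison flatters the GPU.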
You can play Quake on 'em.
This experiment really does feel like a poetic inversion
I thought my post was already too long to include this, but to your point, you can run AI inference in this setup and the performance can be pretty good.
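For anyone curious what that looks like in practice, here's a minimal sketch, assuming the llama-cpp-python bindings built with CUDA support on the ARM side and a GGUF model already on disk (the model path below is a placeholder, not something from the post):

    from llama_cpp import Llama

    llm = Llama(
        model_path="/models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_gpu_layers=-1,  # offload every layer to the 5090; the Pi just feeds it
        n_ctx=4096,
    )

    out = llm("Explain PCIe lane bifurcation in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

Once the weights are resident in the card's VRAM, the Pi's narrow PCIe link mostly just carries prompts and tokens, which is why the slow bus hurts far less here than in gaming.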
There are definitely some use cases where it works out, others where it doesn't; I spent a bit of time testing that side of things late last year: https://www.jeffgeerling.com/blog/2025/big-gpus-dont-need-bi...
A great post that definitely inspired this one. I link to it in the first paragraph of my blog post.
I feel like maybe by the end of this year someone with access to a bunch of RTX Pro 6000s will have them running on a Pi or RK3588 lol.
we can only hope
I appreciate you making the post not about AI.
Yeah, it is obviously useful for AI inference, where PCIe speed isn't particularly important and a single-board computer gets you a small system.
Seeing real games and benchmarks makes it more than a party trick, even if the use cases are niche. The serious takes kind of miss that the point is to poke the stack and see where it breaks.
I think the conclusion here is that Raspberry Pis are now too pricey (especially when factoring in the various required accessories) and rarely make sense for typical desktop use vs. x86 mini-PCs. They make even less sense compared to various used thin clients that can generally be found on eBay.
You're paying a premium for physical compatibility with a ton of niche accessories. Whether or not they make sense depends on how important those accessories are to your use case.
That and the prices never really came back down to earth after the chip shortage hikes.
> You're paying a premium for physical compatibility
No. There are a bunch of alternatives with some to full pin compatibility, some of them many times faster [1]. No new projects should use a new Raspberry Pi.
[1] https://www.youtube.com/watch?v=2OQ5ascBuCw
Your video rates the Pi as 10 for support, 10 for ease of use, and 7 for performance. Just the support and ease of use is enough. You're paying for a mature ecosystem where you know things work and you don't have to waste time struggling.
Unless they want to keep going without needing to swap things out frequently and deal with the extremely poor support that most alternatives get.
> You're paying a premium for physical compatibility with a ton of niche accessories.
Maybe this is the new narrative, but it wasn't how the Pi was initially developed and marketed.
It's just a touch too expensive for the use cases many hobbyists have.
It would make a lousy desktop computer even if it was 10x as powerful.
- high current 5V USB power supply you probably don't have
- HDMI micro port you have like 1 cable for
- PCIe through very fragile ribbon cable + hodgepodge of adapters
- more adapters needed for SSD
- no case, but needs ample airflow
- power input is on the side and sticks out
GPIO is the killer feature, but I'll be honest, 99% of the hardware hacking I do is with microcontrollers much cheaper than a Pi that provide a serial port over USB anyway (and the commonly-confused-for-a-full-Pi Pi Pico is pretty great for this).
> PCIe through very fragile ribbon cable
We had a problem trying to bring up a couple of Pi 5s, hoping they'd represent something reproducible we could deploy on multiple sites as an isolation stage for remote firmware programming. Everything looked great, until we brought one somewhere, untethered it from Ethernet, and started getting bizarre hangs. It turned out the WiFi was close enough to the PCIe ribbon cable that bursts of WiFi broadcasts were enough to disrupt the signal to the SSD and essentially unmount it (taking root with it). Luckily we were able to find better-shielded cables, but it's not something we were expecting to have to deal with.
I dunno, I bought a Pi 500+ with an SSD, 16GB RAM, a little screen, PSU, mouse, and cables. It was around £300.
It's not super powerful, but my young kids use it to surf the net, play Minecraft, do art projects, etc. (we have yet to play with the GPIO).
I don't get on with the keyboard, but otherwise it would make a decent development machine for me, considering my development starts with me ssh'ing into some remote VM and running vim.
The whole lot is tiny and extremely portable; we pack it away in a drawer when not in use.
All in, it felt like good value for money for something that took about 3 minutes to get up and running.
You can get much more powerful PCs for much less, e.g.:
https://www.amazon.co.uk/dp/B0CFPRDQY8/
That's actually about the same price as the Pi 500+ without the screen. Except that one has a 500GB vs. 256GB SSD, but doesn't have the snazzy LED keyboard.
Processor comparison too
https://www.cpu-monkey.com/en/compare_cpu-raspberry_pi_5_b_b...
I never really understood how GPIO is a killer feature with them. There are so many ways to get GPIO, from $5 USB dongles to any microcontroller/dev board that's ever existed. What's special about Raspberry Pi GPIO that I'm missing?
The only case I can think of is very heavy compute that relies on low-latency GPIO tied to that compute.
In general, the best part of the Pi is that it's so stable as a platform. Everyone has the same hardware and generally the same OS, so guides online just work - it may be one of the best, if not the best, Linux-on-the-desktop experiences I've personally used.
Along with that, the GPIO is there and ample, so it's extremely easy to just start using it.
I'd argue an ESP8266 or ESP32 is better as a development microcontroller, but you have to muck about with cabling it up before you can even load a program on it, which is a few more steps than a Pi.
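For what it's worth, the "just start using it" part really is about this much code; a minimal sketch assuming the stock gpiozero library on Raspberry Pi OS, with an LED on BCM pin 17 and a button on BCM pin 2 (hypothetical wiring):

    from gpiozero import LED, Button
    from signal import pause

    led = LED(17)       # LED wired to BCM pin 17
    button = Button(2)  # button wired to BCM pin 2

    button.when_pressed = led.on    # press: LED on
    button.when_released = led.off  # release: LED off

    pause()  # keep the script alive, waiting for GPIO events

No toolchain, no flashing, no USB-serial adapter: you edit a file on the same machine the pins are attached to and run it.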
If it's a funky ESP board, possibly. The ESP8266 and ESP32 boards I've used all have USB sockets for programming.
The low latency is the reason why the PiStorm (Amiga CPU accelerator) project works so well on a Pi 2, 3 or 4. (Pi 5 is no longer suitable since the GPIO is now the other side of a PCI-E bus and thus suffers significantly higher latency than on previous models, despite being much faster in terms of throughput.)
For desktop use cases, sure. But the Pi's target market is makers and educators who want small and efficient and can interface easily with peripherals like cameras and GPIO. Desktop users and low-end home labbers are a distant second.
Small yes, efficient no. It's more or less on par with comparable mini PCs in power draw, needs an active cooler, and idles at over 3W. That's more than a Pi 2 uses going flat out. They keep increasing power draw with marginal efficiency gains and don't bother to do much power management. It's absolutely atrocious compared to the average smartphone SoC.
I really hope there's some kind of battery-oriented, low-wattage, high-efficiency version planned someday, because we're up to requiring a 5A power supply and it's getting absurd.
I think the Pi still makes sense when you actually want a Pi
> I think the conclusion here is that Raspberry Pis are now too pricey
This blog post shows a $2000 GPU attached to a slow SBC that costs less than 1/10th of the GPU.
It’s interesting. It’s entertaining. It’s a fun read. But it’s not a serious setup that anyone considers optimal.
You think an RTX 5090 only costs $2000??
I assume the “anti-cheat” he brings up in Doom: The Dark Ages is actually Denuvo, which likely would have some issues running, although a January 2025 post on Phoronix indicates maybe it does work, or Denuvo support is being worked on? [0]
Doom: The Dark Ages is a single-player game, so I'm not sure who you'd be cheating against, aside from maybe some real Buzz Killingtons saying you're "cheating Microsoft by pirating it".
[0] https://www.phoronix.com/news/FEX-Emulator-2501
Denuvo is DRM, not anticheat.
Right, I’m pointing out that as a purely single player game, how would one cheat in Doom: The Dark Ages?
Denuvo isn't quite DRM either. It's an anti-tamper layer; the whole goal is to prevent the binary from being modified. This then prevents the DRM of choice (i.e. Steamworks) from being bypassed.
I know that sounds a little pedantic, but typically DRM involves an identity layer (who is allowed to access what?). Denuvo doesn't care about that; it's even theoretically possible to make a Denuvo-protected binary anyone could use.
Denuvo is malware, not anticheat.
Most anti-cheat solutions these days are literal rootkits. So that distinction is pretty blurry lately.
Malware is always a matter of perspective.
> Cyberpunk barely hits 16 FPS average on the Pi 5.
This is a lot better than my memories of forcing a Pentium MMX 200 MHz PC with 32 MB SDRAM and an ATI All-in-Wonder Pro to run games from the early 2000s.
I'm pretty sure I completed Morrowind for the first time ever using both Wine and a Celeron. Likewise before that with VirtualPC (remember that?) on Mac OS (note the space!) and Age of Empires (not even Rise of Rome!).
Single-digit FPS can _absolutely_ be playable if you're a desperate enough ten-year-old...
When I played (original vanilla) WoW I remember getting 2-3 fps in 40 player raids. The cursor wasn't tied to the game's framerate though. So with the right UI layout made from addons I could still be a pretty effective healer. I don't even remember what the dungeons looked like, just a giant grid of health bars, buttons and threat-meter graphs.
This would have been on some kind of Pentium 4 with integrated graphics. Not my earliest PC, but the first one I played any games on more advanced than the Microsoft Entertainment Packs.
> When I played (original vanilla) WoW I remember getting 2-3 fps in 40 player raids.
I had to look at the ground and get the camera as close as possible to cross between the AH and the bank in IF. Otherwise I’d get about 0.1 fps and had to close the game, which meant waiting in line to get back. Those were the days.
> So with the right UI layout made from addons I could still be a pretty effective healer.
I got pretty good with the timings and could almost play without looking at the screen. But I was DD and it was vanilla so nobody cared if I sucked as long as I got far away with the bombs.
> I don't even remember what the dungeons looked like, just a giant grid of health bars, buttons and threat-meter graphs.
I was talking a couple of weeks ago with a mate who was MT at the time and told me he knew the feet and legs of all the bosses but never saw the animations or the faces before coming back with an alt a couple of years later. I was happy as a warlock, enjoying the scenery. With a refresh rate that gave me ample time to admire it before the next frame :D
Single-digit FPS can _absolutely_ be playable if you're a desperate enough ten-year-old...
Absolutely, sweet memories playing at less than 10 fps using ZSNES on a 486 DX2 by 1999...
I've only ever played Skyrim on a 2009 13" MacBook Pro in Wine. It took like 30min to load and ran at like 4fps. But I didn't play past the first area.
Wasn't AoE1 released for PPC Mac natively? AoE2 was probably the best Mac game ever.
> Single-digit FPS can _absolutely_ be playable if you're a desperate enough ten-year-old...
I have fond memories of playing Diablo II at 16 fps on an old (even at the time) PowerMac. I am not sure I could do it now.
>Single-digit FPS can _absolutely_ be playable if you're a desperate enough ten-year-old...
And somehow, more mesmerizing than games feels like playing now. To be a kid again.
Me trying to run Falcon 4.0 on a 166 MHz P1 with 16 MB of EDO RAM.
Flashbacks of gaming on an XP-era HP Pavilion with graphics so bad water didn’t even render in Halo 1 PC flood my mind.
Countless kids played Morrowind below par spec on family computers all across America.
That line triggered some deep memories of tweaking config files, dropping resolutions to something barely recognizable, and still calling it a win if the game technically ran.
There's more comparisons with various Arm hardware and Cyberpunk 2077 over here: https://github.com/geerlingguy/raspberry-pi-pcie-devices/iss...
The DGX Spark and Mac Studio are currently the two best Arm-based platforms for running that game; it seems to want a lot of CPU to feed a decent GPU.
you're lucky you didn't get stuck with an S3 ViRGE.
Haha I remember how they couldn't even do transparency in MDK properly.
Was a bit faster than software (but hey I suppose if you weren't doing any transparency that makes it easier lol).
Everything changed with the Voodoo Banshee.
Rarely has a company rested so hard on its laurels as 3dfx.
Whatever, Glide was amazing! So much so that Nvidia bought them.
I remember what a huge difference it was having a dedicated 3D card capable of fast 2D and 3D vs. the software rasterizer. Yes, NovaLogic games ran better. Yes, you could play Doom at a decent FPS. Yes, SpecOps ran at full monitor resolution. They had a LOT to brag about.
Glide is precisely what made me hate 3dfx and was glad they died.
As a developer, I'm sure Glide was great.
But as a kid that really wanted a 3dfx Voodoo card for Christmas so I could play all the sweet 3D games that only supported Glide, I was upset when my dad got me a Rendition Verite 2200. But I didn't want to seem ungrateful, so my frustration was pointed at 3dfx for releasing a proprietary API.
I was glad that Direct3D and OpenGL quickly surpassed Glide's popularity.
But yeah, then 3dfx failed to innovate. IIRC, they lagged behind in 32-bit color rendering support as well as letting themselves get caught with their pants down when NVIDIA released the GeForce and introduced hardware transform which allowed the GPU to be more than just a texturing engine. I think that was the nail in 3dfx's coffin.
lol, agreed. Today, Glide feels like a predecessor to OpenGL. At the time it was awesome, but as soon as DirectX came around, along with OpenGL, it was over. 1999 was the beginning of the NVIDIA train.
Thanks for the laugh about your disappointment with your dad. I had a similar thing happen with mine when I asked for Doom and him being a Mac guy, he came back with Bungie’s Marathon. I was upset until I played Marathon… I then realized how wise my father was.
I had no idea any of this stuff worked well enough to actually run modern games. The FEX emulation layer. The eGPU. It's not how fast this stuff runs that impresses me, it's that it runs at all.
FEX is very impressive.
Box64 was there first
And sometimes quite a bit faster: https://printserver.ink/blog/box64-vs-fex/
The eGPU in his case is just a PCIe extension cable and slot; what's there not to work? There's no translation layer, it's PCIe all the way.
If the CPU is the bottleneck, an interesting metric would be how cheap of a GPU you can pair and still add value. I suspect you would have similar benchmarks with a 5060 as a 5090 in these tests.
For example, if you pair an N150 mini PC with a cheap AMD eGPU (one of the laptop SKUs), you've made yourself the equivalent of a gaming laptop in clamshell (with better cooling) on the cheap. A price-vs-FPS curve, switching GPUs but keeping the mini PC as a constant, would be super interesting.
Overall, this feels less like "can you game on a Pi" and more like a practical stress test of today's ARM Linux gaming stack.
Way back when I was young and broke, I played through Half-Life 2 and the episodes on a ThinkPad T420, using an ExpressCard/34-to-PCIe adapter with a graphics card I borrowed and an old crappy PSU I pulled from a business Dell desktop.
Managed to complete the games with decent graphics and framerate at the time. It wasn't an ideal setup, but I didn't care. In fact, I thought it was a cool hack to play games at the time without forking out a lot of money to build a gaming PC.
There are probably better options now for gaming than attaching a dedicated GPU to whatever hardware you already have, but I can verify that external GPUs are really cool and useful (though a 5090 is definitely not needed). You also don't have to care about cooling the GPU, since it's "atmosphere" cooled (though headphones and/or ANC are a must).
I tried a similar setup when in college, albeit with an X230 and a 1050 Ti, and it worked amazingly... for a few minutes at a time, since it blue-screened often.
I never managed to figure out the issue. The BSOD was something about a GPU timeout. It worked perfectly at home but shat the bed at the dorm. I assume there was some nasty interference there.
I have an extremely weird bug where, on Windows, many games crash quickly. My laptop is a Lenovo Legion 7i Pro with an RTX 4080.
I tried a lot of things, including a full Windows reinstall, driver rollback, cleaning out the dust, etc. The crash reason is listed as an "other" NVIDIA driver error code.
On Bazzite using Proton, it works flawlessly: God of War, KCD2, and others. I guess it will be Linux gaming for me from now on.
I am still puzzled how this situation can even be. If you have ideas, be my guest.
Maybe BIOS or other firmware? No clue, but my GPU also runs better on Linux than Windows today.
The more interesting metric is running HL2 on the Pi 4, Pi 5, and RK3588:
Pi 4: 20 FPS (same when using ffmpeg to stream to Twitch), 5W
Pi 5: 40 FPS (same as above), 10W
RK3588: 300+ FPS (rock-solid 60 FPS streaming to Twitch), 15W
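A quick FPS-per-watt sketch from those gameplay numbers (taking them at face value):

    # FPS per watt, using the figures quoted above (gameplay, not streaming).
    boards = {"Pi 4": (20, 5), "Pi 5": (40, 10), "RK3588": (300, 15)}
    for name, (fps, watts) in boards.items():
        print(f"{name}: {fps / watts:.0f} FPS/W")
    # Pi 4: 4 FPS/W, Pi 5: 4 FPS/W, RK3588: 20 FPS/W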
So a 5090 is not even interesting for gameplay. More polygons and larger textures do not make games more fun to play.
AAA has peaked, and C++ does not even deliver interesting games anymore. C#/Java are way better alternatives for modding.
The Rockchip hardware is really exciting lately, in general. I am using a bunch of them for streaming.
Aren't they GPL violators?
Seems that way, but if you want hardware, you have to buy the hardware that exists.
Is that a pi with a gfx card or a gfx card with a pi attached?
Yes.
Also, it doesn't seem like it would be all that much more expensive for these high-end GPUs to start getting midrange x86-64 SoCs baked in, and these AIO GPUs could be tailor-made for standalone AI and gaming applications. If it's the equivalent of a $10 bit of gear in terms of cost, they could charge an additional $100 for the feature, with an SoC optimized for the specs of the GPU: get rid of the need for an eGPU altogether and stream from the onboard host.
Congratulations, you have invented AMD Instinct, except without the $100 part!
Sounds intriguing
This is very much a “would you like some coffee with all of your cream?” situation.
The last days truly has come! The world is upside down, and I'm seeing people inserting their computers into their GPUs.
I know this is just a joke. However, I think it's fairly obvious that the CPU would be the main bottleneck for every test. Still fun to measure things.
Before someone else points that out, you missed the opportunity to run Crysis and some schools of thought would consider any kind of gaming benchmark to be invalid due to its absence :)
Man, I miss that. Every PC build and benchmarking youtuber always went out of their way to keep that meme alive, even as it became increasingly irrelevant. Don't think I've seen anyone mention the phrase in years. :(
:C since you already took "does it run Crysis" I'll take the "does it run Skyrim" one so that Todd Howard can get more ideas.
Nice article and some interesting hints. Also, I enjoy your writing style.
Very much wish I had gotten an RTX 5090 for local LLMs, but it would have doubled the cost of my PC.
Actually isn’t the cost of a Pi basically a rounding error compared to the 5090 at this point?
Is Raspberry Pi 5 16 GB the best one can get right now for tinkering?
Best if you need the RAM and are willing to pay, but it doesn't make as much sense as a mini PC for the same purpose (cost-wise), if you're doing normal computing tasks.
I think the sweet spot for the Pi 5 is 4GB (cost vs functionality you can use it for). But if you're like me, you don't care about value quite as much as fun/exploration. And for that, the more RAM, the merrier...
Huh, so a Pi 5 is basically a Core 2 Quad according to Geekbench 6; that's fun. It was part of the recommended specs for GTA 4 back in the day, so it should run great.
One advantage of DXVK in this regard is that it fixes a lot of the native stutter present on the PC version of GTA IV: https://youtu.be/aUIhtXzdeZY?t=218
The Radxa SBC has Gen 3 x4?!... holy hell. Would have guessed x1.
Interesting
Ho
The CPU is more limiting when gaming than people think, if you have an older one.
What are we supposed to do with this information? It would have been more meaningful if the author had tried the GPU card with an old machine rather than a Raspberry Pi.
> What are we supposed to do with this information?
Nothing. It’s just fun.
> It would have been more meaningful if the author tried the GPU card with an old machine, rather than a Raspberry Pi
But then it would have been lame. Who cares? If your old machine is an x86 less than 10 years old, it's most likely faster than the Pi. But that's not the point. The point is to pair a cheap fun computer with a humongous and expensive card and see if it works. Because it's fun.
Personally, I was interested in the capabilities of the PCIe bus, as I abuse it in every other computer I can get my hands on (the RPi 4 really did not like that treatment).
To your point about 'meaningful' though, indeed giving the ole College Try to running Crysis on a Samsung NC-10 would be far more glorious! But I assure you this was very fun for me.
Idk, you can buy a new Pi for cheap and they're all the same; old machines vary and are not always available. I'm certainly not going to do it, but it's not uninteresting imo.