The review shows ARM64 software support is still painful vs x86. At $200 for the 16 GB model, this is the price point where you could just get an Intel N150 mini PC in the same form factor. And those usually come with cases. They also tend to pull 5-8 W at idle, while this is 15 W. Cool if you really want ARM64, but at this end of the performance spectrum, why not stick with the x86 stack, where everything just works with a lot less friction?
From the article: "[...] the Linux support for various parts of the boards, not being upstreamed and mainlined, is very likely to be stuck on an older version. This is usually what causes headaches down the road [...]".
The problem isn't support for the ARM architecture in general, it's the support for this particular board.
Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.
So, I agree, but less than I did a few months ago. I purchased an Orange Pi 5 Ultra and was put off by the pre-built image and custom kernel. The "patch" for the provided kernel was inscrutable as well. Now I'm running a vanilla 6.18 kernel on vanilla U-Boot firmware (a binary blob is still required to build it, though) with a vanilla install of Debian. That support includes the NPU, GPU, 2.5G Ethernet, and NVMe root/boot. I don't have performance numbers, but it's definitely fast enough for what I use it for.
My uninformed normie view of the ecosystem suggests that it's the support for almost every particular board, and that's exactly the issue. For some reason, ARM devices always have some custom OS or Android and can't run off-the-shelf Linux. Meanwhile you can just buy an x86/amd64 device and assume it will just work. I presume there is some fundamental reason why ARM devices are so bad about this? Like they're just missing standardization and every device requires some custom firmware to be loaded by the OS that's inevitably always packaged in a hacky way?
This has often been the case in the past but the situation is much improved now.
For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.
There's even UEFI [1].
Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.
[0]: https://github.com/home-assistant/operating-system/releases
[1]: https://github.com/edk2-porting/edk2-rk3588
It's the kernel drivers, not firmware. There's no BIOS or ACPI, so the kernel itself has to support each specific board. In practice that means there's a DTB file that describes the board, plus the actual drivers in the kernel.
Manufacturers hack it together, flash it to the device, and publish the sources, but don't bother with upstreaming and move on.
Same story as Android devices not getting updates two years after release.
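For the curious, a board's device tree source looks roughly like this — a sketch with made-up node addresses and a fictional "acme" vendor (though `ns16550a` is a real compatible string the kernel's 16550 UART driver matches):

```dts
/dts-v1/;

/ {
	/* Strings the kernel matches board/SoC support code against.
	   Illustrative only, not from any real board. */
	compatible = "acme,example-board", "acme,example-soc";
	model = "Acme Example Board";

	memory@0 {
		device_type = "memory";
		reg = <0x0 0x00000000 0x0 0x40000000>; /* 1 GiB at address 0 */
	};

	/* A memory-mapped UART the kernel cannot discover at runtime;
	   without this node, the device effectively doesn't exist. */
	serial@fe000000 {
		compatible = "ns16550a";
		reg = <0x0 0xfe000000 0x0 0x100>;
		clock-frequency = <24000000>;
	};
};
```

The .dts is compiled to a .dtb with `dtc` and handed to the kernel by the bootloader; if the nodes (or the drivers matching their compatible strings) aren't upstream, those peripherals simply don't work on a mainline kernel.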
But "no BIOS or ACPI" and requiring the kernel to support each individual board sounds exactly like the problem is the ARM architecture in general. Until that's sorted, it makes sense to be wary of ARM.
It's more or less like the Wi-Fi problem on laptops, but multiplied by the number of chips. In a way it's more of a Linux problem than an ARM problem.
At some point the "good" boards get enough support and the situation slowly improves.
We've reached the state where you don't need to spec-check a laptop if you want to run Linux on it; I hope the same will happen with ARM SBCs.
With this board, the SoC is the main problem. CIX has been working on mainlining that stuff for over a year, and we still don't have GPU and NPU support in mainline.
That makes sense, as the Pi is as easy as x86 at this point. I almost never have to compile from scratch.
I'm not a compiler expert... But it seems each ARM64 board needs its own custom kernel support, but once that is done, it can support anything compiled to ARM64 as a general target? Or will we still need to have separate builds for RPi, for this board, etc?
Apart from very rare cases, this will run any Linux arm64 binary.
I still have to run my own kernel build on the Opi 5+, so that unfortunately tracks. At least I don't have to write the drivers this decade.
Why? I'm running an Orange Pi 5+ with a fully generic aarch64 image of Home Assistant OS and it works great. Is there some particular feature that doesn't work on mainline?
This. The issue is the culture inside many of these HW companies that is oppositional to upstreaming changes and developing in the open in general.
Often an outright mediocre software development culture in general, in fact — one that sees software as a pure cost centre. The "product" is seen to be the chip; the software is "just" a sideshow (or worse, a channel by which their IP could leak).
The Rockchip stuff is better, but still has similar problems.
These companies need to learn that their hardware will be adopted more aggressively for products if the experience of integrating with it isn't sub-par.
With RAM it will be costing notably more, with 4 cores instead of 12. I'd expect this to run circles around an N150 for single-threaded perf too.
They are not in the same class, which is reflected in the power envelope.
BTW, what's up with people pushing the N150 and N300 in every single ARM SBC thread? Y'all Intel shareholders or something? I run both, but not to the exclusion of everything else. There's nothing I've failed to run successfully on my ARM ones, and the only thing I haven't tried is gaming.
Because most ARM SBCs are still limited to whatever Linux distro they added support for. Intel SBCs might underperform, but you can be sure they'll run anything built for x86-64.
No idea - the Ryzen-based ones are better!
Are you sure you don't have single-threaded and multi-threaded backwards?
Why would the A720 at 2.8 GHz run circles around the N150, which boosts up to 3.6 GHz, in single-threaded workloads, while the 12-core chip wouldn't beat the 4-core chip in multi-threaded workloads?
Obviously, the Intel chip wins in single-threaded performance while losing in multi-threaded: https://www.cpubenchmark.net/compare/6304vs6617/Intel-N150-v...
I can't speak to why other people bring up the N150 in ARM SBC threads any more than "AMD doesn't compete in the ~$200 SBC segment".
FWIW, as far as SBC/NUCs go, I've had a Pi 4, an RK3399 board, an RK3568 board, an N100 NUC from GMKtec, and an N150 NUC from Geekom, and the N150 has by far been my favorite of those for real-world workloads rather than tinkering. The gap between the x86 software ecosystem and the ARM software ecosystem is no joke.
P.S. Stay away from GMKTec. Even if you don't get burned, your SODIMM cards will. There are stoves, ovens, and hot plates with better heat dissipation and thermals than GMKTec NUCs.
1. Wow, never thought I'd need to do an investment disclosure for an HN comment. But sure thing: I'm sure Intel is somewhere in my 401K's index funds, but so, probably, is Qualcomm. I'm not a corporate shill — thank you very much for the good faith. Just a hobbyist looking to not get seduced by the latest trend. If I were an ARM developer that'd be different, I get that.
2. The review says single-core Geekbench performance is 1290, the same as an i5-10500, which is also similar to the N150 at 1235.
3. You can still get N150s with 16 GB RAM in a case for $200 all-in.
> The review says single-core Geekbench performance is 1290, the same as an i5-10500, which is also similar to the N150 at 1235.
Single-core, yes. The multi-core score is much higher for this SBC than for the N150.
But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files. Even if you want to use one in the living room as a console/emulator, you're better off with higher single-core performance and fewer cores than the opposite.
> But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files.
You're probably right about "most workloads", but as a single counter-example, I added several seasons of shows to my N305 Plex server last night, and it pinned all eight threads for quite a while doing its intro/credit detection.
I actually went and checked whether it would be at all practical to move my Plex server to a VM on my bigger home server, where it could get 16 Skymont threads (at 4.6 GHz, vs 8 Gracemont threads at ~3 GHz - so something like 3x the multithreaded potential of the E-cores). It doesn't really seem workable to use Intel Quick Sync on Linux guests with a Hyper-V host, though.
> in the living room as a console/emulator,
If you're talking about ancient hardware, yes, emulation is mostly driven by single-core performance. But any console more recent than the 2000s will hugely benefit from multiple cores (because of the split between CPU and GPU, and the fact that more recent consoles had multiple cores themselves).
It allows you to build for what's coming. In a couple of years, ARM hardware this powerful will be cheap and common.
Agreed, at least for a likely "home use" case, such as a TV box, router, or general purpose file server or Docker host, I don't see how this board is better than something like a Beelink mini PC. The Orange Pi does not even come with a case, power supply or cooler. Contrast that with a Beelink that has a built-in power supply (no external brick) and of course a case and cooler.
This Orange Pi 6 Plus board comes with cooling and a power supply (USB-C). No case, though.
Yes x86 will win for convenience on about every metric (at least for now), but this SoC's CPU is much faster than a mere Intel N150 (especially for multicore use cases).
I've got two RK3588 boards here doing Linux-y things around my place (Jellyfin, Forgejo builders, Immich, etc.) and ... I don't think I've run into pain? They're running various Debian releases and just ... work? I can't think of a single package that I couldn't get for ARM64.
Likewise, my VPS at Hetzner is running aarch64. No drama. The only pain is how brutal the Rust cross-compile is from my x86 machine.
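For anyone setting this up: the fiddly part of a Rust cross-compile is getting the target triple right for the destination machine. A minimal sketch of a helper that maps a `uname -m` style machine name to the standard Rust triple (the function name `triple_for` is mine, not a standard tool):

```shell
# Map a machine name (as printed by `uname -m` on the target box)
# to the matching Rust target triple for Linux/glibc.
triple_for() {
  case "$1" in
    aarch64|arm64) echo "aarch64-unknown-linux-gnu" ;;
    x86_64|amd64)  echo "x86_64-unknown-linux-gnu" ;;
    riscv64)       echo "riscv64gc-unknown-linux-gnu" ;;
    *)             echo "unknown"; return 1 ;;
  esac
}

# Typical use from an x86 machine targeting an aarch64 SBC or VPS
# (assumes rustup and a cross-gcc such as gcc-aarch64-linux-gnu):
#   rustup target add "$(triple_for aarch64)"
#   export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
#   cargo build --release --target "$(triple_for aarch64)"
triple_for aarch64
```

Tools like `cross` or `cargo-zigbuild` wrap this same setup and spare you the linker-environment dance.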
I mean, here's Geerling running a bunch of Steam games flawlessly on an aarch64 NVIDIA GB10 machine: https://www.youtube.com/watch?v=FjRKvKC4ntw
(Those things are expensive, but I just ordered one [the ASUS variant] for myself.)
Meanwhile Apple is pushing the ARM64 architecture hard, and Windows is apparently actually quite viable now?
Personally... it's totally irrational, but I have always had a grudge against x86 since it "won" in the early 90s and I had to switch from 68k. I want diversity in ISAs. RISC-V would be nice, but I'll settle for ARM for now.
I think the sweet spot for ARM SBCs is smaller, less powerful, and cheaper boards for headless IoT edge cases. I use a couple of them that way when I need LAN connectivity, either by Ethernet or Wi-Fi, and things wired to GPIO pins. I don't need a powerful CPU or lots of RAM for that. The SBC makers are caught up in a horsepower race and I just shrug; it's not for me.
When something has a 30 TOPS NPU, what are the implications? Do NPUs like this have some common backend that ggml/llama.cpp targets? Or is it proprietary and does it only work with some specific software? Does it have access to all the system RAM, and at what bandwidth?
I know the concept has been around for a while, but I have no idea if it actually means anything. I assume people are targeting the ones in common devices like Apple's, but what about here?
The specific NPU doesn't seem to be mentioned in TFA, but my guess is that the blessed way to deal with it is the Neon SDK: https://www.arm.com/technologies/neon
I've not found Neon to be fun or easy to use, and I frequently see devices ignoring the NPU and inferring on CPU because it's easier. Maybe you get lucky and someone has made a backend for something specific you want, but it's not common.
Can't speak to this specific NPU, but these kinds of accelerators are really made for more general ML tasks like machine vision. For example, while people have made the (6 TOPS) NPU in the (similar) RK3588 work with llama.cpp, it isn't super useful because of RAM constraints. I believe it has some sort of 32-bit memory addressing limit, so you can never give it more than 3 or 4 GB. So for LLMs, not all that useful.
It needs specific support; llama.cpp, for example, has backends for some of them. But that comes with limitations on how much RAM they can allocate. When they do work, you see flat CPU usage while the NPU does everything for inference.
I have some software that needs to be built for aarch64 (for an aarch64 box with a 4-core CPU); I'm currently using an Oracle Cloud 4-core/24 GB Arm Neoverse N1 instance as a GitHub self-hosted runner to build it.
This machine seems more powerful than that, so it's definitely attractive to me as a physical aarch64 self-hosted runner.
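For reference, pointing a workflow at a machine like this is just a matter of labels — a sketch, assuming the board has been registered as a self-hosted runner (`self-hosted`, `Linux`, and `ARM64` are the default labels GitHub assigns; the `make build` step is a hypothetical placeholder):

```yaml
# .github/workflows/build-arm64.yml
name: build-arm64
on: [push]
jobs:
  build:
    # Routes the job to any registered runner carrying all three labels.
    runs-on: [self-hosted, Linux, ARM64]
    steps:
      - uses: actions/checkout@v4
      - run: make build   # hypothetical build step
```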
Unfortunately, this board seems to be using the CIX CPU that has power management issues:
> 15W at idle, which is fairly high
E-*ing-waste if you have to wait for the manufacturer to provide supported images.
Upstream the drivers to the mainline kernel or go bankrupt. Nobody should buy these.
The half-baked hardware comments are humorous, because pretty much any piece of software is half-baked if we're lucky.
Why bother with these obscure boards with spotty software support when you can get a better deal all around with an x86 mini PC with a N150 CPU?
How are we still in a world where there are breathless, hand-waving blog posts written about the theoretical potential of super-fast SBCs for which the manufacturer shows fuck all interest in competent OS support?
Yet again, OrangePi cranks out half-baked products, and tech enthusiasts who quite understandably lack the deep knowledge to do more than follow others' instructions on how to compile stuff talk about them as if their specifications actually matter.
Yet again the HN discourse will likely gather around stuff like "why not just use an N1x0" and side quests about how the Raspberry Pi Foundation has abandoned its principles / is just a cynical Broadcom psyop / is "lagging behind" in hardware.
This stuff can be done better and the geek world should be done excusing OrangePi producing hardware abandonware time after time. Stop buying this crap and maybe they will finally start focussing on doing more than shipping support for one or two old kernels and last year's OS while kicking vague commitments about future support just far enough down the road that they can release another board first.
Please stop falling for it :-/
ETA: I think what grinds my gears the most is that OrangePi, BananaPi etc., are largely free-riding off the Linux community while producing products that only "beat" the market-defining manufacturers (Raspberry Pi, BeagleBoard) because they treat software support as an uncosted externality.
This kind of "build it and they will use it" logic works well for microcontrollers, where a manufacturer can reasonably expect to produce a chip with a couple of tech demos, a spec sheet and a limited C SDK and people will find uses for it.
But for "near-desktop class" SBCs it is not much better than misrepresentation. Consequently these things are e-waste in a way that even the global desk drawer population of the Raspberry Pi does not reach.
And yet they are graded on a curve and never live up to their potential.
Yet another board which will never have proper upstream support because the SoC vendor refused to implement the ARM BSA standard which would provide EFI/ACPI support instead of relying on undiscoverable devices only exposed through device tree. ACPI isn't perfect but it's way better than device trees which are seldom updated so the device will remain stuck with old kernels.
Devicetree continues to be a massive crutch for ARM SoC vendors.