Factor in a proper PSU, storage, case, and cooling, and amd64 boxes are cheaper than Pis now. I'm going to replace a Pi 3 server with one of those; not having to deal with ARM is a huge plus.
For most people with self-hosting tasks, amd64 is back as the way to go.
As you say, there are a ton of "mini PCs" on the market that directly compete with the Raspberry Pi on cost and power usage. They're typically slightly larger, but the expansion options (bring your own RAM/storage) plus real I/O (real PCIe, disk, etc.) IMO significantly outweigh that. They're also typically more performant, and while aarch64 platform support is increasing dramatically, there are still occasions where a project, Docker container, etc. doesn't support it.
Taking it a step further, there are a TON of decommissioned/recycled corporate/enterprise SFF desktops on the market. They don't compete in terms of size (13" x 15" or so), but they can actually get close in power usage. Many of them have multiple SATA ports, real NVMe, multiple real half-height PCIe slots, significantly better USB and PCIe bandwidth, etc.
With my project Willow and Willow Inference Server[0], we're trying to drive this approach in the self-hosting community, with an initial emphasis on Home Assistant. Those users are generally sick of Raspberry Pi supply shortages, very limited performance, poor I/O, flaky SD cards, etc. The Raspberry Pi is still pretty popular for "my first Home Assistant", but once people get bitten by the self-hosting bug, their setup starts looking like a homelab very quickly.
For Willow in particular we emphasize GPUs because a voice assistant can't spend > 10 seconds on speech recognition and speech synthesis. There are approaches out there trying to get something working with Whisper tiny, but from our ample internal testing and community feedback we feel Whisper small is the bare minimum for voice assistant tasks, with many users going all out and running Whisper large-v2 at beam size 5. On a GPU it's still so fast it doesn't really matter.
The Raspberry Pi is especially poorly suited to this use case (as is CPU-only amd64, to a lesser degree). We have some benchmarks here[1]. TL;DR: a roughly seven-year-old Tesla P4 (single slot, slot power only, half-height, $70 used) does speech recognition 87x faster, with the multiple increasing for more complex models and longer speech segments. A 3.8 second voice command takes 586 ms on the Tesla P4 and 51 seconds on the Raspberry Pi 4. Even with the Pi 5 being twice as fast, that's still 25 seconds, which is completely unusable. It's not fair to compare a GPU to a Raspberry Pi, but consider the economics and practicality...
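This class of benchmark is easy to reproduce yourself. This isn't Willow's actual code, just a minimal sketch using the ctranslate2-backed faster-whisper library (the audio filename is a placeholder); run it once with device="cuda" and once with device="cpu" and compare:

    # pip install faster-whisper
    from faster_whisper import WhisperModel

    # "small" is what we consider the floor for voice assistant accuracy;
    # swap in "large-v2" if you have the VRAM for it.
    model = WhisperModel("small", device="cuda", compute_type="float16")

    # beam_size=5 matches the "all out" configuration mentioned above
    segments, _info = model.transcribe("command.wav", beam_size=5)
    print("".join(segment.text for segment in segments))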
You can get an SFF desktop and a Tesla P4 from eBay for $200 shipped to your door. It will idle (with GPU and models loaded) at ~30 watts. The CPU, RAM, disk (NVMe), I/O, etc. will walk all over any Raspberry Pi. Add the GPU and it's obviously not even close - you end up with a machine that can easily do 10x-100x what a Raspberry Pi can, for 2x the cost and power usage. You can even throw a 2.5GbE card in another slot for $20 and replace your router if you want to go really dense.
Even factoring in power usage (10-15 W vs. ~30 W, so 2-3x) the cost difference comes to nearly nothing, and for many users this configuration is essentially future-proof for anything they may want to do for years (my system with everything running maxes out around 50% of one core). Many people have also gradually grown their self-hosted setups over the years, ending up with three or more Raspberry Pis for different tasks (Pi-hole, Home Assistant, Plex, etc.). At that point the SFF configuration starts to pull far ahead in every way, including power usage.
Users were initially very skeptical of GPUs, likely projecting their experience of the desktop market and assuming things like "300 watt power usage and a huge > $500 card". Now they love having a GPU around for Willow and miscellaneous other CUDA tasks like encoding/decoding/transcoding with Plex/Jellyfin, accelerated Frigate, and all kinds of other applications. Willow Inference Server (depending on configuration) uses somewhere between 1-4 GB of VRAM, so an 8 GB card leaves plenty of room for additional tasks. We even have users who started with the Tesla P4, caught the LLM bug, and figured out how to get an RTX 3090 working in their setup - which of course also makes Willow absurdly fast: my local RTX 3090 goes from end of speech to command completion in HA to TTS feedback in ~250 ms. It's "speak, blink, done" fast.
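If you want to see how much VRAM headroom is left after loading models, here's a quick sketch with NVIDIA's NVML Python bindings (assumes the nvidia-ml-py package and GPU index 0):

    # pip install nvidia-ml-py
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first (often only) GPU
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"VRAM used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")
    pynvml.nvmlShutdown()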
The lack of GPIOs on mini PCs could be solved with a cheap external USB->GPIO adapter such as this one: https://www.hardkernel.com/shop/usb-io-board/
That board was intended neither for Raspberry Pis nor for mini PCs, but the code is open source and shouldn't be too hard to adapt.
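For adapters that show up as a standard Linux gpiochip, the libgpiod Python bindings make this pretty painless. A minimal sketch against the v1.x bindings; the chip name and line offset are placeholders, so check gpiodetect/gpioinfo for your hardware:

    # Debian/Ubuntu: apt install python3-libgpiod (v1.x bindings)
    import gpiod

    chip = gpiod.Chip("gpiochip0")  # placeholder; see `gpiodetect`
    line = chip.get_line(3)         # placeholder line offset
    line.request(consumer="demo", type=gpiod.LINE_REQ_DIR_OUT)
    line.set_value(1)               # drive the pin high
    line.release()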
In the Home Assistant community, most GPIO and similar duties have migrated to ESP devices and the excellent esphome[0]. I have at least 10 devices around my home and it's fantastic; I haven't wired GPIO up to a Pi or anything other than a $5 ESP8266/ESP32 in years.
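For reference, a complete esphome device is only a handful of YAML. A minimal sketch for a relay on an ESP8266 (the name, board, and pin are examples):

    esphome:
      name: relay-node

    esp8266:
      board: d1_mini        # example board

    wifi:
      ssid: !secret wifi_ssid
      password: !secret wifi_password

    api:                    # native Home Assistant integration

    ota:
      platform: esphome     # over-the-air updates after first flash

    switch:
      - platform: gpio
        pin: GPIO5          # example pin
        name: "Relay"

Flash it once over USB and Home Assistant discovers the device over mDNS; updates after that happen over WiFi.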
Another instance of using the right tool for the job.
Yes, that would likely be an even better and cheaper solution if you don't need to drive the GPIOs from a complex OS such as Linux, which would require a fatter board than an ESP.
Any leads on the ESP32-S3-BOX-3 SKU (the replacement for the ESP32-S3-BOX that Willow was developed for, IIUC)? I saw somewhere that the only place to get it is Espressif's AliExpress page, but it shows out of stock for me.
The initial release of the BOX-3 was essentially a pre-production run with 3D-printed plastics similar to the original ESP-BOX's.
The full production run of the BOX-3 from Espressif, with proper injection-molded plastics, should become available from a retailer/distributor near you within the next couple of weeks.
The issue (among others) is that we achieve the speech recognition performance we do largely thanks to ctranslate2[0]. Its maintainers have gone on record saying they essentially have no interest in ROCm[1].
Of course, with open source anything is possible, but we see this as one of several fundamental issues with supporting AMD GPGPU hardware.
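You can see this directly in the Python API - a quick sketch; as of this writing the only device strings ctranslate2 recognizes are "cpu" and "cuda", with no "rocm" to probe:

    import ctranslate2

    print(ctranslate2.get_cuda_device_count())
    print(ctranslate2.get_supported_compute_types("cpu"))
    print(ctranslate2.get_supported_compute_types("cuda"))  # no "rocm" equivalent exists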
^ This. I moved away from my Pi years ago to an old 2nd hand Mac mini I got cheap online.
Dealing with ARM was a huge pain for many things I did, especially back then when Docker's ARM support was pretty limited.
There were other times when ARM was an issue too. I just don't want to (and sometimes can't) compile things myself.
It's hard to go wrong with the OptiPlex i5-6500 boxes that seem to have flooded the market recently. Amazon has them for $100 with 16 GB RAM and a 256 GB SATA SSD. They have an NVMe slot and a PCIe x16 slot. They seem to be cheap because that CPU isn't officially supported by Windows 11.
The nonstandard, low-wattage power supplies, combined with the lack of room for full-size dual-slot cards (in the single PCIe x16 slot), limit the utility of these boxes :{
Sure, you can't fit a huge graphics card in there (power or space), but I'd argue that if running a full-size graphics card and/or multiple 3.5" drives is your goal, an SFF isn't the right case. Pretty sure you can find an equivalent full-tower machine as well, though you do have the Dell proprietary PSU.
I had mine running for a bit with long SATA cables snaked out of the case to a couple of 3.5" drives in a makeshift enclosure, but SATA/NVMe drives have gotten so cheap that it calls into question the need for all that power.
The Federal minimum wage has not changed since 2009, but the CPI captures effects like per-state minimums increasing, fewer people working minimum-wage jobs, etc. No "adjust for inflation" calculation will capture the "pain" an individual experiences from making a purchase, but this index is pretty close.
Depends what you need. You can still buy the old ones, and they're cheaper than at launch, but now you have the option to buy a more capable one for more money. If you're in the market for a v4, they're getting cheaper.
I feel like they should probably diversify their offering and make both cheaper and more expensive models. Many people here, including myself, would probably love an even more powerful and less cheap model: I consider the Raspberry Pi's key selling point to be that it's a common, standard, extensible compact computer, so I don't care much about the price, but I often find myself handicapped by its CPU and I/O performance while we all know similar SoCs can do much better, as many smartphones show. At the same time, for many users and use cases it probably should indeed stay cheaper than it is becoming. Perhaps selling premium beefy models could help sponsor the cheaper ones.
They do have the Zero, but it hasn't been updated in a while and I wish it were. There are still tons of projects that benefit from these cheap microboards, even smaller than the Pi. The Pis have become pretty powerful but also more expensive. It would be nice to have the trade-off you're suggesting, with a clearly good platform to go with it.
Pi Zero: Nov 2015 (1x ARM1176JZF-S @ 1 GHz, 512 MB RAM)
Pi Zero 1.3: May 2016 (now you can use cameras)
Pi Zero W: Feb 2017 (WiFi and Bluetooth 4.1)
Pi Zero WH: Jan 2018 (omg, pre-soldered GPIO pins? Much wow)
Pi Zero 2 W: Oct 2021 (4x ARM Cortex-A53 @ 1 GHz, still 512 MB, now Bluetooth 4.2)
I'm not at all convinced they care about this market. Realistically there have only been three models, and there hasn't been much push into this area. The Zero 2 upgrade wasn't anywhere near the leap the normal Pis have been making. I know there are more constraints at this size, but they also have more competitors, and it isn't like the Zeros are sitting on shelves. There's still a good market for <$20 computers (and especially for a $5 one).
I very much suspect they will release 1 GB or 2 GB models later, once the initial demand has died down and they can keep boards on shelves. Notice the 1/2/4/8GB indicator on the board itself (and that they didn't say there won't be such models).
given their computation power now, how many people do you think would benefit from that offering vs older pis sold at a discount?
the main point of the price was to make it more accessible for kids, so that parents can buy one without thinking too much about the cost. the 1/2 GB models may not support the desktop use cases that might be expected from the performance of the new pi.
i think the pi zero is now their main go-to device for the price-conscious audience at this rate.
The Pi Zero _2_ is still on 0.5 GB of RAM; 2 GB would be a big step up from that. Use cases like running Home Assistant or the UniFi Network controller don't work below 1 GB, and would benefit from the faster CPU and storage of the Pi 5.