> The 4.5-mm-square chip, developed using Korean tech giant Samsung Electronics Co.'s 28 nanometer process, has 625 times less power consumption compared with global AI chip giant Nvidia's A-100 GPU, which requires 250 watts of power to process LLMs, the ministry explained.
> processes GPT-2 with an ultra-low power consumption of 400 milliwatts and a high speed of 0.4 seconds
Not sure what the point of comparing the two is; an A100 will get you a lot more speed than 2.5 tokens/sec (which is what 0.4 seconds per token works out to). GPT-2 is just a 1.5B-param model; a Pi 4 would get you more tokens per second with CPU-only inference.
Still, I'm sure there are improvements to be made, and the direction is fantastic to see, especially after Coral TPUs have proven completely useless for LLM and Whisper acceleration. Hopefully it ends up as something vaguely affordable.
The part where they have like 3 bytes of memory, so you switch from the extremely high latency of RAM to the laughably sluggish latency of USB serial. I think there's also no support below 8-bit quants, which you'd really need.
Well, our brains are closer to spiking neural networks than 'regular' neural networks. And they work pretty well. For the most part.
I feel like SNNs are like Brazil - they are the future, and shall remain so. I think more basic research is needed for them to mature. AFAIK the current SOTA is to train them with 'surrogate gradients', which shoehorn them into the current NN training paradigm, and that sort of discards some of their worth. Have biologically inspired learning rules, like STDP, _really_ been exhausted?
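For anyone who hasn't seen it, STDP is a purely local rule: a synapse gets stronger when the presynaptic spike arrives just before the postsynaptic one, and weaker when the order is reversed. A toy sketch of the pair-based version (constants are made up for illustration, not taken from any paper):

```python
import numpy as np

# Toy pair-based STDP: potentiate when pre fires before post, depress otherwise.
# Constants are illustrative, not tuned to any biological or published values.
A_PLUS, A_MINUS = 0.01, 0.012     # learning rates for potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # trace time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> strengthen
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before (or simultaneous with) pre -> weaken
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: pre spike at 10 ms, post spike at 15 ms -> small positive dw
print(stdp_dw(10.0, 15.0))   # ~ +0.0078
print(stdp_dw(15.0, 10.0))   # ~ -0.0093
```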
If OpenAI or DeepMind made such a claim, I'd pay attention. Otherwise it's always some (usually hardware) guys trying to get a grant, or even just publish a paper.
P.S. People interested in biologically inspired data-processing algorithms should look at Numenta's papers (the earlier ones, because they recently switched to regular deep learning), and especially their justification for not using spikes.
Spiking neural networks are not software; they are usually built directly into silicon because they use pulse timing to encode information instead of multi-bit values. The problem is that training them is difficult because they operate over time, not that they don't work. As of now, scaling training infrastructure is more important than theoretical power efficiency.
Neuromorphic computing basically uses individual "neurons", represented with either analog or digital circuits, which communicate using asynchronous pulses called "spikes". Unlike the human brain, neuromorphic chips are 2D, but we can replicate a good amount of neural dynamics in silicon.
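To make "spike" concrete: the usual abstraction in these chips is a leaky integrate-and-fire neuron, whose membrane potential leaks toward rest, integrates incoming current, and emits a spike when it crosses a threshold. A toy software version, with purely illustrative constants:

```python
# Toy leaky integrate-and-fire neuron; all constants are illustrative only.
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0
LEAK = 0.95        # fraction of the membrane potential kept each timestep

def lif_step(v, input_current):
    """One discrete timestep: leak, integrate, and fire if over threshold."""
    v = V_REST + LEAK * (v - V_REST) + input_current
    spiked = v >= V_THRESH
    if spiked:
        v = V_RESET   # reset after emitting a spike
    return v, spiked

# Drive the neuron with a constant input and watch it spike periodically.
v = V_REST
for t in range(20):
    v, spiked = lif_step(v, 0.2)
    if spiked:
        print(f"spike at t={t}")
```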
It's unclear how they managed to use this to run LLMs, though. Getting GPT-2 running with SNNs is a legitimate achievement, because SNNs have traditionally lagged significantly behind conventional deep learning architectures.
They also released their API a week or two ago. It's significantly faster than anything from OpenAI right now; Mixtral 8x7B runs at around 500 tokens per second. https://groq.com/
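Their API is OpenAI-compatible as far as I can tell, so the standard client should work against it; a minimal sketch (the base URL and model name below are assumptions on my part, check their docs for the current values):

```python
# Minimal sketch using the OpenAI-compatible client; the base URL and model
# name are assumptions here -- check Groq's docs for the current values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GROQ_API_KEY",
    base_url="https://api.groq.com/openai/v1",
)

resp = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```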
It's not so much an accelerator as it is addressing the main inference bottleneck (i.e. memory latency) with sheer brute force by throwing money at the problem. They've made accelerators out of pure L3 cache with a whopping 230 MB per card. They cited something like 500 cards to load one single Mixtral instance, which probably cost over $10M to build. It's a supercomputer essentially.
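Back-of-the-envelope on the card count, just to sanity-check that ~500 figure (parameter count and precision are rough assumptions on my part):

```python
# Rough check on the card count needed just to hold Mixtral's weights.
params = 46.7e9          # Mixtral 8x7B total parameters (approximate)
bytes_per_param = 2      # assuming fp16/bf16 weights
sram_per_card = 230e6    # ~230 MB of on-chip SRAM per Groq card

weight_bytes = params * bytes_per_param          # ~93 GB of weights
cards_for_weights = weight_bytes / sram_per_card
print(round(cards_for_weights))  # ~406 cards before activations/KV cache,
                                 # same ballpark as the ~500 they cited
```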
Or to put it another way: they’ve made a compute substrate with the correct ratios of processing power to memory capacity.
NVIDIA GPUs were optimised for different workloads, such as 3D rendering, that have different optimal ratios.
This “supercomputer” isn’t brute force or wasteful, because it allows more requests per second: by processing each response faster, it can pipeline more of them through per unit time and per unit of silicon area.
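The intuition, roughly: in batch-1 decoding every generated token has to stream more or less all of the (active) weights through the processor, so single-stream speed is capped at memory bandwidth divided by model size. A crude sketch with illustrative numbers:

```python
# Crude batch-1 decode bound for a dense model: each new token has to read
# roughly all of the weights once, so single-stream speed is capped at
# memory_bandwidth / model_size.  (MoE models like Mixtral only read the
# active experts, which relaxes this somewhat.)  Figures are illustrative.
model_bytes = 93e9        # ~93 GB of fp16 weights (Mixtral-sized)
hbm_bw = 2.0e12           # ~2 TB/s of HBM, roughly one A100-class GPU

print(hbm_bw / model_bytes)   # ~21 tokens/s ceiling per stream on one GPU
```

That ceiling is what an SRAM-heavy design is buying its way past, at the cost of needing hundreds of cards to hold the weights at all.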
IMO we still need an MLPerf submission or similar to really understand whether this is more efficient overall, or only more efficient when you also want to minimize latency.
Nvidia has pulled enough rabbits out of the hat when it comes to MLPerf that I’m still not convinced they can’t work some CUDA magic and undercut them on efficiency.
I'm pretty sure $20,000 per LPU isn't actually the cost of these LPUs. I saw someone else on HN asking whether $20,000 could get them something, and an employee said to reach out, which makes me think $20,000 is enough to get some sort of model running at least, even if it's not necessarily an LLM.
Another consideration: even if it's slightly more expensive, that can be OK if you care about inference speed. I'd pay 50% more for GPT-4 if it could deliver results that quickly.
Based on some rough, conservative ballpark estimates (one server with 2x A100 at $50,000; ~50 tokens/s on one of those servers; so 10 of those servers to match the ~500 tokens/s above), the upfront cost with consumer hardware seems to be 1/10 to 1/20 of what the Groq hardware costs. I would guess that realistically cloud providers can probably achieve half to a third of that price.
So unless you need Groq's low latency, consumer hardware seems to be a lot cheaper for the same throughput.
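Spelling out that ballpark (all figures are the guesses above, not actual quotes):

```python
# Rough cost comparison using the guessed figures from this thread; none of
# these numbers are quotes, just the ballpark estimates above.
groq_cards, groq_card_cost = 500, 20_000    # ~500 cards at ~$20k each for one Mixtral deployment
a100_servers, server_cost = 10, 50_000      # 10 x (2x A100) servers at ~500 tok/s total

groq_total = groq_cards * groq_card_cost    # $10,000,000
gpu_total = a100_servers * server_cost      # $500,000
print(gpu_total / groq_total)               # 0.05 -> ~1/20 of the upfront cost
```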
Neuromorphic computing is cool, but not new tech. However, using a neuromorphic spiking architecture to run LLMs seems new. Unfortunately, there doesn't seem to be a paper associated with this work, so there's no deeper information on what exactly they're doing.
The article says they ran GPT-2! Which isn't particularly large, but replicating a large language model with a spiking neural network seems like novel work at least.
Quick shoutout to https://youtube.com/@TechTechPotato for those interested in keeping tabs on the AI hardware space. There is much more going on in this area than you would think if you only follow general media.