
Spiking neural networks are not software; they are usually built directly into silicon, because they use pulse timing to encode information instead of multi-bit values. The problem is that training them is difficult because they operate over time, not that they don't work. As of now, scaling training infrastructure is more important than theoretical power efficiency.



I'm pretty surprised by all the focus on spiking neural networks too; they are still really much more of an academic topic, and I have never heard of any real practical applications (yet).

There are a number of methods for training these kinds of networks, like Spike-Timing-Dependent Plasticity (STDP), which is essentially a reinforcement learning rule: it increases the weights between neurons that often spike together and modulates those increases with a reward signal. However, all of these methods are really focused on replicating and modeling the biological phenomena, not on being performant.
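To make that concrete, here is a minimal sketch of a reward-modulated pairwise STDP update. All names and constants are illustrative, not taken from any particular paper:

    import numpy as np

    def stdp_update(w, t_pre, t_post, reward,
                    a_plus=0.01, a_minus=0.012, tau=20.0):
        """Reward-modulated pairwise STDP (illustrative constants)."""
        dt = t_post - t_pre
        if dt > 0:    # pre fired before post -> potentiate
            dw = a_plus * np.exp(-dt / tau)
        else:         # post fired before (or with) pre -> depress
            dw = -a_minus * np.exp(dt / tau)
        # The global reward signal scales (and can flip) the update.
        return np.clip(w + reward * dw, 0.0, 1.0)

The exponential window means only near-coincident spike pairs move the weight much, which is the "fire together, wire together" part; the reward factor is what turns plain STDP into a crude reinforcement learner.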

In theory, spiking networks should be much more efficient: real neurons are able to take advantage of many non-linear effects to perform incredibly complex analog-to-digital (spike) computation in every cell using incredibly little energy; our brains use only about 20 watts of power.

But the current approach to CPUs, and even GPUs, means simulating all of this is ridiculously inefficient, and it ultimately looks nothing like the real thing.


Yeah, spiking neural networks are a tough nut to crack. Check out Nengo if you're interested in learning more.

The brain doesn't use a synchronous digital architecture; it is asynchronous. Spiking neural networks implemented in neuromorphic hardware are comparably efficient: they consume milliwatts for a million neurons.

The main problem with SNNs is not that they don't work well, and it's not that they are slow to run on CPUs/GPUs. Those would indeed be temporary problems. The fundamental problem is that we don't know the right abstraction level at which to imitate brain operations. It's far from clear that simulating spikes is necessary to implement the brain's "learning algorithms".

There are two main arguments for using spikes: "biological realism" and "energy efficiency". Neither is convincing. If you want to simulate a conventional CPU, you don't want to simulate I-V curves for transistors, and you don't want to simulate CMOS gates, because the lowest abstraction level necessary to understand the computer's operation is boolean logic. Anything below that is completely irrelevant. I strongly suspect that simulating spikes is below the lowest relevant abstraction level. By the way, Jeff Hawkins of Numenta agrees with me, and he's pretty strict about the "biological plausibility" of his neocortex algorithms.

As for energy efficiency: sure, spikes might be the most efficient way to compute and encode information given the constraints of biological brains. But why should we care about those constraints? We are building computing machinery in silicon, using vastly different technology, circuits, and tricks from what's available in "wetware". It does not make any sense to copy biological evolution's tricks when building things in silicon. Neuromorphic hardware papers always mention energy efficiency and then proceed to compare their analog spiking chips to digital (!) chips. That's ridiculous. I can't think of a single good argument for why spiking analog circuits would be more energy efficient than non-spiking analog circuits, for any existing computing hardware technology (or one that's likely to be developed in the foreseeable future).

Deep learning took off in 2012 not because faster hardware allowed us to develop good algorithms. The algorithms (gradient descent optimization, backpropagation, convolutional and recurrent layers) had been developed, on slow hardware, long before: LeCun demonstrated state of the art on MNIST back in 1998. The fast hardware allowed us to scale up the existing good algorithms. I don't see any good algorithms developed for SNNs. Perhaps such algorithms can be developed, and perhaps faster hardware is indeed necessary for that, but as I argue above, the motivation to pursue this research is just not obvious to me.

Note that we shouldn't confuse this SNN research (such as the papers you cited) with efforts like the Human Brain Project, where they're actually trying to derive higher-level brain algorithms from accurate simulations of low-level mechanics. Emphasis on accurate, because as any neuroscientist will tell you (e.g. [1]), these SNNs have very little to do with what's actually going on in a brain at any abstraction level.

[1] https://spectrum.ieee.org/tech-talk/semiconductors/devices/b...


I am honestly shocked that nobody mentioned spiking neural networks, which are capable of continuous learning. They are much harder to train than conventional neural networks, which can be differentiated automatically; after that you just throw more hardware at the problem. The upside of spiking neural network accelerators is that they use impossibly low amounts of energy.

From what little I've gathered about the subject, I think so. Spiking neural networks would be orders of magnitude more energy efficient than deep neural networks for image and audio processing. The main hurdle lies in designing the network structure and weights: the cost function of a classical neural network can be differentiated using basic calculus, but optimizing a spiking neural network is not as easy.
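The root of the difficulty shows up even in a toy leaky integrate-and-fire neuron: the output is produced by a hard threshold whose derivative is zero almost everywhere. A rough sketch, with made-up constants:

    import numpy as np

    def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0,
                     tau=10.0, dt=1.0):
        """Leaky integrate-and-fire neuron; returns a binary spike train."""
        v, spikes = 0.0, []
        for i_t in input_current:
            v += dt * (-v / tau + i_t)   # leaky integration of the input
            if v >= v_thresh:            # hard threshold: the gradient of
                spikes.append(1)         # this step is zero everywhere and
                v = v_reset              # undefined at the threshold itself
            else:
                spikes.append(0)
        return np.array(spikes)

    print(lif_simulate(np.full(100, 0.15)).sum(), "spikes in 100 steps")

Gradient descent has nothing to grab onto in that step function, which is why the classical backprop recipe doesn't carry over directly.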

This hardware is designed for running spiking neural network models? What is the current state of the art in this field? I was under the impression that training a spiking neural network was somewhat of an unsolved problem, because backprop doesn't easily apply. Anyone have information on this?

Here's Geoff Hinton discussing spiking neural nets: https://youtu.be/2EDP4v-9TUA?si=E4D5YNGQGdYSiTIy — from an interview with Pieter Abbeel in June 2022.

From the paper:

> Spiking Neural Networks (SNNs) have been an attractive option for deployment on devices with limited computing resources and lower power consumption because of the event-driven computing characteristic.

But training them requires new techniques compared to continuous neural networks — the spike nonlinearity isn't differentiable, and therefore you can't back-propagate directly (as I understand it; please correct me if I'm off here).
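You're right that the spike function has no usable derivative, but there is a common workaround: the "surrogate gradient" trick, where you keep the hard threshold in the forward pass and substitute a smooth pseudo-derivative in the backward pass. A minimal sketch using PyTorch's custom-autograd mechanism; the fast-sigmoid shape and its constant are just one popular choice among several:

    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Hard threshold forward, smooth pseudo-derivative backward."""

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()        # spike if membrane potential > 0

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            # The step's true derivative is zero almost everywhere,
            # so substitute a smooth bump centered on the threshold.
            return grad_output / (1.0 + 10.0 * v.abs()) ** 2

You would call SurrogateSpike.apply(v - v_thresh) inside the simulation loop and then backpropagate through time as usual; libraries like snnTorch and Norse are built around exactly this idea.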


I can tell you about the software side, where I have some experience. Spiking neural networks are strictly more powerful than conventional neural networks, which in turn are strictly more powerful than hand-coded rules engines. So the idea is that neuromorphic systems will some day supplant conventional neural networks in the same way those supplanted rules engines (e.g., for machine translation). However, as it stands, the theoretical benefits of neuromorphic hardware have not yet been demonstrated, perhaps because the hardware and software need to mature, much like neural nets were thought of as toys for decades before they became practical. More brainlike doesn't necessarily translate to higher performance.

I'd say the big catch is the huge advantage conventional neural nets have in hardware and software support.


“In neuromorphic computing, however, a "spike input" — a set of discrete electrical signals — is fed into the spiking neural networks (SNNs), represented by the processors. Where software-based neural networks are a collection of machine learning algorithms arranged to mimic the human brain, SNNs are a physical embodiment of how that information is transmitted. It allows for parallel processing and spike outputs are measured following calculations.

Like the brain, Hala Point and the Loihi 2 processors use these SNNs, where different nodes are connected and information is processed at different layers, similar to neurons in the brain.”

My impression is that there hasn't been much success with SNNs yet.


Spiking nets are an academic curiosity. They do not work currently, and might never work. Pretty much zero progress since Carver Mead proposed them in the late 80s. Intel Loihi is a joke, it's a useless product pushed by some exec as a pet project. Considering the mess Intel has been lately, I'm not surprised something like that got approved.

Algorithms to successfully perform any practical tasks with spiking nets do not exist yet. We have very little clue how a brain uses spikes. As a general rule, you don't want to build hardware to accelerate something that does not work in software.

Source: I worked in a lab where we built memristor-based chips to run spiking nets for image classification and other tasks (my role was developing the training algorithms).


I don't really understand why they chose to simulate spiking neurons. The deep learning community uses simpler models based on ReLUs and such, so it seems that BrainChip will be missing out on that market, which is currently huge. Do we have training algorithms for spiking neurons? Is this aimed mostly at biological simulations?

Yann LeCun is right.

Spikes in silicon are perhaps the most power-inefficient way to represent numbers. You dissipate power every time you change a wire from high voltage to low, or back again. Think about representing numbers at 8-bit precision. With a rate-coded spiking neural network, you need to charge and discharge the wire 128 times in expectation, assuming the values are uniformly distributed over 0-255. With a standard 8-bit binary representation, you expect to charge and discharge a wire only 4 times.
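Here's that back-of-the-envelope comparison as a quick sanity check, under the same assumptions (uniform values, rate coding with one charge/discharge per spike):

    import numpy as np

    values = np.arange(256)

    # Rate code: one spike per unit of value.
    rate_toggles = values.mean()               # ~127.5 charge/discharge pairs

    # Binary code: a bit-line toggles when its bit is set, so the
    # expected number of set bits per value is what matters.
    bits = (values[:, None] >> np.arange(8)) & 1
    binary_toggles = bits.sum(axis=1).mean()   # 4.0 set bits on average

    print(rate_toggles, binary_toggles)        # 127.5 vs 4.0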

Every time people make analogies to biological systems, they seem to overlook the truth: we don’t know how to engineer biological systems. It’s true brains are amazing, but they are not built from silicon. Trying to emulate a crude model of a brain in silicon is like making an airplane flap its wings.


There are neuromorphic deep learning algorithms. From what I've read, one promise of these spiking neural networks is higher efficiency than typical neural nets, which would enable learning from far fewer data samples.

If anybody here works with SNNs, can you share if you think this claim is true? Also, are there any good entry points for people interested in learning more about SNNs?


Well, that's kind of a non-starter attitude, isn't it? We don't know how neurons work, so we shouldn't try to figure it out by emulating their behaviour with electronics?

Analog neuromorphic approaches do not attempt to simulate neurons; they emulate them in silicon. This is partly out of a belief that research in this area is required to produce ultra-low-power computational devices, and partly to explore the real-time dynamics of spiking neural networks.

There are very few research groups working on this, but you can look up Kwabena Boahen's group at Stanford. They do large-scale real-time emulation and are currently building a subthreshold neuron accelerator for the Neural Engineering Framework from Chris Eliasmith at Waterloo, which was famously used to create Spaun. There is also Karlheinz Meier's group at Heidelberg University, which builds wafer-scale networks of neurons running in accelerated time. Giacomo Indiveri at ETH Zurich has silicon neurons with on-chip learning circuits that the others are missing.


Neuromorphic hardware has been usable for 10 years now, and over that time, algorithms for neuromorphic hardware (i.e. spiking networks) have always performed 'almost as good' as ANN solutions on GPUs (meaning: inferior). Meanwhile, each year a new generation of GPUs comes out, built on modern processes, with excellent toolchains. In a direct comparison of power efficiency, GPUs win over NMHW most of the time.

I would love to see spiking networks and NMHW take over machine learning, but they have such a long way to go. And I seriously doubt the strategy, followed by most players, of trying to beat good old ANNs at their own game.

Unless we identify a problem set where event-based computing with spikes is the inherently natural solution, I find it hard to imagine that spiking networks will ever outcompete ANN solutions.


A million times faster and a thousand times more energy efficient means a thousand times the power usage (power = energy/time: one thousandth of the energy spent in one millionth of the time). I find that hard to believe, so those “other typical spiking neural nets” must be quite different from “typical training systems”. As others said, this is marketing speak.
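For anyone checking that arithmetic, the claimed factors work out like this:

    speedup = 1e6          # "a million times faster": 1/1e6 the time per task
    energy_factor = 1e3    # "a thousand times more efficient": 1/1e3 the energy

    # power = energy / time
    relative_power = (1 / energy_factor) / (1 / speedup)
    print(relative_power)  # 1000.0 -- a thousand times the power draw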

Deep learning is fundamentally linear algebra. Spiking networks are fundamentally event-based processors. The two concepts don’t play well together.

Many researchers have been trying hard to shoehorn deep ANNs into spiking networks for the last 10 years. But this doesn't change the fact that linear algebra is best accelerated by linear algebra accelerators (i.e. GPUs/TPUs).

Generally, spiking networks will likely have an edge when the signals they are processing are events in time, for example when processing signal streams from event-based sensors like silicon retinas. There's also evidence that event-based control has advantages over its periodically sampled equivalents.
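As a toy illustration of that point (all names and constants are made up): an event-driven update only touches state when an event arrives, and the decay between events can be applied analytically, so silent stretches of input cost nothing:

    import math

    def process_events(events, tau=20.0, threshold=3.0):
        """events: list of (timestamp, weight) pairs, e.g. from an
        event camera. No work is done between events."""
        v, t_last, out = 0.0, 0.0, []
        for t, w in events:
            v *= math.exp(-(t - t_last) / tau)  # analytic decay since last event
            v += w
            t_last = t
            if v >= threshold:
                out.append(t)                   # emit an output event
                v = 0.0
        return out

    print(process_events([(0, 1.0), (2, 1.5), (3, 1.0), (500, 1.0)]))

A frame-based system would have burned cycles on every tick between t=3 and t=500; the event-driven version does four updates total.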


STDP is one of the early steps towards understanding the learning mechanism(s) in the brain, but we are a long way from understanding them well enough to actually reproduce them.

Not only do we not know how to train spiking networks, we don't even know how the information is encoded: pulse frequency, pulse timing, or something else? No one knows. How can you compete with anything if you have no idea how it works?
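For concreteness, the two most common coding hypotheses look quite different even in toy form (purely illustrative, not a claim about what brains actually do):

    import numpy as np

    rng = np.random.default_rng(0)

    def rate_code(x, window=100):
        """Intensity x in [0, 1] as a spike count over a time window."""
        return rng.random(window) < x       # Bernoulli spikes, rate ~ x

    def latency_code(x, t_max=100.0):
        """Intensity x in [0, 1] as time-to-first-spike:
        stronger input means an earlier spike."""
        return t_max * (1.0 - x)

    x = 0.8
    print(rate_code(x).sum(), "spikes in 100 steps")  # ~80
    print(latency_code(x), "= first-spike time")      # 20.0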

Also, this has nothing to do with computing hardware. You can easily simulate anything you want on conventional processors. Huge computing clusters have been built for spiking model simulations, and nothing interesting came out of it. Invent the algorithm first, and then we will build the hardware for it.

