I took a computational neuroscience course many years ago and discovered just how primitive our understanding of neural circuitry is. Case in point: at the time (I don't know how it is now) it was not possible to precisely model more than a handful of neurons in the time domain. This is done with differential equations, but the systems get extremely complicated very quickly, and you can't realistically expect to solve them and get something predictive out. One cell is quite easy. Two is already getting harder, and three and beyond are crazy because everything is affected by everything. From that I concluded that we simply don't have the right math to express this. If there have been any developments on the math side since then, I'd be interested in hearing about it.
This is likely why ANNs effectively model neurons in the frequency domain (where the magnitude of the signal is proportional to the strength of the spike train). But they don't do the same thing, and the mechanism real neurons use to learn patterns is completely different: it relies (in part) on the timing of signal arrival, and the processes are continuous; there aren't any "forward" or "backward" steps. Nor are there any gradients. And there are other timing effects that have to do with chemistry, on top of all that.
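Purely as a toy illustration of the kind of time-domain modeling described above: here are two FitzHugh-Nagumo neurons (a classic simplified neuron model, chosen here for brevity) coupled to each other and integrated with forward Euler. All parameter values are standard textbook choices, not fitted to any real cell, and even this two-cell case already shows how every variable feeds into every other.

```python
import numpy as np

# Two FitzHugh-Nagumo neurons with mutual diffusive coupling,
# integrated by forward Euler. Illustrative parameters only.
def simulate(T=2000, dt=0.1, g=0.5, I_ext=0.5):
    a, b, eps = 0.7, 0.8, 0.08
    v = np.array([0.0, -1.0])   # membrane-like variables, two cells
    w = np.zeros(2)             # slow recovery variables
    trace = np.empty((T, 2))
    for t in range(T):
        coupling = g * (v[::-1] - v)          # each cell feels the other
        dv = v - v**3 / 3 - w + I_ext + coupling
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[t] = v
    return trace

trace = simulate()
print(trace.shape)                  # (2000, 2)
print(float(np.abs(trace).max()))   # trajectories stay bounded
```

Even here there is no closed-form solution; all you can do is step the system forward numerically, and adding a third cell triples the coupling terms.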
I have always wondered about this question, i.e. is it always necessary to model our systems "exactly" on the natural physiology? Case in point: people, inspired by the flight of birds, first tried to build flying machines with flapping wings. Soon, though, it became apparent that flight is better achieved via rotary motion than flapping. A similar argument applies to wheels and automobiles, which roll instead of emulating a walk.
What I'm saying is, maybe neurons and human intelligence are indeed orders of magnitude more complex than our current research assumes. But maybe our simple designs will prove more efficient and equally effective in the long run? To be clear, this is of course not to imply that we have nothing to gain from a deeper understanding of human physiology. It's just a curious question of mine.
> is it always necessary to model our systems "exactly" on the natural physiology
To replicate their effects? Probably not. "Probably" because it could very well be that evolution came up with something that anything more straightforward simply can't replicate, or maybe can replicate but not economically.
But to understand what already exists, I think yes, you do need to be able to model it. Otherwise the speed with which you acquire your understanding becomes glacial, because all of your means of acquiring new knowledge are slow, small in scale, and destructive.
What I like about the flying analogy is that we are not optimising for the same scenarios. Birds fly 10,000-20,000 miles while being very energy efficient, without needing replacement parts or visits to external shops for service. Our simpler machines optimize for specific scenarios: particular routes, a given amount of fuel, a target total cost. I guess the neural-network equivalent would be AI classifying pictures better than humans, which is very specific and, I guess, achievable with our primitive models.
This is an excellent response! I’d also add that the things we can optimize for are very limited—for both flight and AI—by the current state of materials science.
Bird wing mechanics are incredibly complex, and we’re still learning how far that extends. This video[0] shows how the feathers are spread apart and turned to decrease drag on the upstroke, something that we probably couldn’t see before slow-motion video was invented. Even though we know now, we cannot (to my knowledge) create artificial feathers with the same properties as real ones, nor can we build the biomimicry required to make use of them. (Plus, nobody wants to be in a bouncing bird-wing body even for a short flight.)
There are a whole lot of problems where, if you (or the system) don't have time, resources, information, etc., reaching an optimum isn't possible. With such problems, striving for efficiency turns out to be a negative. See the explore-exploit trade-off, for example.
So finding simple, efficient solutions is never enough. We have to study other mechanisms, especially the brain.
When you don't have an explanation, people make up stories. And the made-up stories affect everything.
Here is Richard Garriott, game designer, explaining how quickly it happens:
"if we put a fishing pole there, it had to do something, so we made fish for players to catch. This wasn’t meant to be a key feature of the game, and we didn’t spend a lot of time on it, stocking the water with generic-looking fish. We gave fishermen a simple fifty-fifty chance of catching a fish every time they put their line in the water. It didn’t matter if they were fishing in a river or in the ocean, there was a fifty-fifty chance. It didn’t matter if it was morning, noon, or night, still fifty-fifty. But to our surprise, when we launched the game, fishing immediately became very popular. Fifty-fifty doesn’t mean the result will alternate each time, it means that over a span of time the result will be even. As a consequence players began to speculate that there might be some other rule behind fishing, believing that they had better results when fishing two or three yards farther offshore than when they cast close to the riverbank. Fishing at night, some people were convinced, was more productive than fishing in the afternoon. People created their own mythology; they believed there were special fishing holes, and wouldn’t tell anyone else where they were. None of that was true, but players believed it"
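The statistical point in that quote is easy to check with a quick simulation (the 10,000-cast count and the seed here are arbitrary choices): a fair fifty-fifty coin routinely produces long streaks, which is exactly what invites players to invent explanatory stories.

```python
import random

# Simulate the fishing mechanic: every cast is an independent 50/50.
random.seed(1)
casts = [random.random() < 0.5 for _ in range(10_000)]

# Find the longest streak of consecutive misses.
longest = run = 0
for caught in casts:
    run = 0 if caught else run + 1
    longest = max(longest, run)

print(sum(casts))   # close to 5,000 catches overall
print(longest)      # yet streaks of 8+ misses in a row are expected
```

A player who hits an eight-miss streak at the riverbank, moves three yards, and then catches two in a row has all the "evidence" a mythology needs.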
The same thing happens with people and the stories they tell about their brains and the thoughts that come out of them. See Xi Jinping, for example. What his brain is doing affects billions of people.
But tomorrow someone might stumble upon the rules underlying it all, and that will have huge effects on the existing stories we tell ourselves.
We do have the right math. It's just that no one is using it. The planet is focused on differential and metric invariants (differential algebraic equations and statistics) when it should be leveraging topological approaches to analyzing dynamical systems and networks. Almost no one is doing the latter, yet there is already enough research to start applying the theory.
The classic is Eugene M. Izhikevich, Dynamical systems in neuroscience: the geometry of excitability and bursting, 1967. A lot more work by Bard Ermentrout. Topological methods are pretty well established in computational neuroscience, but there is still little overlap with the deep neural nets that are now commonplace.
You might have the wrong date. I have an edition of Dynamical Systems in Neuroscience from 2007 and, having met the author, I'd imagine he might have just been born in 1967.
I would love to have a book about the brain that separates the abstract guiding principles from the incidental details of implementation. Every book I have tried quickly devolves into "calcium pumps" and similar stuff, but I feel these are just one specific evolutionary implementation of some higher-level "principle". Sure, it is important to understand the details if you want to be an expert. But as a curious layman I would so much enjoy a book that explains (even incompletely, based on our current knowledge) how the brain does thinking, perception, learning... in terms of networking. In terms of strengthening connections between units that fired together and changing the connectivity parameters of the whole (sub)network, but without delving into the chemistry of synaptic transmission and hormones. The general principles that human brains and artificial neural networks have in common. If anyone knows of such a book, please let me know.
The reason why we don't have such a book is simply that nobody has the slightest clue as to how things work at the macro level. We got as far as figuring out a few dozen cells at a time, but beyond that it's really anyone's guess. We don't even know what the _questions_ should be in this field, let alone the answers. I suspect it's also strongly discouraging to newcomers, because things start looking very obviously intractable very early on: modeling large numbers of neurons sufficiently well is mathematically near impossible; readout is mostly destructive to the specimen (and in ghastly ways - the specimen has to be alive and awake for it to even be possible), and completely impossible in humans, though Neuralink might provide a much needed boost there. Speaking of Neuralink, I think this un-developedness of the field is why Musk is pursuing it. This is, in many ways, the _real_ last frontier: understanding ourselves. There won't really be much left to understand beyond that, since it is but a small step from full understanding to the ability to replicate, and possibly eclipse, what we have.
We don't even know how a single cell works. Textbooks focus on a few molecular mechanisms, and we can presume that other similar mechanisms exist, but the real reason for that focus is that this small pocket of knowledge is the only thing we understand.
This isn’t really true. It’s completely feasible to simulate large numbers of biologically plausible neurons these days at varying levels of accuracy. Brian 2 is one widely used package for that.
Not just "plausible". I'm talking about near exact, not just "plausible" simulation of the signals they will consume and produce, in time domain, at an arbitrary time resolution. There's no "package" for that.
You can keep adding compartments and solving Hodgkin-Huxley at small time-steps to get basically arbitrary resolution. It definitely gets a lot more expensive than simpler models (and there's some evidence that simpler models converge to the same result).
The issue is that we don’t know if our neuron models are complete. Massive simulations are the easy part—computers are very, very fast and large these days.
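For concreteness, here is roughly what the Hodgkin-Huxley approach looks like for a single compartment. This is a hedged sketch using the textbook squid-axon constants, not any particular package's implementation; the injected current, step size, and duration are arbitrary.

```python
from math import exp

# Single-compartment Hodgkin-Huxley model, forward Euler integration.
# I is the injected current (uA/cm^2), dt the step in ms.
def hh(I=10.0, dt=0.01, T_ms=50.0):
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    spikes, above = 0, False
    for _ in range(int(T_ms / dt)):
        # voltage-dependent gating rates (1/ms)
        am = 0.1 * (V + 40) / (1 - exp(-(V + 40) / 10))
        bm = 4.0 * exp(-(V + 65) / 18)
        ah = 0.07 * exp(-(V + 65) / 20)
        bh = 1.0 / (1 + exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - exp(-(V + 55) / 10))
        bn = 0.125 * exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I - I_ion) / C
        # count upward threshold crossings as spikes
        if V > 0 and not above:
            spikes, above = spikes + 1, True
        elif V < -20:
            above = False
    return spikes

print(hh() > 0)   # sustained suprathreshold current produces spikes
```

Four coupled ODEs per compartment, evaluated at every time-step, is why multi-compartment, multi-cell simulations get expensive so quickly.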
They are also "discrete", which kind of gets in the way of modeling continuous processes in this case. And not very good about modeling trillions of things at the same time due to communication overhead (both latency _and_ energy). And all of that is the "easy" part. The hard part is we don't even know what the system needs to look like.
Yann LeCun said decades ago that in spite of all the progress there won't be anything artificial that's as smart as a rat in his lifetime. In 2021 that bet is still a safe one.
The continuous vs. discrete thing is largely moot in my opinion. The low-significant bits of any model of a physical process are going to be noise anyhow. You can model more accurately if there's some data that's missing, but the fact that the real world is continuous (ignoring that it's really not) doesn't really matter.
We can model many billions of things at once. Take a look at the SpiNNaker system. Sure, we're not yet at the level of simulating 100 billion biologically plausible neurons, but we can do a few billion, and the techniques are likely to scale well.
One of the attempts to bring ANNs closer to biology is Spiking Neural Networks, where they introduce the timing component, but so far they're harder and less efficient to train than the non-spiking networks but can have the similar accuracy.
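A minimal sketch of the spiking idea, using a leaky integrate-and-fire neuron, the simplest unit commonly used in SNNs (all constants here are illustrative): the membrane potential leaks toward rest, integrates input, and emits a discrete spike on crossing threshold, so the information is carried by spike *times* rather than continuous activations.

```python
# Leaky integrate-and-fire neuron with reset-on-spike.
def lif(inputs, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0):
    v, spikes = v_rest, []
    for i, x in enumerate(inputs):
        v += dt * (-(v - v_rest) + x) / tau   # leak + input integration
        if v >= v_th:
            spikes.append(i)   # the spike *time* is the output
            v = v_rest         # reset after spiking
    return spikes

print(lif([0.0] * 100))           # no input -> no spikes: []
print(len(lif([2.0] * 100)) > 0)  # constant drive -> regular spiking
```

Training such units is hard precisely because the spike is a discontinuity: there is no smooth gradient through the threshold, which is why surrogate-gradient and conversion tricks are needed.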
Note that compared to real neurons this is very, very crude, and it does not replicate their functionality in full. The key thing to understand is that a real neuron is not a constant thing, it is subject to its environment, meaning that if it receives a lot of spikes, for example, it may become temporarily depleted, and that very depletion will play a role in how the bigger system works. It also does not have "time steps" - it processes the signal continuously. In general, as in any biological system, everything depends on everything else, often in circuitous, ambiguous, non-obvious ways, which makes things extremely difficult to model and/or understand.
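The depletion effect described above can be sketched with a toy resource variable, loosely in the spirit of Tsodyks-Markram short-term depression (the constants below are made up): each presynaptic spike consumes part of a finite resource that recovers slowly, so a rapid spike train produces progressively weaker responses.

```python
from math import exp

# Toy synaptic depression: a resource r in [0, 1] recovers with time
# constant tau_rec and each spike uses up fraction U of what remains.
def responses(spike_times, U=0.5, tau_rec=200.0):
    r, last_t, out = 1.0, 0.0, []
    for t in spike_times:
        # resource recovers exponentially between spikes
        r = 1.0 - (1.0 - r) * exp(-(t - last_t) / tau_rec)
        out.append(U * r)      # response scales with available resource
        r -= U * r             # the spike consumes part of the resource
        last_t = t
    return out

fast = responses([0, 10, 20, 30, 40])   # a rapid train (ms)
print(fast[0] > fast[-1])               # responses get weaker: True
```

The point of the sketch is the history dependence: the same input spike produces a different output depending on what the neuron has recently been through, which ANN units simply don't have.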
What this means from the research perspective: it's like that joke with a drunk searching for car keys under a streetlamp because it's easier to search there. People make progress where they can until someone unusual comes along and installs a new "streetlamp" elsewhere. 100% of research and engineering (and life in general) works this way, not just neuroscience. It's just that at the moment the street lamps in this field are few, dim, and very far between.
Strange title for an article full of scientific discoveries.
I'm certain most laypersons have a good intuition of how complex a brain is, yet most would be surprised to learn, while reading this very article for instance, how much we already know about the brain despite that complexity.
Sure, mapping neurons and/or simulating them looks desperately hopeless. But not any more than mapping matter in space and simulating particles (remember the 3 body problem?), yet we managed to understand physics and space quite well despite this.
The 3-body problem is not a problem for numerical approaches at all; it's not much more difficult than the 2-body problem. The difficulty is in finding a closed-form analytical solution.
Simulating even a single transistor used in modern chips is extremely complex if you care about physical effects involved in its operation. Yet a transistor is just a simple on/off switch in the context of computer architecture.
A really cool article summarizing decades of amazing research by some very smart and determined minds.
Would it be possible to walk back the evolutionary chain of events that leads to a specific brain structure?
For example, one of the scientists says that the connectome mapping might be on the order of petabytes of petabytes of data.
But it all originated from a seemingly 'simple' millions or billions of DNA base pairs. Would it make sense to trace back from a system-design or theoretical-limits perspective? The chain of chemical reactions from DNA to these neurons and so forth... the combinatorics from a DNA perspective, while very complex, seem manageable compared to mapping the neurons after they have developed into a full 'creature'.
I would guess that such a project is at least one hundred years away, based on its difficulty and the progress we have made in the last one hundred years (assuming a similar rate of progress, and that is not a sure thing).
It is hard to describe how difficult this would be.
Fractal patterns generated from small sets of rules can be massively complex; in many cases the final structure bears little resemblance to the inputs.
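For instance, a two-rule L-system (a rewriting system often used to model plant growth) already shows how a tiny "genome" produces structure that looks nothing like its inputs:

```python
# An L-system whose entire "genome" is two rewrite rules.
rules = {"A": "AB", "B": "A"}

def grow(axiom, steps):
    s = axiom
    for _ in range(steps):
        # rewrite every symbol in parallel each generation
        s = "".join(rules.get(c, c) for c in s)
    return s

print(grow("A", 5))          # ABAABABAABAAB
print(len(grow("A", 10)))    # 144 symbols from a 1-symbol seed
```

Running the rules forward is trivial; inferring the rules from a late-generation string, which is the analogue of reading a brain back out of a genome's developmental program, is the hard direction.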
The take that "since we can't completely understand how a system of a few hundred neurons works, we can hardly expect to figure out a brain of 86 billion neurons" is a little irritating.
1. A similar argument might run: we can hardly understand all the complex interactions going on at the molecular level at the seashore, so how can we possibly model beach erosion? But the truth is that we don't always have to model everything to begin to understand how a system works, at least under a wide range of situations.
2. But it also seemingly ignores that a lot of those 86 billion neurons have the same functional roles. On the one hand, it's one way to build redundancy into the system (probably important for a creature living more than half a century and intimately involved in long-term care of offspring); on the other, more neurons can offer more precision (e.g. a large population of neurons with varying tuning curves can give you more precise estimates of values).
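The precision claim in point 2 is easy to sanity-check with a statistical toy (not a tuning-curve model): averaging the readings of more noisy "neurons" that all encode the same value shrinks the estimation error roughly like 1/sqrt(N).

```python
import random

# RMS error of a population-average estimate of a fixed value,
# where each "neuron" reports the value plus independent noise.
def estimate_error(n_neurons, true_value=1.0, noise=0.5, trials=2000):
    rng = random.Random(0)
    sq_err = 0.0
    for _ in range(trials):
        readings = [true_value + rng.gauss(0, noise)
                    for _ in range(n_neurons)]
        est = sum(readings) / n_neurons
        sq_err += (est - true_value) ** 2
    return (sq_err / trials) ** 0.5

print(estimate_error(10) > estimate_error(1000))  # more neurons, less error
```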
I want to throw a question out there because I haven't really ever done any neural net modeling of any sort, but I did design a CPU in grad school, and one of the things that really had to be carefully handled is hazards, i.e. the presence of "incorrect" intermediate values in combinational circuits. This is a kind of circuit coordination problem, and in CPU design it is handled by minimizing circuit paths and using a clock so that only "final" circuit outputs are used.
Surely there must be some similar sorts of hazards in the brain. How do brains solve the coordination problem?
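For reference, the classic digital version of such a hazard can be sketched in a unit-delay gate model: a 2:1 mux F = (A AND S) OR (B AND NOT S) with A = B = 1 should hold F = 1 through a select transition, but if the inverter on S is one gate-delay slower than the direct path, the output glitches low (a static-1 hazard). This is purely the digital illustration; whether brains avoid or simply tolerate the analogue of this is exactly the open question.

```python
# Unit-delay model of a 2:1 mux with a slow inverter on the select line.
def simulate_mux(S_wave):
    A = B = 1                       # both data inputs held high
    notS_prev = 1 - S_wave[0]       # inverter output lags by one step
    out = []
    for S in S_wave:
        out.append((A & S) | (B & notS_prev))
        notS_prev = 1 - S           # inverter catches up next step
    return out

print(simulate_mux([1, 1, 1, 0, 0, 0]))   # [1, 1, 1, 0, 1, 1] - the 0 is the glitch
```

The clocked-CPU fix is to sample F only after both paths have settled; an asynchronous system has to guarantee by construction that no one looks at F during the glitch window.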