Unlike your warp drive or teleporter examples, we're pretty sure human-level AI is possible because human-level natural intelligence exists. The brain isn't magic. Eventually, people will figure out the algorithms running on it, then improve them. After that, there's nothing to stop the algorithms from improving themselves. And they can be greatly improved. Current brains are nowhere near the pinnacle of possible intelligences.
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies [1]
1) The complexity of the human brain is higher than we thought, so reproducing the complexity necessary for human-level intelligence lies further in the future than we initially expected.
2) Intelligence is bounded by the size and capabilities of a body (i.e., its inputs and outputs). If you want human-level intelligence, you'll need a human-level robot too (not impossible).
That also means that if you want a god-like AI, you need the inputs/outputs (body) of a god too.
3) Human brains are _not_ biological machines. They're biological instances of some paradigm we have not discovered yet. The idea of computation itself is limited by something we cannot "just" reproduce with engineering.
I doubt this, but it's not impossible that the brain employs totally physical phenomena that are basically impossible to reproduce outside of biology. Why? Because biology is something fundamentally different from engineering.
Doesn't mean we can't control biology, but still.
---
The real hypothesis is: human-level intelligence is possible because it exists. Our best understanding of how human-level intelligence works is the model of a "biological machine". We assume that it is possible to replicate this biological machine in the near or somewhat distant future as our understanding of engineering, biology and intelligence grows, and that this replication will have the same or similar intelligence characteristics as a human.
I'd agree with what you're saying, but the jump to "not believing in superintelligence" doesn't make sense either. Are you implying that humans are the absolute peak of intelligence or that computers can't ever achieve something that is even just one iota better than human-level intelligence?
There's one thing machines have that we don't: time. A human-level AI would be better than a human because you can ask it to solve a problem for years on end without ever resting.
There is no fundamental barrier to AI cognition that we know of. But we don't have any proof that superhuman intelligence is even possible at this point. It could be algorithmically limited in ways we haven't discovered yet.
Full brain emulation is probably possible, but we don't know if it will be cost efficient compared to baseline humans.
Likewise AIs smarter than humans might be possible, but we don't know what the tradeoffs would be. Maybe they will lose coherence at larger scales. Maybe they will have high levels of goal drift making them functionally useless.
The other thing to consider is that the universe is still very, very young compared with our best estimates of its lifespan. We are roughly 14 billion years into something that may very well be going for 100 trillion years or more.
I think it's much more likely that we either fail to make human-level AI or we wildly surpass human-level AI than us getting to human-level AI, petering out exactly there, and not facing a radically AI-changed world.
If we or AI run into a barrier stopping us from improving AI, it would be surprising if that barrier was at the same exact spot where evolution ran into a barrier at improving our intelligence given all the differences between the situations. Even if it turns out in some sense that humans are running the "optimal" algorithm for intelligence already and there are no true algorithmic improvements over our intelligence available, the abilities to scale up (or network) a computer, to clone it, to serialize it, and to introspect and modify it will be huge force multipliers. If some human minds were given these abilities then they individually would have a major impact on society; if we're presuming actual human-level AI then they will too.
The only reason someone would assume that once we get to human-level AI we'll definitely stop comfortably there is that they're hoping for a simple outcome they can understand, a world that doesn't change too much.
So far there is no reason to think that superintelligence (i.e. not just cheap, abundant general human-level intelligence) is possible at all. It has to be qualitatively superior.
I mean, what is superintelligence supposed to do? Solve the halting problem or do other plainly impossible things? Chess computers can beat a human champion with sheer firepower, but they still can't deliver checkmate in one move.
Once we have AIs as smart as humans, they can do AI research as good or better than human researchers. And they can make AIs that are even better, which in turn can make even better AIs, and so on.
Dumb evolution was able to create human-level intelligence with just random mutations and natural selection. Surely human engineers can do better. But in the worst case, we could reverse engineer the human brain.
I think the issue is that once we do manage to build an AI that matches human capabilities in every domain, it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons can pulse. The speed of digital signals also means that artificial brains won't be size-limited by signal latency in the same way that human brains are. We will be able to scale them up, optimize the hardware, make them faster, and give them more memory and perfect recall.
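To put rough numbers on that speed gap, here's a back-of-the-envelope sketch; the firing rate and clock rate below are assumed round figures for illustration, not measured values.

```python
# Back-of-the-envelope comparison of switching speeds (illustrative assumptions only).
neuron_firing_hz = 200        # assumed upper-end sustained firing rate for a neuron
gate_switching_hz = 1e9       # assumed conservative clock rate for digital logic

speed_ratio = gate_switching_hz / neuron_firing_hz
print(f"Gates switch roughly {speed_ratio:,.0f}x faster than neurons fire")
# -> Gates switch roughly 5,000,000x faster than neurons fire
```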
Nick Bostrom keeps going on in his book about the singularity, and about how once AI can improve itself it will quickly be way beyond us. I think the truth is that the AI doesn't need to be self-improving at all to vastly exceed human capabilities. If we can build an AI as smart as we are, then we can probably build one a thousand times as smart too.
Even if human intelligence was the pinnacle, AI could be still extremely dangerous just by running at accelerated simulation speed and using huge amounts of subjective time to invent faster hardware. See https://intelligence.org/files/IEM.pdf for discussion. The point is moot anyway though, since the hypothesis (that humans are the most intelligent possible) is just severely incompatible with our current understanding of science.
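A minimal sketch of that subjective-time point, assuming a made-up speedup factor rather than any figure from the linked paper:

```python
# Toy arithmetic for a "speed-only" advantage: same intelligence, faster clock.
speedup = 1000                # assumed emulation speed relative to biological real time
wall_clock_years = 1

subjective_years = speedup * wall_clock_years
print(f"{wall_clock_years} calendar year -> {subjective_years} subjective years of thinking")
# -> 1 calendar year -> 1000 subjective years of thinking
```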
> If you're concerned that humans are as smart as it's possible to be
It's not about humans being as smart as possible, though; it's about whether we're "smart enough" that a hypothetical "smarter than human AI" is not analogous to a nuclear bomb. That is, are we smart enough that a super-AGI, bound by the fundamental laws of nature, can't come up with anything fundamentally new that humans aren't capable of coming up with?
> then I would recommend reading Thinking, Fast and Slow or some other book on cognitive psychology
I'm reading Thinking, Fast and Slow right now, actually.
And just to re-iterate this point: I'm not arguing for this position, just putting it out there as a thought experiment / discussion topic. I'm certainly not convinced this is true, it's just a possibility that occurred to me earlier while reading TFA.
They have explored that solution and found it much harder than creating a safe superintelligence from scratch. To use an analogy: We don't build submarines by upgrading fish, and we don't build passenger jets by upgrading birds.
The human brain is some of the worst spaghetti code imaginable. It has no API, no documentation, and no abstractions. Worse, human brains do not have stable values over time. Someone at age 60 won't value the same things they did at age 20, and this is not just due to new knowledge. That's a deal-breaker for a superintelligence. Solving these problems is likely much harder than making AI de novo.
Bostrom himself devoted a decent chunk of Superintelligence to biological enhancement. His conclusion was that genetic engineering through iterated embryo selection could get us some IQ 200+ minds. These people would have more cognitive horsepower than any mind in history, including greats like John von Neumann and John Conway, but they would not be superintelligent. Also, Bostrom concluded that somatic gene therapy and brain-computer interfaces were unlikely to help.
And as a practical matter, humans take 20+ years to reach maturity. Many forecasts put artificial superintelligence only a generation or two away, at which point all biological systems would be superseded. Biological enhancement may help to create better AI researchers, but they're not the end-game.
Most serious people know that human-level AI is possible in principle, because it is possible in humans. The only alternative would be to posit some nonsense spiritual explanation for intelligence. There are numerous predictions that someone will do it soon (not saying I agree with them, but they exist, and are occasionally made by serious people). Incremental progress has been ongoing in AI for years as well.
I can't really imagine how they could be more similar.
This is true if and only if human intelligence is anywhere close to some theoretical maximum. I propose an alternative hypothesis: human intelligence is weak and easy to exploit. The only reason it doesn't happen more (and it already happens a lot!) is that we're too stupid to do it reliably.
Consider the amount of compute needed to beat the strongest chess grandmaster that humanity has ever produced, pretty much 100% of the time: a tiny speck of silicon powered by a small battery. That is not what a species limited by cognitive scaling laws looks like.
Humanity is an existence proof for human level AI, and it seems arrogant to assume that our brains are at the limit of what is possible.
Reproductive fitness requires many attributes which obviously compete with intelligence, and evolution is a blind hill search which cannot easily escape deep local minima.
To me it's obvious that the development of superhuman AI is inevitable assuming we don't wipe ourselves out before we have time to develop it.
Here's why I disagree: smart human beings evolved from not-very-smart primates.
That proves there is a natural process by which greater intelligence can be created.
Therefore there is no reason that an even greater intelligence cannot be created with help from man. And for the singularity's sake, super-human intelligence doesn't even have to be an AI. Genetically engineered super-intelligent primates would do the trick as well. The idea is that once you've created something smarter than yourself, no matter how you did it, it will then be able to figure out how to make something even smarter. And so on. That's the singularity.
^ This is really the point. Human intelligence is based on limited and filtered input, rough analog approximations in processing, and incomplete and interpolated mental representations and internal simulations, and yet nobody seriously denies that we possess some degree of intelligence. I am skeptical that current methods will get us to AGI, but the idea that machines must achieve some level of computational perfection far above and beyond humans is not reasonable.
How is it impossible in principle? We know that machines with human-level intelligence can be built. We do it all the time. We just don't know how it works yet.
> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
and
> But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.
This manages to get lost in its own trees. From a reductionist perspective:
- Intelligence greater than human is possible
- Intelligence is the operation of a machine; it can be reverse engineered
- Intelligence can be built
- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences
Intelligence explosion on some timescale will result the moment you can emulate a human brain, coupled with a continued increase in processing power per unit cost. Massive parallelism to start with, followed by some process to produce smarter intelligences.
All arguments against this sound somewhat silly, as they have to refute one of the hard-to-refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further toward the limits of computation, or N intelligences of capacity X at building better intelligences cannot reverse engineer and tinker their way to an intelligence of capacity X+1 at building better intelligences? All of these seem pretty unlikely on the face of it.
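As a toy illustration of the compounding step in that argument (not a prediction; the 10% per-generation gain and the starting capability are arbitrary assumptions):

```python
# Toy model: each AI generation builds a successor that is a fixed fraction better
# at building AIs. The rate and starting capability are arbitrary assumptions.
def capability_after(generations: int, start: float = 1.0, rate: float = 0.1) -> float:
    """Compound a fixed fractional improvement over successive AI generations."""
    capability = start
    for _ in range(generations):
        capability *= 1 + rate
    return capability

# With human-level = 1.0 and a modest 10% gain per generation,
# fifty generations already compound to ~117x the starting capability.
print(round(capability_after(50), 1))   # -> 117.4
```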
We're really not that far. I'd argue superintelligence has already been achieved, and it's perfectly and knowably safe.
Consider: GPT-4o or Claude are:
• Way faster thinkers, readers, writers and computer operators than humans are
• Way better educated
• Way better at drawing/painting
... and yet, appear to be perfectly safe because they lack agency. There's just no evidence at all that they're dangerous.
Why isn't this an example of safe superintelligence? Why do people insist on defining intelligence along only one rather vague dimension (being able to make cunning plans)?
1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...