
> If you're concerned that humans are as smart as it's possible to be

It's not about humans being as smart as possible, though; it's more about being "smart enough" that a hypothetical "smarter than human AI" is not analogous to a nuclear bomb. That is, are we smart enough that a super-AGI can't come up with anything fundamentally new that humans aren't capable of coming up with, as bounded by the fundamental laws of nature?

> then I would recommend reading Thinking, Fast and Slow or some other book on cognitive psychology

I'm reading Thinking, Fast and Slow right now, actually.

And just to reiterate this point: I'm not arguing for this position, just putting it out there as a thought experiment / discussion topic. I'm certainly not convinced this is true; it's just a possibility that occurred to me earlier while reading TFA.




When I say I'm "thinking out loud" what I mean is, the exact words I used may not reflect the underlying point I was getting at, because it was fuzzy in my head when I first started thinking about it. Reading all of these responses, it's clear that most people are responding to something different than the issue I really meant to raise. Fair enough, that's my fault for not being clearer. But that's the value in a discussion, so this whole exercise has been productive (for me at least).

These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.

I've read and re-read TFA and I don't find that it addresses the issue I'm thinking about. It's not so much asking "are we the smartest possible creature", or even asking if we're close to that. It's also not about asking whether or not it's possible for a super-AGI to be smarter than humans.

The issue I was trying to raise is more of "given how smart humans are (whatever that means) and given whatever the limit is for how smart a hypothetical super-AGI can be, does the analogy between a super-AGI and a nuclear bomb hold? That is, does a super-AGI really represent an existential threat?"

And again, I'm not taking a side on this either way. I honestly haven't spent enough time thinking about it. I will say this though... what I've read on the topic so far (and I haven't yet gotten to Bostrom's book, to be fair) doesn't convince me that this is a settled question. Maybe after I finish Superintelligence I'll feel differently though. I have it on my shelf waiting to be read anyway, so maybe I'll bump it up the priority list a bit and read it over the holiday.


So, here's a random thought on this whole subject of "AI risk".

Bostrom, Yudkowsky, etc. posit that an "artificial super-intelligence" will be many times smarter than humans, and will represent a threat somewhat analogous to an atomic weapon. BUT... consider that the phrase "many times smarter than humans" may not even mean anything. Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be. Or close enough that being "smarter than human" does not represent anything analogous to an atomic bomb.

So this might be an interesting topic for research, or at least for the philosophers: "What's the limit of how 'smart' it's possible to be"? It may be that there's no possible way to determine that (you don't know what you don't know and all that) but if there is, it might be enlightening.


Even if human intelligence was the pinnacle, AI could be still extremely dangerous just by running at accelerated simulation speed and using huge amounts of subjective time to invent faster hardware. See https://intelligence.org/files/IEM.pdf for discussion. The point is moot anyway though, since the hypothesis (that humans are the most intelligent possible) is just severely incompatible with our current understanding of science.

>general intelligence "smarter" than humans

I read a book recently called "Are We Smart Enough to Know How Smart Animals Are?"[1] Very fascinating thought that, in my mind, redefines what I think of as "intelligence". The book Superintelligence[2] gets a little bit into this as well.

Basically, if we define "smarter than humans" as "able to do everything humans do, in the manner humans do them, but faster", then we have no idea what intelligence actually means. AI will be another species (not just a better human) and its intelligence would have to be judged based on its own merits.

Perfectly modeling the human brain in computer form is one avenue that computer science is working on. Artificial intelligence is a completely different path, and does not have to model human intelligence in any way.

[1] https://smile.amazon.com/Are-Smart-Enough-Know-Animals/dp/03...

[2] https://smile.amazon.com/Superintelligence-Dangers-Strategie...


It may not be a question of 'can't' or 'can', it may be a question of how fast.

Someone that can solve complicated puzzles quickly is perceived as smarter than someone who can solve those very same puzzles but slower.

Even though to all practical intents and purposes they would be equally intelligent.

All the tests by which we measure ourselves have a time limit associated with them. Society is quite biased against 'slow' people, even though they may be just as smart as the rest.

I'm skeptical about claims that we will be able to engineer an intelligence that is 'smarter' than we are, but I'm open to the possibility that we can engineer one that is as intelligent as we are but simply much faster.

I don't expect that to happen any time soon though (soon as in the next 50 to 200 years or so), and maybe we won't be able to do this at all.

But one small illustration of our ability to make tools that make things we cannot make ourselves is visible in the semiconductor industry, where we have built computers that make the next generation of chips, and so on. If you wanted to restart today from 1970s technology, with all the knowledge we have about making chips, it would still take quite a bit of time to get back to the present state of the art.

Maybe something similar holds for 'intelligence': once you achieve a certain base level of intelligence that runs faster than our own, it can try many more avenues than we are capable of, which might yield results faster and lead to incremental improvements in the functioning of that intelligence over time.
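
Just to make that compounding intuition concrete, here's a toy sketch (Python; the 10% per-cycle gain and the cycle count are invented purely for illustration, not a claim about real systems):

    # Hypothetical toy model: each self-improvement cycle speeds the system
    # up by an assumed factor r. All numbers here are made up.
    speed = 1.0    # effective speed relative to a human baseline
    r = 0.10       # assumed 10% gain per cycle
    for cycle in range(50):
        speed *= (1 + r)
    print(f"after 50 cycles: ~{speed:.0f}x baseline")  # ~117x

Even modest per-cycle gains compound quickly if nothing caps them, which is really the crux of the disagreement.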

One big factor here could be that an AI could draw on a 'perfect memory': once something is stored it would be accessible forever at very high speeds. That alone would give it a tremendous edge.


If smart, intelligent humans are the ones who are going to make artificial superintelligence, wouldn't that make us more intelligent than it? Unless an AI can create more intelligent versions of itself on its own, I have second thoughts about believing this idea.

I think most people didn't really understand the meaning of your comment. They seem to all equate intelligence and processing speed.

I think it's legitimately an interesting question. As in, it could be something like Turing completeness. All Turing complete languages are capable of computing the same things; some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI would think on the same level as us, just faster. In that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different than people.
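
To make the Turing-completeness analogy concrete, here's a minimal sketch (Python, purely illustrative): two procedures that compute the same answer, one in a single step and one laboriously, which is the sense in which "same capability, just slower" is meant:

    def add_fast(a, b):
        # the "fast thinker": one step
        return a + b

    def add_slow(a, b):
        # the "slow thinker": same answer, reached one increment at a time
        total = a
        for _ in range(b):
            total += 1
        return total

    # Same result either way; only the number of steps differs.
    assert add_fast(1234, 5678) == add_slow(1234, 5678)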

This is not the same as the difference between chimps and humans. We are fundamentally on another level. A chimp, or even a million chimps, can never accomplish what a person can. They will not discover abstract math, write a book, speak a language.

Mind you, I suspect this is not the case. I suspect that a super intelligent AI will be able to think of things we can never hope to accomplish.

But it is an interesting question that I think is worth thinking about, rather than inanely downvoting the idea.


Two points:

* We have no idea of how to measure intelligence, and we have no way of deciding whether thing X is more or less intelligent than thing Y. (The article makes this point.) Therefore, superintelligence is perhaps a bogus concept. I know it seems implausible, but perhaps there can never be something significantly more intelligent than us.

* Nevertheless, mere human-like intelligence, if made faster, smaller and with better energy-efficiency, could cause the same runaway scenario that people worry about: imagine a device that can simulate N humans at a speed M times faster than real time, and those humans designed and built the device and can improve its design and manufacture.
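
A back-of-envelope version of that second point (Python; N, M, and the time span are arbitrary placeholders, not predictions):

    # If a device simulates N human-level designers at M times real time,
    # it accumulates N * M subjective designer-years per calendar year.
    N = 1_000   # simulated designers (assumed)
    M = 100     # speedup over real time (assumed)
    years = 1   # wall-clock years
    print(N * M * years)  # 100000 subjective designer-years per calendar year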

In general, I see a lot of merit in the arguments on both sides of this discussion.


Unlike your warp drive or teleporter examples, we're pretty sure human-level AI is possible because human-level natural intelligence exists. The brain isn't magic. Eventually, people will figure out the algorithms running on it, then improve them. After that, there's nothing to stop the algorithms from improving themselves. And they can be greatly improved. Current brains are nowhere near the pinnacle of possible intelligences.

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

— Nick Bostrom. Superintelligence: Paths, Dangers, Strategies[1]

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...


Have you watched this video? https://www.youtube.com/watch?v=xoVJKj8lcNQ

(You don't have to, of course. An hour-long video recommended by a stranger on the internet is a big ask)

There's a ton of evidence that this is moving way faster than people think. I don't think there's anything particularly special about human intelligence. And time and time again, we've been able to copy things that nature does and make them way more powerful, like planes vs birds. It does the same thing, but in a completely different way. It would be surprising if we couldn't do it with brains.

I feel like a lot of people are not going to admit that we have already created AGI until we literally have an unstoppable AI that has taken over everything. It's not human level everywhere, but it's way better than humans in a lot of ways. It's never going to be exactly human; it'll work differently than we do. But if you count intelligence as the ability to achieve goals, which is how most researchers define it, then we're already pretty much there.


Yes, that's what I'm saying. If you can make an AI that's roughly human IQ (even like 80 IQ), but thinks 100x faster than a human, then that's something very much like, if not identical to, "a superhuman AI."

So when you say, "Here's how we'll get superhuman AI: We'll network together bunches of 80 IQ AI's that think 100x faster than a human," it's kind of assuming its own solution.


That seems like a bit of a stretch.

Who's to say strong AI will even be significantly smarter than us anyway? What if humans are almost as smart (obviously there's some room for improvement) as it's physically possible to be? Truth be told we really don't know that much about consciousness or intelligence.


Atom bombs are just tools, they are only controlled by humans. But a super intelligence can be an actor, not just a tool. This changes everything. It might be vastly more intelligent than humans. It wouldn't be limited by skull size, meager energy provided by food, or low spike frequency of biological neurons. The difference might be as large as between humans and chimpanzees, or humans and mice, or even humans and ants. There is no reason to expect we are anywhere near the theoretical peak of intelligence.

I lean toward the view that for information theoretic reasons the availability of meaningful information (training data) is likely the fundamental constraint on any rapid explosion of intelligence.

That being said I don’t think you need a god-like superintelligence to be more intelligent than humans. You just need something marginally better that can remain focused longer and doesn’t tire. As to whether that represents a danger to humans I think it depends on what we do with it and/or what kind of society or environment we embed it within. If we train or prime it to compete and dominate that’s what it will do. Same as with humans who are more criminal and violent when raised in unstable or abusive homes.


Yeah. Maciej wrote a pretty good piece rebutting AI alarmism and kind of alludes to that as one of several points.

http://idlewords.com/talks/superintelligence.htm

> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.

and

> But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.

And so on. It's a good read.


I understand the concept of creating something exceedingly more generally intelligent than its creator, I'm simply suggesting it's not possible. Many people assume that it is, and we'll have to agree to disagree. But even if I'm wrong and it does become possible, think about how unlikely it would be for a human to accidentally accomplish this.

Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?


If you want to argue that it's logically impossible for a smarter-than-human intelligence to be created with values aligned with human ones, then cool, do that. (Note that a smart human on much faster hardware is smarter-than-human. This is not a claim that AGIs must be mere faster humans; it's one example to point out a difficulty for any such argument.)

But I think you gave a false impression of others at the start of this thread.


Here's why I disagree: smart human beings evolved from not-very-smart primates.

That proves there is a natural process by which greater intelligence can be created.

Therefore there is no reason that an even greater intelligence cannot be created with help from man. And for singularity's sake, super-human intelligence doesn't even have to be an AI; genetically-engineered super-intelligent primates would do the trick as well. The idea is that once you've created something smarter than yourself, no matter how you did it, it will then be able to figure out how to make something even smarter. And so on. That's the singularity.


A good point, and I should have been clearer about that.

I do think it's reasonable to talk about intelligence 'growing', and consequently about one intelligence 'surpassing' another. But AI's methods of thinking certainly won't be human, and it may reach human-parity on different metrics at very different times. Hell, we're seeing some of that already: AI can do I/O and data processing at superhuman speeds, but humans can still extract much more knowledge from a small amount of data.
