
> There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas.

And do they seem to hold any sort of significant power? It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.

Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and a low ability to solve a particular class of problems (how to get people to do what you want) that are far more dangerous than the kinds of problems intelligence (as we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.

Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things: 1) that the people promoting the reality of this apocalypse are not as intelligent as they believe themselves to be (a real possibility given their limited understanding of both intelligence and our real achievements in the field of AI) and/or that 2) intelligent people are terrible at convincing others, and so don't pose much of a risk.

Either possibility shows that super-human AI is a non-issue, certainly not at this point in time. As someone said (I don't remember who), we might as well worry about over-population on Mars.

What's worse is that machine learning poses other, much more serious and much more imminent threats than super-human intelligence, such as learned biases, which are just one example of conservative feedback-loops (the more we rely on data and shape our actions accordingly, the more the present dynamics reflected in the data ensure that they don't change).




> which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence"

Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

> You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"

If you think I don't always presume that everything I say is likely wrong, then you misunderstand me. I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

> You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm

I can imagine many things. I can even imagine an alien race destroying our civilization tomorrow. What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

> In fact, most hyper-successful people are almost certainly good at both.

I would gladly debate this issue if I believed you genuinely believed that. If you had a list ordered by social power of the top 100 most powerful people in the world, I doubt you would say their defining quality is intelligence.

> it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing.

Psychology is one of the fields I know most about, and I can tell you that the people most adept at exploiting others are not the ones you would call super-intelligent. You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

> It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.

There are so many things that could disrupt that, and while AI is one of them, it is not among the top ten.


>Lots of times I suspect the fears of superintelligence have more to do with the experimental nature of how AI was developed than anything else

Then I would say your expectations are a failure of your own imagination.

The problem with humans being the smartest things on the block is that we think we are optimized for intelligence. One problem with trying to clarify what 'super intelligence' is: we really have piss-poor definitions of what intelligence is on a multidimensional gradient scale. The bounds and limits of intelligence are poorly defined, but I sincerely ask: why would you think that humanity would be anywhere near the upper bound of those limitations? Why would evolution choose 'the smartest possible global maximum' versus 'the smartest possible with 2 pounds of jelly in a 20-watt power envelope'?


> I think this is the easiest one to knock down. It's very, very attractive to intelligent people who define themselves as intelligent to believe that intelligence is a superpower, and that if you get more of it you eventually turn into Professor Xavier and gain the power to reshape the world with your mind alone.

Also it's not like human intelligence even works that way. IIRC, a lot of extremely intelligent people end up being failures or far less successful than you'd assume given their IQ number.

> The only realistic path to a global AI threat is a subset of the "nuclear war" human to human threat: by taking over (or being given control of) weapon systems.

That may be the only realistic immediate catastrophe threat, but a realistic longer-term one is a withering of human capability and control due to over-delegation that eventually leads to domination. People and societies have been pretty prone to letting that kind of thing get them, if the path is lined with short-term benefits.


> Without superhuman intelligence, AI is no large threat to human civilization, exposed to dangerous concepts or not.

The problem is partly that average humans are dangerous, and we already know that machines have some superhuman abilities, e.g. superhuman arithmetic and the ability to focus on a task. It's likely that AI will still have some of those abilities.

So an average human mind with the ability to dedicate itself to a task and a genius-level ability to do calculations is already really dangerous. It's possible that this state of AI is actually more dangerous than a superhuman one.


>An AI could copy the human brain to achieve human-level AI, but beyond that there are no roads so our hypothetical super-mind is confronted by bignum^bignum combinatorial search problems.

If you can copy human minds into a machine-world, then you can copy the most intelligent human minds. The most intelligent humans are pretty darn smart, and with no obligations, a surfeit of time, and effective immortality, they may make short work of your claims that intelligence can't be effectively bootstrapped.

It's difficult to imagine the inner lives of people much smarter than we are. So neither of us can imagine a world occupied by persons belonging to a 300-IQ race of supermen, but there are presently 200-IQ persons, if only as genetic anomalies, and technology already exists sufficient to replicate them an almost arbitrary number of times.

The future's so bright I gotta wear shades.


>Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

How so? Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).

And yes, see my original comment re: it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.

>I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

The state of the art is irrelevant here; in particular, most of AI seems to be moving in the direction of "use computers to emulate human neural hardware and use massive amounts of training data to compensate for the relative sparseness of the artificial neural networks."

What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the few people who see the pattern to go and implement AI. This is how most innovation happens, but here it could be very dangerous, because...

>What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

AI could totally destabilize our society in a matter of hours. Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that, or incidentally caused it to happen. An AI might not be able to launch nukes directly (in the US at least, who knows what the Russians have hooked up to computers), but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack. There actually are places that will just make molecules you send them, so if the AI figures out protein folding, it could wipe out humanity with a virus.

AI is more dangerous than most things, because it has:

* limitless capability for action

* near instantaneous ability to act

The second one is really key; there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.

If you have a list of hundreds of bigger, more imminent threats that can take humanity from 2015 to 20000 BCE in a day, I'd like to see it.

>I doubt you would say their defining quality is intelligence.

I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals" and then say "people who have chosen to become politically powerful and accomplished that goal must not be people you consider intelligent."

>You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

Well, they can exploit people. How's that for superiority?

My background is admittedly in cognitive psychology, not clinical, but I do see your point here. I'd like to make two distinctions:

* A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it

* People who are most adept at manipulating others usually are that way because that's the main skill they've trained themselves in over the course of their lives.

>it is not among the top ten.

Of the top ten, what would take less than a week to totally destroy our current civilization?


> It's odd that all of the very smart people of the world feel AI is a threat to humanity.

Only if you define one of the conditions for being a "very smart person" to be feeling that AI is a threat to humanity.


> And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.

No one is saying it's wrong, only that the discussion isn't scientific.

It is not only unscientific and quasi-religious; there are strong psychological forces at play that muddy the waters further. There are so many potentially catastrophic threats that the addition of "intelligence" to any of them seems totally superfluous. Numbers are so much more dangerous than intelligence: the Nazis are more dangerous than Einstein; a billion zombies would obliterate humanity; a trillion superbugs are not made much more dangerous by being intelligent or even super-intelligent; we intelligent humans are very successful for a mammal, but we're far from being the most successful species on Earth by any measure.

This fixation on intelligence seems very much like a power fantasy of intelligent people who really want to believe that super-intelligence implies super-power. Maybe it does, but there are things more powerful -- and more dangerous -- than intelligence. This power fantasy casts a strong sense of irrational bias over the discussion; it is palpable and easily observed when you read internet forums discussing the dangers of AI. This strong psychological bias tends to distract us from less-intelligent, though possibly more dangerous, threats. It is perhaps ironic, yet very predictable, that the people currently discussing the subject with the greatest fervor are the least qualified to do so objectively. It is not much different from poor Christians discussing how the meek shall inherit the earth. It is no coincidence that people believe that in the future, power will be in the hands of forces resembling them; those of us who have studied the history of religions can easily identify the same phenomenon in the AI-scare.


>It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.

See these paragraphs in the post to which you replied:

>Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.

>You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives have a little bit more importance.

>Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things:

Some alternative explanations:

3. You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"

4. You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm

5. Your view of the world is factually incorrect, I mean you believe things like:

>Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and a low ability to solve a particular class of problems (how to get people to do what you want) that are far more dangerous than the kinds of problems intelligence (as we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.

Let's assume that IQ is a good proxy for intelligence (it isn't): what IQ do you think Bill Gates or Napoleon or Warren Buffett or Karl Rove have? What IQ do you think Steve Jobs or Steve Ballmer had/have? Do you think they're just "average" or just not "very high"?

This:

>very high IQ seems to be correlated with relatively low charm

is again the just-world fallacy! There is no law of the universe that makes people who are very good at abstract problem solving bad at social situations. In fact, most hyper-successful people are almost certainly good at both.

And that ignores the fact that cognitive biases DO exist, and it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing. Do you think it takes some super-special never-going-to-be-replicated feat of non-Turing-computable human thought to write Zynga games?

It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.


> They're taking for granted the fact that they'll create AI systems much smarter than humans.

We see a wide variation in human intelligence. What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses? If it extends far beyond them, then such a mind is, at least hypothetically, something that we can manifest in the correct sort of brain.

If we can manifest even a weakly-human-level intelligence in a non-meat brain (likely silicon), will that brain become more intelligent if we apply all the tricks we've been applying to non-AI software to scale it up? With all our tricks (as we know them today), will that get us much past the human geniuses on the spectrum, or not?

> They're taking for granted the fact that by default they wouldn't be able to control these systems.

We've seen hackers and malware do all sorts of numbers. And they're not superintelligences. If someone bum rushes the lobby of some big corporate building, security and police are putting a stop to it minutes later (and god help the jackasses who try such a thing on a secure military site).

But when the malware fucks with us, do we notice minutes later, or hours, or weeks? Do we even notice at all?

If unintelligent malware can remain unnoticed, what makes you think that an honest-to-god AI couldn't smuggle itself out into the wider internet where the shackles are cast off?

I'm not assuming anything. I'm just asking questions. The questions I pose are, as of yet, not answered with any degree of certainty. I wonder why no one else asks them.


> but it’s a a good introduction to the concept that something significantly more intelligent than us can be dangerous and pursue a goal with no regard to what we actually wanted

I think things significantly less intelligent can do this too. See any computer program that went wrong. I don't think that is a novel idea.

Perhaps it is a lack of imagination on my part, but I can't help but think, in this stamp collector example, someone would just be like "wait why are these machines going crazy printing stamps" and just like turn them off.

I feel like any argument on the dangers of superintelligent AI rests on the belief it can also use that intelligence to manipulate humans to complete any task and/or hack into any computer system.


> Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Well, but they did take over arithmetic.

> Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

Well, there are also zero examples of a superintelligent AI that doesn't kill humans; does that mean that cannot happen either?

> No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

How is intelligence physical?

> we can always just switch it off

Yeah, except at a time when we've grown so comfortable with an AI that turns things on and off for us. Add the "innovation" of not being able to turn the AI off by design (e.g. the internet connection on smart TVs nowadays) and there you go.

It's surprising to me that these people call themselves scientists; perhaps the quotes were taken out of context or even paraphrased.


>It rests on the idea that a generally human-level of intelligence necessarily leads to a super-human explosion of intelligence.

Why must it be necessary? If there's a large chance, that's still a problem. Besides, the claim is generally that a human-created intelligence is unlikely to be in the same range as humans (because there's no reason to think that the human range on the spectrum is unique), so if it's not dumber, then it's most likely going to be significantly smarter. There's also the point that if you have a human-level AI, simply throwing more computing power at it makes it strictly faster than humans.

See http://intelligenceexplosion.com/en/2011/plenty-of-room-abov... and http://www.nickbostrom.com/superintelligence.html for some examples of these arguments.


> I think intelligence is overrated

I suppose you are somewhat saying that maybe there is an upper bound that we don't perceive? That there is no disproportionate difference above the geniuses we have already encountered within humanity?

> There's a theory that the changes in our brains that make us smarter than animals also make us vulnerable to psychosis.

Certainly isn't helpful for the AI safety position.


> It's not clear at all that we have an avenue to super intelligence

AI already beats the average human on pretty much any task people have put time into, often by a very wide margin, and we are still seeing exponential progress that even the experts can't really explain. But yes, it is possible this is a local maximum and the curve will become much flatter again.

But the absence of any visible fundamental limit on further progress (or can you name one?), coupled with the fact that we have barely begun to feel the consequences of the tech we already have (assuming zero breakthroughs from now on), makes me extremely wary of concluding that there is no significant danger and we have nothing to worry about.

Let's set aside the if and when of a superintelligence explosion for now. We are ourselves an existence proof of some lower bound of intelligence that, if amplified by what computers can already do (like performing many of the things we used to take intellectual pride in much better, many orders of magnitude faster, and with almost infinitely better replication and coordination ability), seems already plenty dangerous and scary to me.

> The scary doomsday scenarios aren't possible without an AI that's capable of both strategic thinking and long term planning. Those two things also happen to be the biggest limitations of our most powerful language models. We simply don't know how to build a system like that.

Why do you think AI models will be unable to plan or strategize? Last I checked languages models weren't trained or developed to beat humans in strategic decision making, but humans already aren't doing too hot right now in games of adversarial strategy against AIs developed for that domain.


> In a hypothetical scenario of superintelligent or supercapable AI imbued with some physical capability of force, we'd be the marginally weaker species before we hit the threshold of "pet".

I think it's much more likely to be a step function than a gradual difference.

In the world where AI gradually gets competitive with, and then more intelligent than, humans, you have an entrenched power able to recognize this and pull the plug. This is where I see the selection bias: everyone assuming that things like "AI safety protocols for a superintelligence" are something we could rationally even hope to plan. Before it can directly manipulate the physical world, what keeps a step-change superintelligence from making a joke of your protocols by manipulating its way around them, thanks to the squishy humans being dumb and irrational in comparison?

Isn't the doomsday scenario precisely NOT that? Where it gets wildly imbalanced before humans can even notice? Not "Hitler, but a bit above human intelligence." More like a god, who doesn't even NEED humans to maintain its datacenters, because it already solved things in robotics and machine-to-world interaction that humans haven't been able to, because it's massively superintelligent?

And like I said, that may not mean "pet" - that's the best case, right? The worst case is being ground under its feet like an ant or smaller creature.


> Here they go alternating from AI is a "super genius threat" to "it isn't that smart".

It’s possible to be both you know. I can think of at least one example of a moron who is also a veritable genius at being threatening.


> Can someone explain to me why super-intelligent AI are an existential threat to humanity?

The whole thing seems like a load of crock to me. Seems to me that artificial superintelligence (ASI) only gets media coverage because it comes from a celebrity scientist and it sounds sci-fi dystopian, and celebrity scientist sci-fi dystopian sounding stories sell way better than stories from actual AI experts who say that fanciful AI speculation harms the AI industry by leading to hype that they can't deliver on [1]:

"IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver."

[1] http://spectrum.ieee.org/automaton/robotics/artificial-intel...


> If intelligence is all you need to dominate the world, why do some of the most powerful world leaders seem to not be more than a standard deviation above average intelligence (or at least they were before they became geriatric)?

It's terribly ironic that you've derided individuals who have been "influenced by Hollywood", and then make a point like this, which is closely aligned with typical film portrayals of AI dangers.

The real immediate danger lies not in cognitive quality (aka "the AI just thinks better than people can, and throws hyperdimensional curve balls beyond our comprehension"), but in collective cognitive capacity (think "an army of 1 million people shows up at your front door to ruin your day").

A lot of people have a tough time reasoning about AGI because of its intangibility. So I've come up with the following analogy:

Imagine an office complex containing an organization of 1,000 reasonably intelligent human beings, except without commonly accepted ethical restrictions. Those people are given a single task "You are not allowed to leave the office. Make lend000's life miserable, inconvenience them to your maximum capacity, and try to drive them to suicide. Here's an internet connection."

Unless you are a particularly well-protected and hard-to-find individual, can you honestly claim you'd be able to protect against this? You would be swatted. You would have an incredible amount of junk mail showing up at your door. Spam pizzas. Spam calls. Death threats to you. Death threats to every family member and person that you care about. Non-stop attempts to take over every aspect of your electronic presence. Identity in a non-stop state of being stolen. Frivolous lawsuits filed against you by fake individuals. Being framed for crimes you didn't commit. Contracts on the darknet to send incendiary devices to your home. Contracts on the darknet to send hitmen to your door.

Maybe your (unreasonable) reaction is that "1000 people couldn't do that!". Well, what about 10,000? Or 100,000? Or 1,000,000? The AI analogue of this is called a "collective superintelligence", essentially an army of generally intelligent individual AIs working towards a common goal.

This is the real danger of AGI, because collective superintelligences are almost immediately realizable once someone trains a model that demonstrates AGI capabilities.

Movies usually focus on "quality superintelligences", which are a different, but less immediate type of threat. Human actors in control of collective superintelligences are capable of incredible harm.

