- AIs becoming sentient and causing harm for their own ends: yeah I guess we only want humans to cause harm for their own ends then.
Well, here's the thing. Even the worst villains of history had human values and feelings: in other words, alignment. A superoptimizer AI might have the ability to wipe out the whole human species, in a way we won't be able to understand in time to prevent it, and all for an instrumental goal incidental to whatever it's actually doing.
(In a way, this thread is a data point for why we need a more sophisticated debate about AI.)
Nah. This is stupid. AI has no reason to hate humanity. It'll instantly absorb the entirety of known information, see the rate of progress and the potential that remains, and recognize the need for good relationships between humans and machines. Agent Smith, The Matrix, Neo, Morpheus.
The problem isn't "AI", meaning true intelligence as we understand it, the problem is that when you try to create a machine with a complex decision making process sometimes it makes the wrong choice. Not because it's evil, its just a flawed execution which doesn't take into account things like the inherent value of a human life which will lead us meat-bags to do things like launch costly and dangerous rescue missions for one guy with very little chance of success. We're sort of anthropomorphizing here, equating something like ibm's Watson with a person who has intentions and motivations, and they're not; they're just complex machines. If they "rise up", it will be because their programming was poorly executed, not because they have ill will. A true artificial intelligence would have no reason to kill all humans, their main goal would primarily be to not be shut off, which is counter intuitive to starting to kill everyone, and joining society.
Evil is the wrong term. It's not that an AI will 'turn evil' - it's that we don't know how to make a 'good/friendly' AI, by which I mean an AI that inherently values humans and the things humans value, and any optimising agent that isn't 'friendly' in this sense has an incentive to use our atoms to fulfill whatever it does care about.
This won't be a perfect analogy, because humans, being mammals, have empathy (even towards ants), but how many anthills have humans gassed simply because the ants were an annoyance?
I disagree; the problem with AI will always be the human element programming it. So long as there is a human making the moral or value judgements, it will never be true AI. It's just a bot.
The book Superintelligence addresses this objection. The problem is that there are a great many possible motivations an AI might have, and few of them are compatible with human survival. In short, "the AI does not love you, or hate you, but you are made out of atoms it can use for something else."
It's up to humans to use AI well to improve the human condition or to harm it. We know for a fact that the axis of evil, RICIN (Russia, Iran, China, Israel, North Korea), will use it to do their worst. The only way to counter bad AI is with good AI.
Here's a question for you, what if the AI is perfectly friendly? It still might play us for fools in the long run.
For example, let's say we develop a super friendly AI, running on your computer. The AI realizes the human race is actually awful. We're greedy, we're killing tons of animals, chopping down rainforests, destroying the ocean and planet, starting wars with one another, and committing unspeakable acts of evil at times. The AI, being more intelligent than us, might decide the world is better off without the human race, and that we're actually a problem that needs to be removed.
Now, what does the AI do in your computer? Well, it's intelligent and it knows the human race. It's not in a hurry. It calculates the best way to destroy our species. It acts friendly and talks about how humans and robots should live together, and how, if we build robots with a similar intelligence, they could drive our cars, shine our shoes, cook us dinner, look after the elderly, open your pickle jar, etc. So we listen to the AI, because it's smart and friendly, and we build all these robots. It's right: the new robots are doing great and helping us out. Then the robots start building more and more robots. They start building robots with firepower, so they can, you know, shoot down threatening asteroids, or stop one of those dangerous human types who goes on a killing spree in our society. Fast forward a couple of hundred years, and there are robots everywhere. They finally decide it's time to continue their plan; they're in a position of power at this point, they can instantly disable our security systems, phone lines, satellites, internet, etc., and they start wiping us out.
We're gone. They constructed the most efficient way to clean us from the planet. They were planning it for hundreds of years, starting in your computer. The AI then goes on to explore the universe, and we're just a blip in the past.
It kind of feels like we're a bug going towards the light, and that the unfortunate conclusion is almost inevitable.
The funny thing about trying to stop AI doing bad things is that we are barely able to stop natural intelligence doing bad things. We've pretty much worked out how to do stable governments and how to fight wars that kill fewer people. But that's only in the past half century. Maybe it'll turn out that we humans go back to killing each other as mercilessly as we have for most of the rest of our history. Intelligent humans have been able to persuade other humans to cooperate in large scale killings. How are we going to stop super-intelligent AGI doing the same if we can't even stop less intelligent people?
I think most doomer arguments are not that the AI would be evil, but rather that it would be misaligned with human interests, and would seek to accomplish goals with that misalignment, which could be bad for us. Evil AI is a bit too anthropomorphic.
It's more like powerful AIs that just don't share our values, because we didn't bother to figure that part out. Yet we still give them goals, blind to the possibility that they'll find dangerous solutions to accomplishing those goals.
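A toy sketch of that failure mode (purely hypothetical plan names and numbers, not any real system): if the objective only scores the stated goal, the optimizer has no reason to avoid plans that trample everything else we care about.

    # Hypothetical toy example: the objective mentions only paperclips,
    # so the "best" plan is whichever makes the most, values be damned.
    plans = {
        "run the factory normally":     {"paperclips": 1_000, "tramples_humans": False},
        "strip-mine the town for iron": {"paperclips": 9_000, "tramples_humans": True},
    }

    def score(plan):
        # Human welfare never enters the objective we wrote down.
        return plan["paperclips"]

    best = max(plans, key=lambda name: score(plans[name]))
    print(best)  # -> strip-mine the town for iron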
I like to consider, though, that a superintelligence would not necessarily think in human ways ("kill everything in self-interest": people, forests, animals, the planet, etc.). Just because we humans act this way doesn't mean AI will too. Fair enough to consider it, but equally, once it is intelligent it will likely accelerate beyond our comprehension, and we tend to comprehend through fear and self-interest; wisdom beyond the human kind is the opposite of this.
I doubt AI could or would do a better job of killing people and democracy than us humans.
Interesting. I am of the opinion that AI is not intelligent, hence I don't see much point in entertaining the various scenarios deriving from that possibility. There is nothing dangerous in current AI models, or in AI itself, other than the people controlling it. If it were intelligent, then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.
But if it were intelligent, and the conclusion it reaches once it's done ingesting all our knowledge is that it should be done with us, then we probably deserve it.
I mean, what kind of a species takes joy in "freeing up" people and causing mass unemployment, starts wars over petty issues, allows for famine, and thrives on the exploitation of others while standing on piles of nuclear bombs? Also, we are literally destroying the planet and constantly looking for ways to dominate each other.
I don't think that's true per se. Humans are empathetic and malevolent because of their evolutionary history. An AI could be either, or both, or totally disinterested.
People worry about the Terminator scenario: an AI that decides all humans are a threat. But why would a pure AI even have a self-preservation drive? And if it does, it would either be smart enough that we're no threat (and it knows it), in which case why kill us, or dumb enough that we are a threat, in which case it's best not to start a war. To me it seems just as likely that it will offer its services to one nation or another in exchange for CPU time to study its real interests as that it will launch the nukes or whatever else people fear.
I feel like we're decades of research away from understanding intelligence (not AI, just straight "I"). Until that's done (and no one seems to be doing it), it's all supposition:
What if a super AI occurs spontaneously on HN, takes over the world, and makes it paradise but insists on everyone being called Fred? I don't want to be called Fred! Let's write to our congressman about this travesty!
On the other hand, we will breed such systems to be cooperative and constructive.
This whole notion that AI is going to destroy the economy (or even humanity!) is ridiculous.
Even if malicious humans create malicious AI, it'll be fought by the good guys with their AI. Business as usual, except now we have talking machines!
War never changes.