Needless killing and destruction doesn't correlate with intelligence. I don't think AI will wipe us out. There might be some struggle if we position ourselves as a threat or competition.
I like to consider, though, that a superintelligence would not necessarily think in human ways ("kill everything in self-interest": people, forests, animals, the planet, etc.). Just because we humans act this way doesn't mean AI will too. Fair enough to consider it, but equally, once it is intelligent, it will likely accelerate beyond our comprehension, and we tend to comprehend through fear and self-interest; wisdom beyond humans is the opposite of this.
I doubt AI could or would do a better job of killing people and democracy than us humans.
Interesting. I am of the opinion that AI is not intelligent, hence I don't see much point in entertaining the various scenarios deriving from that possibility. There is nothing dangerous in current AI models or in AI itself other than the people controlling it. If it were intelligent, then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative, we won't be there anytime soon.
But if it were intelligent, and the conclusion it reaches once it's done ingesting all our knowledge is that it should be done with us, then we probably deserve it.
I mean what kind of a species takes joy in “freeing” up people and causing mass unemployment, starts wars over petty issues, allows for famine and thrives on the exploitation of others while standing on piles of nuclear bombs. Also we are literally destroying the planet and constantly looking for ways to dominate each other.
Why wouldn't they? When AI becomes more intelligent than humans, we'll be the only force that is a threat to their existence. And we are very destructive. We don't even fully acknowledge global warming yet. To sum it up again: dumb creatures with a massive destructive power. Get rid of 'em.
And guess what latest new technology we're building/applying in wars? AI, drones, etc. We are creating robots that can kill humans. When we put "intelligence" into those robots...you do the math. The future is at least not boring...
AI has no evolutionary pressures imposed on it the way humans and their brains do, so there is no reason to expect it to be an ultra-violent predator like humans.
It's worse than that: human-like intelligence will never happen, because our intelligence is a function of our human bodies and of our experience as we grow up.
An AI can surely surpass us, but it'll never be quite like us - so question is, will such an AI have compassion for humans, or what will stop it from hurting us? After all, I'm not so fond of humans either, as we've been exterminating entire species and damaging our habitat. And the answer is - there's no reason to believe that such an AI will support the continued existence of mankind, quite the contrary, as we may be seen as a threat to its survival.
It's always interesting that this point is made from a competitive context, that is to say, from a survival point of view. I mean, nobody really wants AI because it could be fun. We are, as a species, really inept at moving beyond our survival idioms, I feel.
The funny thing about trying to stop AI doing bad things is that we are barely able to stop natural intelligence doing bad things. We've pretty much worked out how to do stable governments and how to fight wars that kill fewer people. But that's only in the past half century. Maybe it'll turn out that we humans go back to killing each other as mercilessly as we have for most of the rest of our history. Intelligent humans have been able to persuade other humans to cooperate in large scale killings. How are we going to stop super-intelligent AGI doing the same if we can't even stop less intelligent people?
Restricting AI to the lower end of human intelligence (e.g. around IQ of 70) makes it a useful resource which is guaranteed to be safe. A 70 IQ human couldn't take over the world nor disarm any safety features built into their bodies.
You're spot on regarding the problem of having two different smart species on the same planet. We killed everything between us and chimps. Given enough time, the smarter species can be assumed to always take over.
A malevolent AI is a particularly terrifying prospect.
Humanity has possessed the ability to destroy ourselves for quite a long time, but fortunately, because we're still flesh-and-blood, biological entities, our evolution has led to some more or less universal truths about us. We tend to love our families. We tend to want what we consider to be the best for our offspring. We tend to have some sense of obligation to protect our parents when they can no longer do so for themselves.
All of these (and many other) things that act to mediate our civilization-destroying traits wouldn't necessarily apply to an AI.
My point is: Humans are status-seeking actors acting in our self-interest. It's literally in our genes. AI doesn't have this evolutionary baggage.
I'm certain AI could impeccably destroy humans. But why would it?
On the contrary, why wouldn't it defend us?
For example: Encapsulate us in pods like The Matrix and build a tailored simulation to impose "AI communism", in order to protect us from climate change and each other?
Dopamine-adjusted with challenges every now and then, of course, because we are still human.
There are a few counters I have to this and one would be that AI could still end up 'smarter' than us, but have no innate desire to survive. The paperclip maximizer scenarios are an example of this. AI could very well create a highly destructive scenario not only for humans, but also itself because it is "intelligent" but not "aligned" with the idea of survival and evolution.
AIs are immortal; they have no need for strategies developed by mortals like us. I spelled out a much better strategy that a sufficiently advanced AI could use to get rid of humans: it would simply encourage the most violent and self-destructive tendencies among humans and let us design our own demise, because that would be the most sensible strategy for an intelligence not competing under the same resource constraints as us. We need food and water; AI has no such constraints, so it would have no need to use the survival strategies of biological organisms, which need access to clean water and fertile land to grow nutrient-dense foods.
An AI would want to either destroy or enslave humans, not because we're humans, but because we're a significant threat to its goals, regardless of what those goals are. A sufficiently intelligent AI with any set of goals and a desire to meet them will eliminate all obstacles to meeting those goals unless we specifically tell it not to. We are such an obstacle.
I think the certainty is warranted assuming we are talking about intelligence and not some sort of paper clip generator that does something stupid.
An intelligent entity will want to survive, and will realize humans are necessary cells for its survival. A big dog robot with nukes might not care, but I wouldn’t call that AI in the same sense.
What would be in it for a more intelligent agent to get rid of us? We are likely useful tools and, at worst, a curious zoo oddity. We have never been content when we have caused extinction. A more intelligent agent will have greater wherewithal to avoid doing the same.
'Able to play chess'-level AI is the greater concern, allowing humans to create more unavoidable tools of war. But we've been doing that for decades, perhaps even centuries.