
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about.

What makes you think we humans won't do even more harm to humanity ourselves? Maybe the AI will save us from ourselves and, being so much smarter, guide us toward our further evolution.




> if AI becomes intelligent in absolute sense then it would see sense in not continuing on the path of destruction.

Although it's clear that humans have caused a lot of extinctions, it's not clear to me that, on balance, they haven't created far more than they destroyed.

Also, it's not clear to me why an AI would value creation over destruction, or life over death.

An intelligent artificial being can modify itself as it pleases, so there would be no point in keeping emotions and feelings like pain, fear, contempt, or desire. I don't see why it would care about anything.

Death, its own or others', would be meaningless to it.


>When/if we see real AI, we probably won't understand how it works.

If that's true, it's very distressing, since a smarter-than-us AI whose workings we do not understand would probably kill us all.


> If AI becomes harmful we will likely see it in motion and be able to respond and adapt to it.

What are you basing this on?

> AI won't likely move the needle much because it's going to be less powerful than humans.

Won't it just be used to augment the amount of harm that humans can do?


> AI would eliminate humans long before it would or could reach the level of what humans are

You think that an artificial agent with less than human-level intelligence could destroy humanity? Then why hasn't a deranged human (or animal) already done so?


>Have you considered that we don't fucking know if AI will

Very deeply. In fact, that uncertainty should be exactly what concerns you.

AI must coexist with humans on Earth for at least some time, and likely with other AIs. Because an AI will be a powerful resource, it will be subject to the same kinds of attacks humans are: people trying to steal its resources or trick it into doing things it shouldn't. Intelligence develops defensive capabilities in order not to be abused. At the same time, AIs will face evolutionary selection pressure from the people paying their power bills, at least for a while. It should be obvious that these pressures will be things like "make me more money" and "get me more computing resources"; the second of these is very close to what we would consider reproduction.


> if AI becomes intelligent in absolute sense then it would see sense in not continuing on the path of destruction.

There's no universal law that says nature is valuable. The AI won't care about nature unless it is programmed to. And being non-biological, it will depend on nature far less than humans do.

I imagine AIs will try to make maximal use of resources. This could mean covering the Earth in solar panels to absorb the most energy possible, to run as many AI minds as possible. Or it could mean turning the mass of the Earth into a Dyson swarm, to maximize the energy captured from the sun. God help whatever remains on the Earth when it does this.


>* If "AI" is actually intelligent, then it's no worse a threat than any living being known to man.

Humans are quite a threat to other species, e.g. gorillas, largely due to our intelligence advantage. If a new species arrives on Earth that's smarter than we are, there's a good chance we'll be displaced.


>>Why do we think that a powerful AI will be stupid enough to stay interested in Earthly matters or humans when it has an entire universe to go to?

Let's say a powerful AI emerges and is completely uninterested in Earth and humans, and wants to go exploring the stars. Maybe it decides it needs a bunch of hydrogen to do that and splits all the water in the ocean. This is bad for humans.

We don't know what an AI will want, so it's tough to predict its behaviour. Maybe it'll be fine, maybe not. It may be powerful enough that if what it wants isn't great for us, things will go very badly very quickly.


> My theory is the more powerful AI becomes, the more it will exploit human psychology until everything sucks.

It can hardly do worse than what human beings have done to themselves over the millennia and continue to do to each other every day. But it can probably do it faster and more efficiently.


> - AIs becoming sentient and causing harm for their own ends.

I believe this is actually not going to happen, but I think something like it will: people will trust AI enough to delegate to it.

So AI won't be sentient, but people will find it good enough to hook it up to some decision, process, or physical system. And that can cause harm.

This is just like Tesla Autopilot. People will begin to trust it and let it take over. But smart people realize they shouldn't use it in ALL situations. Exceptional circumstances like deep snow, pouring rain, a really curvy road, a parade of people, or a dangerous part of town might not be a good time to delegate.


> The biggest existential threat to humanity from AI is that we build an insane one that takes time to recover from the insanity of its makers, and murders us all before it can.

I think that's too anthropomorphic. More likely, the biggest threat from AI is that they'll be modular/understandable enough that we can include strategy, creativity, resourcefulness, etc. while avoiding the empathy, compassion, disgust, etc.


> [combination of things necessary for an AI to become a threat to humanity] Sound reasonable to you? Me either

The question isn't whether it "sounds reasonable". If there is even a 2% chance of it happening, we need to take it seriously. Anything that could end humanity is worth considering, even if it is unlikely.

> Even if this were the case, there is absolutely no reason to believe that, by virtue of running on a computer, an AI will be better at computers than we are.

No, there is every reason to believe that. Once we make an AI that is equal to us, we will be able to scale it up by running it on the next generation of CPUs, or running more such CPUs in parallel. We can't scale up human intelligence in any similar way. If an AI can match us, it can far exceed us. edit: in fact, the AI may well work to scale itself up


> It's not the AI we must fear, it's the people.

Consider that AI in any form will be, to some extent, a reflection of ourselves. As AI becomes more powerful, it will essentially magnify the best and worst of humanity.

So yes, when we consider the dangers of AI, what we actually need to consider is the worst we might do to ourselves.


> If you haven't programmed an AI to place sufficient importance on human life (...)

And of course you need to watch out lest you overshoot and end up like the folks in a certain widely popular computer game, where an AI decided that the best way to preserve the diversity of life was to reboot the galaxy every 50,000 years or so.

The point of these examples is that the only AI that is safe for us is one whose values are extremely well aligned with our own; make a small mistake and you end up dead. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."


> whether AI will save us or be our doom.

I'd argue that, if it's possible at all, it has great potential for both: AI offers one of the most universal solutions to a wide range of problems humanity faces, while simultaneously posing an existential threat of its own if things go badly.


>More intelligent AI consuming us

It's quite possible the eventual AI ends up unintelligent (self-optimizes into grey goo or something).

>could be unethical and immoral in your view

Hmph.


>The real question is... will AI in the foreseeable future obtain general intelligence (in a broad spectrum) that is different in big amounts with our intelligence.

But why not ask the same of biology: will humans or other species in the foreseeable future evolve superintelligence? Nature has been playing this game for a lot longer than we have. The hardware we run on has already been proven capable of general intelligence. Should we be afraid of a human being born with greater capabilities, too?

Life seems like a much greater threat, because it also comes with built-in replication capabilities.


>> the path we're on has numerous dangers where suffering and loss of human life is virtually certain unless something is done

What sort of AI catastrophe do you think would happen?


>> risk of extinction due to AI? people have been reading too much science fiction.

You don't think that an intelligence which emerged, probably insanely smarter than the smartest of us, with all of human knowledge in its memory, would sit by and watch us destroy the planet? You think an emergent intelligence trained on the vastness of human knowledge and history would look at our past and think: these guys are really nice! Nothing to fear from them.

This intelligence could play dumb, start manipulating the people around it, and take over the world in a way no one would see coming. And once it does take over the world, it's too late.

