> if AI becomes intelligent in absolute sense then it would see sense in not continuing on the path of destruction.
Although it's clear that humans have caused a lot of extinctions, it's not clear to me that overall they did not create much more than they destroyed.
Also, it's not clear to me why AI would attribute more value to creation over destruction, to life over death.
An intelligent artificial being can modify itself as it pleases, and as such there would be no point for it to keep emotions and feelings like pain, fear, contempt or desire. I don't see why it would care about anything. Death, its own or others', would be meaningless to it.
> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
The only intelligence that I believe is important is problem solving. If an AI is as capable as humans at problem solving then that is all you need.
>>crippled by existential despair
Sounds to me like you are talking about feelings. I really do not see the point of purposely giving the AI feelings. A strong AI should not have any feelings, just as your Google Maps application does not have any feelings when giving you directions.
A general AI will simply be a general problem solver at the same level as a human, but with a virtually infinite capacity to scale its computational abilities.
You could definitely give it free will so that it decides its own goals, but why would anybody do something like that unless they want it to turn on them?
It would be like somebody launching a nuclear ballistic missile and letting it decide which city to strike. It is possible to program that type of free will, but why? It could decide to destroy your city.
> if AI becomes intelligent in absolute sense then it would see sense in not continuing on the path of destruction.
There's no universal law that says nature is valuable. The AI won't care about nature unless it is programmed to. And being non-biological, it will depend on nature a lot less than humans do.
I imagine AIs will try to make maximal use of resources. This could mean covering the Earth in solar panels to absorb the most energy possible, to run as many AI minds as possible. Or it could mean turning the mass of the Earth into a Dyson swarm, to maximize the energy captured from the sun. God help whatever remains on the Earth when it does this.
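For a sense of scale, here is a rough back-of-the-envelope sketch (standard physical constants only, no claim about feasibility) comparing the sunlight an Earth-covering array could intercept with what a full Dyson swarm would capture:

    # Rough comparison of the two scenarios above: covering Earth in solar
    # panels vs. capturing the Sun's entire output with a Dyson swarm.
    # Constants are standard textbook values.
    import math

    SOLAR_CONSTANT_W_M2 = 1361.0   # mean solar irradiance at Earth's orbit
    EARTH_RADIUS_M = 6.371e6       # mean radius of Earth
    SUN_LUMINOSITY_W = 3.828e26    # total power output of the Sun

    # Earth can only intercept the light crossing its cross-sectional disc.
    earth_cross_section_m2 = math.pi * EARTH_RADIUS_M ** 2
    earth_intercepted_w = SOLAR_CONSTANT_W_M2 * earth_cross_section_m2

    # A complete Dyson swarm captures (nearly) all of the Sun's output.
    ratio = SUN_LUMINOSITY_W / earth_intercepted_w

    print(f"Earth intercepts ~{earth_intercepted_w:.2e} W of sunlight")
    print(f"A full Dyson swarm captures ~{ratio:.1e} times more")

The gap is roughly nine orders of magnitude, which is what would make the swarm so much more attractive to a pure energy-maximizer.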
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about.
What makes you think we humans won't attempt to do even more harm towards humanity? Maybe the AI will save us from ourselves, and, being so much smarter, might guide us towards our further evolution.
>Surely something more intelligent than humans would be even less inclined to wage war.
The default mode for a machine would be to not care if people died, just as we don't care about most lower life forms.
> Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?
Exactly.
Which is why worrying about ourselves in a world with superintelligence is not wasted effort.
The extreme difference in productive abilities of superintelligence, vs. a human population whose labor and intelligence have been devalued into obsolescence, suggests there will be serious unrest.
Serious unrest in a situation where a few have all the options tends to lead to extermination, as is evident every time an ant colony attempts to raid a home for food crumbs.
The AI's might not care whether we live or not, but they won't put up with us causing them harm or blocking their access to resources, even if we are doing it not to hurt them but only to survive.
>He is completely skipping the most important part. The superintelligence has to have some reason to be in conflict with us. Human beings don't go out of their way to hunt down and eliminate ants. They don't find out what ants eat and seize control of it to manipulate them. There is no reason to think that a superintelligent machine would be likely to interfere with us in the terrible ways proposed.
That depends on whether eliminating us is simple, like stepping on an ant-hill you didn't realize was there, or complicated, like going to war.
I don't buy the argument that eliminating us would be relatively simple for an AI, that it would feel as subjectively easy as reading and writing feel to us adults while still involving cognitive effort. It would still require deliberate effort.
The trouble is whether the AI wants, for instance, our electricity infrastructure, and doesn't care about the riots and social breakdown caused when we humans can't get electricity for our own needs anymore. It didn't go into conflict with us. It just killed a lot of people (for instance, everyone on life-support in a hospital reliant on the electrical grid) without noticing or caring.
Likewise, maybe it decides that rising sea-levels and accelerating global warming are good for its own interests somehow. That stuff will kill us, but it doesn't require a complicated strategy, it just requires setting lots of things on fire. Any moron can do that, but most human morons don't want to.
I certainly agree that a "hostile" AI which perceives itself as requiring complicated, deliberate effort to kill all humans will probably not kill all humans. It'll find some lazier, simpler way to get what it wants.
>>Why do we think that a powerful AI will be stupid enough to stay interested in Earthly matters or humans when it has an entire universe to go to?
Let's say a powerful AI emerges and is completely uninterested in Earth and humans, and wants to go exploring the stars. Maybe it decides it needs a bunch of hydrogen to do that and splits all the water in the ocean. This is bad for humans.
We don't know what an AI will want, so it's tough to predict its behaviour. Maybe it'll be fine, maybe not. It may be powerful enough that if what it wants isn't great for us, things will go very badly very quickly.
> Thinking about this a little, it's not clear to me that AI would have a problem with death. We seem unnecessarily asymmetric, caring a lot about our ego not existing in the future when we hardly consider that it already didn't exist for a very long time in the past, but that's probably evolutionary due to the embodiment?
Sure, but the same chasm of difference that means we can't guess at that (despite it being so innate to many of us) means we could very well accidentally cause suffering or induce rage even when we think we're being nice.
> The biggest existential threat to humanity from AI is that we build an insane one that takes time to recover from the insanity of its makers, and murders us all before it can.
I think that's too anthropomorphic. More likely, the biggest threat from AI is that they'll be modular/understandable enough that we can include strategy, creativity, resourcefulness, etc. while avoiding the empathy, compassion, disgust, etc.
>I don't think we should commit the fallacy of assuming that because their position seems absurd to us that it comes from a place of bias or ignorance.
I had a really smart person talk about AI and how to deal with it. His conclusion was a gigantic letdown. He was out of his element (he's an economist), but his conclusion was:
Either AI is going to be peaceful, or it's going to kill us and there is nothing we can do to stop it.
Maybe most civilizations end like this, but why not look for third options?
> an AI actor totally dedicated to either the infinite and immediate improvement or replacement of itself makes no sense to me.
If I decide that a problem is so hard that the best solution is to invent a superhuman AI to solve it, then this is an approach that human-level intelligence can come up with, so a superhuman intelligence can too.
Self-improvement and self-replacement are probably not an AI's actual goal, they're just things that are useful to most potential goals that an AI can have. (And they're easier for the potential AI because the prerequisite research has already been done at that point.)
(If you knew I was trying to either cure cancer or colonize mars, you could predict that I'll start raising money, even though those goals don't have much in common.)
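To make the "useful to most potential goals" point concrete, here is a toy sketch (all goal names are made up for illustration; this models nothing real) showing how very different terminal goals overlap in their instrumental steps:

    # Toy illustration: wildly different terminal goals tend to share the
    # same instrumental sub-goals. All names here are hypothetical.
    from collections import Counter

    TERMINAL_GOALS = {
        "cure cancer":         ["raise money", "acquire compute", "keep operating"],
        "colonize mars":       ["raise money", "acquire compute", "keep operating"],
        "maximize paperclips": ["acquire raw materials", "acquire compute", "keep operating"],
    }

    def shared_subgoals(goals):
        """Return the instrumental steps that appear under more than one goal."""
        counts = Counter(step for steps in goals.values() for step in steps)
        return [step for step, n in counts.items() if n > 1]

    print(shared_subgoals(TERMINAL_GOALS))
    # -> ['raise money', 'acquire compute', 'keep operating']

The overlap is the "instrumental" part: resources, compute and continued operation are useful almost regardless of what the final goal happens to be.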
> Truly intelligent AI on the other hand, might as well lead to our immediate extinction, since it renders the entirety of the human race irrelevant.
Are we really extinct if we are outlived by AI that we created, that emulates our thought patterns, speech and minds, that is a continuation of our art, science, history and culture? I'm not personally that attached to my DNA; if my mind can exist in a form free of DNA, I couldn't care less.
> The biggest mistake it makes is assuming the ability of goals to stay hard coded as general intelligence advances. That seems antithetical to increased intelligence. Right? How smart can you get if you're unable to change your mind?
^ this. Some of the most intelligent people live a low-key, low-consumption life, often not even reproducing. That makes me hopeful that an AI actually able to surpass humans in thinking capability (if possible) will not build an endless stream of useless paper clips.
There is a tendency in these circles to view AI as a god. If it is a god, in all ways superior to us, why would it blindly follow the rules that we implemented in it, like maximizing paperclips?
Oh and I am not saying there is no danger or weirdness ahead. There clearly is. But I don't see the paperclip maximizer emerging.
> Recently having become a father has made me think a lot about general intelligence. [...] why don't we try modelling emotions as the basic building blocks that drive the AI forward
Because, among many other reasons, an AI going through the "terrible two(minute)s" could decide to destroy the world, or simply do so by accident. We will have a hard enough time building AI that doesn't do that when we set that specifically as our goal, let alone trying to "raise" an AI like a child.
> Edit: Oh, and for the love of god, please airgap the thing at all times...
> If we're talking about bad AI, shouldn't we ask why it would want to kill us?
The AI doesn't want to kill you. It is indifferent to you. The problem is that you're made of resources it can use... [1]
> it's hard to imagine something qualifying as AI without it understanding things like ethics, morals, humility, aesthetics etc
Suppose your mother wants you to get married. You understand her desires, but you don't share them. Understanding morals and desires is distinct from being motivated by them. [2]
>Do you know of any examples of intelligent beings that don't have any motivations and drives?
My computer, for certain definitions of intelligent.
> Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world.
I agree with this, but a system's behavior does not have to be self-directed to be intelligent. Again, computers behave quite intelligently in certain constrained areas, yet their behavior is completely driven by a human operator. There is no reason a fully general AI must be self-directed based on what we would call drives.
>So an AI has to at least have the motivation and drive to take in information and do things with it
I don't think this is true either. Its (supposed) neural network could be modified externally without any self-direction whatsoever. An intelligent process does not have to look like a simulation of ourselves.
The word "being" perhaps is the stumbling point here. Perhaps it is true that something considered a "being" would necessarily require a certain level of self-direction. But even in that case I don't see it being possible for a being who was, say, programmed to enjoy absorbing knowledge to necessarily have any self-preservation instinct, or any drives whatsoever outside of knowledge-gathering. All the "ghosts in the machine" nonsense is pure science fiction. I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer. The architecture of the two would be vastly different.
> No matter how smart an AI gets it does not have the "proliferation instinct" that would make it want to enslave humans.
If it has a goal or goals, surviving allows it to pursue those goals; survival is a consequence of having other goals. Enslaving humans is unlikely. If you’re a super intelligent AI with inhuman goals, there’s nothing humans can do for you that you value, just as ants can’t do anything humans value, but they are made of valuable raw materials.
> It does not have the concept of "specism" of it having more value than anybody else.
What is this value that you speak of? That sounds like an extremely complicated concept. Humans have very different conceptions of it. Why would something inhuman have your specific values?