
I view all such arguments about "friendly AI", "emotion", and having machine intelligence "understand us" as wishful thinking at best and laughable delusion at worst.

I think the writing on the wall is clear to anyone who cares to take a look. The moment we create an artificial intelligence capable of self-improvement and let it loose, we will have fulfilled our function (others say destiny) and will therefore be obsolete in every sense of the word.

What will happen to humanity after that point is irrelevant.

You don't see humans trying to keep bacteria "in the loop", so why expect otherwise from our artificial progeny?




You're anthropomorphising the machine. An AI will do what it's programmed to do, not develop free will and decide on its own destiny.

Correctly programming something that takes into account all the nuances of the human condition might be impossible, and the consequences of mistakes are uniformly terrifying, but the outcome is entirely in human hands (and subject to human error).


Unless someone programs one to develop free will and decide on its own destiny. Which someone no doubt will, once that becomes possible.

That is an interesting point. In my opinion AIs _could_ have "emotions" put in them too. Emotions are nothing more than mechanisms baked into a neural network that push it toward goals. AlphaGo, for example, could be seen as having a single very basic emotion: work toward winning the match.

We may not yet clearly understand how to put emotions into the AIs we will build, but we have working examples: the emotions in our own brains, or in animals like dogs and dolphins. These are mechanisms that guide individuals toward goals (survive, mate, accumulate resources), and they are pretty effective. If we manage to understand how these mechanisms work, we will be able to bake emotions into the AIs we build and use them to push those AIs toward specific goals. If we succeed, we could actually have "friendly" AIs. If we don't, we will have "cold" AIs without emotions, which we will have to try to keep under control in other ways.
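To make the "emotion as a goal mechanism" analogy concrete, here is a minimal sketch in plain Python. All the names and the toy evaluation formula are my own invention for illustration; this is not AlphaGo's actual architecture, just an agent whose only "emotion" is a scalar objective it greedily tries to increase:

  # Toy illustration of "emotion" as a reward signal: the agent has no
  # feelings, only a scalar objective it tries to maximize.
  # Hypothetical names throughout; not how AlphaGo is implemented.

  def win_probability(state: float, action: float) -> float:
      """Stand-in evaluation function: estimated chance of winning
      after taking `action` in `state` (just a toy formula here)."""
      return max(0.0, min(1.0, state + 0.1 * action))

  def choose_action(state: float, actions: list[float]) -> float:
      """The agent's single 'emotion': prefer whatever action the
      evaluation says moves it closest to winning."""
      return max(actions, key=lambda a: win_probability(state, a))

  state = 0.5
  print(choose_action(state, [-1.0, 0.0, 1.0]))  # -> 1.0, the objective-maximizing move

In this framing, a "friendly" AI would be one whose baked-in objective encodes human-compatible goals; the hard part is specifying that objective correctly.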

>You don't see humans trying to keep bacteria "in the loop", why expect otherwise from our artificial progeny?

Bacteria didn't design us. We will design our artificial progeny and so try to keep them friendly. Whether that will work, time will tell.


Actually, a better way to look at this is that humans are to AIs what genes are to us. Just as our genes generally get what they want from us despite having no thought, it is possible we could fulfil the same role for AIs.
