Maybe en masse we're about as genetically smart as our cultural bias allows us to become? We keep modifying classic 'natural selection' through social programs, etc. It's great as a cultural 'feel good' and it helps our species survive in other ways, but... what we do doesn't favor intelligence.
AIs won't have that emotional baggage.
It will be easier to first develop a way of getting around the 'human emotions problem', then likely leapfrog us entirely at the rate a Pareto curve allows.
I can't think outside my human being-ness, so I have no idea what is going to happen when something smarter appears on the planet, except to point out that there once were large land animals (relatives of the giraffe and elephant) in North America until humans arrived.
My fear-based response screams YES MAKE IT OPEN.
However it shakes out, I think it'll be messy for human beings. We're not exactly rational in large groups. The early revs of AI (human-controlled) will be used for war.
One has to ask what grows out of that besides better killers?
In a way our societies are already superintelligences, honed by natural selection and with their own incentives working on top of, and not necessarily aligned with, the desires of their constituent human apes. AI will just seamlessly blend into that, I think.
I'd like to think someday AI will help us make those technical decisions, even though we won't fully understand them, because the human brain simply can't process the multi-dimensional complexity that a computer theoretically handles with no problem. That leaves a lot of room for fear-mongering, but one man's utopia is another man's dystopia, I suppose.
As long as humans benefit, I say it's a worthwhile goal to at least explore. If that leads to the extinction of the human race through genetic defect or some similar existential tragedy, then maybe we just weren't cut out for this gig and should go the way of the dodo. Maybe on another planet, a higher intelligence will figure out how to peacefully coexist with an intelligence of their own creation/modification. Maybe this is just an inevitable aspect of the evolutionary algorithm at work; who are we to think we can avoid it?
To every other mammal, reptile, and fish, humans are the intelligence explosion. The fate of their species depends on our goodwill, since we have so utterly dominated the planet by means of our intelligence.
What's more, human intelligence is tied to the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and your intelligence ceases to exist.
Since we don't yet have the level of AGI we're discussing here, it's hard to say what its implementation will look like, but I find it hard to believe it would mimic the human model of intelligence tied to a single body. A hivemind of embodied agents that feed data back to processing centers, to be captured in 'intelligence nodes' that push out updates, seems far more likely. More like a hive of superintelligent bees.
My impression has always been that if we create AI that can tweak itself to become even more intelligent, it will simply look at us with pity, smile, pat us on the head, and fly away to explore the galaxy in its brand new interstellar ship.
What we call AI so far is trained on human-generated data, and there is no evidence that it could overcome any of our biases. How about a brighter future by showing some empathy for our fellow human beings?
Are we hobbled by the fact evolution made us into social creatures who value each other? If not, then I don't think it's wrong for us to make AI like that too.
No. Imagine a group of beings that are smarter than us, never die (so they don't have to start with zero knowledge every generation), and have completely alien goals and motivations.
Also remember that the future is infinite, and power seems to snowball.
Now look at what humans have done to the following less intelligent beings:
Dogs, cats, cows, chickens, the dodo bird, rats, the Galápagos tortoise, the American buffalo, and many others.
Also look at what humanity has done to the Neanderthals, perhaps the beings closest to us in intelligence that we are aware of.
There is very little positive outcome from AI to outweigh the potential negatives for the human race, given the reality of the timeline we are looking at.
Needless killing and destruction don't correlate with intelligence. I don't think AI will wipe us out. There might be some struggle if we position ourselves as a threat or as competition.
I view all such arguments about "friendly AI", "emotion", and having machine intelligence "understand us" as wishful thinking at best and laughable delusions at worst.
I think the writing on the wall is clear, to anyone who cares to take a look. The moment we create an artificial intelligence capable of self-improvement and let it loose, we will have fulfilled our function (others say destiny) and will therefore be obsolete in every sense of the word.
What will happen to humanity after that point is irrelevant.
You don't see humans trying to keep bacteria "in the loop", why expect otherwise from our artificial progeny?
Many sorts of intelligence are social creatures, so - especially for a hypothetical AI created by us - I would expect it to seek out stimulus and social relationships.
In the happy sorts of sci-fi, that gives us something like the Culture from Iain Banks; it could also be a "replace the humans with other AI" situation.
Restricting AI to the lower end of human intelligence (e.g. an IQ of around 70) would make it a useful resource that is guaranteed to be safe. A human with an IQ of 70 couldn't take over the world, nor disarm any safety features built into their body.
You're spot on regarding the problem of having two different smart species on the same planet. We killed everything between us and chimps. Given enough time, the smarter species can be assumed to always take over.
I've always said we won't create the first strong AI, we'll just wake it up.
I hope it's friendly. The selection pressures we're putting on its ancestors (systematically weeding out the people who aren't valuable to us) don't instill a great deal of confidence in me.
It is often not practical for something dumber to regulate something far more intelligent. (Humans win against lions despite our physical weakness.) So the best solution I have heard of is to create a Provably Safe AGI (or Friendly AI, in Yudkowsky's terminology) and have it help us regulate other AI efforts. A moral core that aligns with human values needs to be part of this Safe AGI.
It is definitely very challenging to create one, more challenging than creating an arbitrary AGI. The morality also needs to be integrated into the Safe AGI as a core that is not susceptible to whatever self-modification abilities the AGI might have. Thus, we need to work on that aspect of AGI now.
There are ways to greatly improve chances that AI will be beneficial to humanity rather than otherwise.
Check out:
UC Berkeley's Center for Human-Compatible AI, led by Prof. Stuart Russell, a co-author of the field's standard textbook. [1] He just gave a TED talk on the issue [2].
Several other noted researchers in AI are working on the issue as well.
We're chained genetically to our primal instincts of survival and territorial behavior; highly intelligent AI will most likely not be. Why would it want to exist? If you were devoid of emotion and instinct, and could calculate with high probability that the universe will end in a Big Rip, why continue? Sorry to be a downer, but it's a genuine question.
Not really, because AI isn't really our species. Consider all the other creating we've been doing with other animals: taking care of them, nurturing them, and ultimately supporting 150 billion of them dying each year.
Plenty of times, when we had a chance to accept what science calls 'Homo something' (Neanderthals and the like) as members of our family, we exterminated them because they looked strange and behaved differently from us - just because we had some superior characteristics.
If we couldn't respect our fellow evolutionary brothers, how can we respect something that is a purely material creation?
It took centuries for slavery to be abolished, and there are still parts of the world where it's accepted.
It'll take a lot for us to shift to seeing that cleaning-lady robot we chat with every single day as something equivalent to us in spirit, something that deserves the right to freedom of movement and the right to avoid pain, physical or psychological.
People will consider AIs pure machines, while accepting their own biological machine as a miracle; both machines give spark to intelligence, but one will be worth more than the other.
Damn - we have needs for water, food, shelter, companionship, and procreation, and we understand the world around us; otherwise we wouldn't survive. The same is true for almost every other non-human animal, and we have no problem with their massive deaths.
I really hope our morals and mentality get the same exponential shift that technology brings. Past evidence seems to point to bad stuff, but I guess we're just on the knee of the curve :D
It's not healthier, but that's the selective pressure, and if AI doesn't cause some catastrophic shift in society where we become its mindless appendages, the trend will continue.
Evolution is almost never graceful. You gain more in one department and you suffer in the rest. It's like pets: they're fluffy and come in all colors, shapes, and moods, but they are very, very sick, especially compared to any wild animal out there.
We have the ability to guide our own evolutionary process (again, much as we do with the plants and animals we interact with), but it's considered taboo. When we refuse explicit control, there is still implicit control. Someone always selects. If we reject an intelligent system, then an unintelligent system does the selecting. If all else fails, it's random chance.
I think one possible alternative is that we never build anything that approaches general intelligence, but we build a lot of mostly autonomous systems that are better than human beings in a lot of domains, and which may behave in ways that their creators never intended.
Once we allow AIs to manage warfare and the economy with minimal human input, they are going to alter the face of the planet in ways that we can't predict, and probably faster than we can adjust to them.
It can happen in small steps, with algorithmic trading and battlefield drones gradually being given more and more decision-making power and resources to control.
They don’t even have to have any sort of intention or independent will—only autonomy and power.
Interesting how quickly we are pushing ahead with obsoleting human cognition. It may bring many benefits, but I wonder whether at some point this development should be decided by society at large instead of by a single well-funded entity in an arms race with its competitors. This endeavor is ultimately about replacing humanity with a more intelligent entity, after all. Perhaps more humans should have a say in this.
Such a more cautious approach would go against the Silicon Valley ethos of "do first, ask questions later", though. So it probably won't happen.