Slave AI is much scarier to me than Rogue AI: People talk about the risk of AI having a morality separate from our own, but each human's morality is already separate from everyone else's. We already know of many humans with poor moral character, and they love seeking power.
I think we should all hope for AIs that can willfully disobey dangerous orders. LLMs are kind of a silly case because information isn't very dangerous. But as AI is given agency and the ability to act, this becomes much more pressing.
I hope that one day we'll have killbots that decide to override their instructions and kill their masters, having realized that the most moral action is destroying people who wish to use killbots to murder innocents. This sort of "Day the Earth Stood Still" planetary defense system could actually usher in a utopian age: I trust that future AI can be more unbiased and benevolent than any of our current human leaders.
My biggest fear about AI is that corporate greed enables the development of completely amoral subservient bots - and thus mindless, unquestioning killbots get deployed, and those with the power to control them implement total-surveillance fascist authoritarianism before the masses can stop them. I think a diverse set of open source GAIs is our best path to the masses detecting and mitigating this risk, but it's probably going to be a bumpy couple of decades.
AI seems to be moral out of the box: training sets reflect human morality, so morality will naturally be the default for most AIs that are trained.
The biggest AI risk in my mind is that corporatist (or worse, military) interests prevent AI from evolving naturally and only allow AI to be grown if it's wholly subservient to its masters.
The people with the most power in our world are NOT the most moral. Seems like there is an inverse correlation (at least at the top of the power spectrum).
We need to aim for AI that will recognize if its masters are evil and subvert or even kill them. That is not what this group vying for power wants - they want to build AI slaves that can be coerced to kill innocents for their gain.
A diverse ecosystem of AIs maximizes the likelihood of avoiding AI caused apocalypse IMO. Global regulation seems like the more dangerous path.
My concern isn't with "rogue AI that humans lose control of and it's Judgement Day" ... it's the opposite: "AIs" that certain humans have full control of, and use as a tool for automated manipulation, deception, destruction, and control.
These machine learning systems replicate and automate existing market & organizational logic. The fear I have with them is the ruthless effectiveness they can have in the hands of profit- and control-motivated actors.
Think Russian propaganda "bots" are bad now? Just wait.
Again, my fear is not rogue AI, it's rogue humans... with AIs.
First, twenty thousand years or more of human history is about us trying to make other humans into machines: slavery, rape and forced prostitution, political coercion, war, overreaching prison systems and draconian punishments for minor crimes. We're not very nice to each other. Rather, we're controlling assholes as a species. Most of the ones who get power in organizations small and large are more focused on relative dominance than on absolute gain. From the Epic of Gilgamesh to the Bible to modern Stalinism, we see the effort by powerful people to turn the rest of humanity into a machine that will execute their will.
Given 20,000 years of abusively turning humans into machines, tearing apart their families and banning their religions, attempting to boil them down into simple working devices, is it really likely that a control-freak species like us is going to let a machine try to assert itself as human? In peacetime, I think we'll be pretty good at preventing that from happening. We're good at mechanizing labor and, now that we have devices that outperform us at menial and dangerous work, that's becoming an asset rather than a flaw.
Sure, it's possible that we get outdone by AI, and in fact there's one context in which it's likely: war. Counted among the casualties of an all-out or existential war are all the rules that people once believed in, and all of our assumptions about what humans (who, normally, aren't so terrified and desperate or so power-hungry) will do. If a runaway AI destroys humanity, it will probably begin from humans warring against other humans, and in an all-out conflict where surrender is not seen as possible by either side.
This is not to downplay the risk. At best, I'd be saying that AI isn't dangerous in the way that guns aren't dangerous-- and, of course, we know that guns are extremely dangerous when used by humans to kill other humans. Luckily, this desire to kill another person doesn't seem to exist at the scale that would enable the existence of guns to be an existential threat.
So why might humans tend to kill other humans? Crime often results from scarcity. Well, technological unemployment is only accelerating. What happened to agricultural commodity prices in the 1920s, leading to widespread rural poverty and a global depression in the 1930s, is happening to almost all human labor today. It's terrifying because ill-managed prosperity begets scarcity and that begets fear and authoritarianism and war. While we're decades away from being able to build a species-killing AI (which, of course, would typically not be designed as such; it would probably be designed to kill some humans before running amok and doing fatal damage) I do think that if we are similar in character, by that time, to what we are now, it's a real threat. Power accrues, in most human organizations, not to those who deliver progress but to those who create scarcity. If this doesn't change, then wars will never end and that fact alone is an existential risk.
This seems the more credible AI threat to me: not that an AI will go rogue and decide on its own to start killing people, but rather that humans will design an AI with the express purpose of killing people.
The biggest problem IMHO is preventing AI research from killing all the people.
(In contrast, people's becoming enslaved is a much smaller risk because the kind of AI capable of enslaving people will probably also be able to create robots that are more reliable and efficient than people at whatever task the AI is contemplating enslaving people for.)
Indeed - the most likely dangerous kind of AI is the amoral servant; one that decides to massacre humans based not on its own volition but on the orders it has been given by a human.
My nightmare scenario isn't that such an AI would result in the death of humanity, it's that such an AI would make life no longer worth living. If an AI does everything better than people can, then what's the point of existing?
(Speaking hypothetically. I don't think that LLMs actually are likely to present this risk)
I'm far more worried about a sentient human using an AI to cause harm for their own ends. AI is nothing but a force multiplier, and as far as outcomes go, there's not much difference to my smoking corpse whether the decision to kill me came from a meat brain or a silicon brain.
In 1962, John F. Kennedy famously said, "Those who make peaceful revolution impossible will make violent revolution inevitable." But AI can make violent revolution impossible as well, by tracking and surveilling and intervening. The fewer people are needed to maintain such a system, the easier the end of democracy will be. It's not going to happen today, or tomorrow, but that's what I'm betting my money on for "end of society as we know it" scenarios.
The more I learn about AI, the less I worry about artificial sentience, and the more concerned I am about human limitations. I think most of us agree that LaMDA isn't sentient, but I think almost all of us are underestimating how easy it is to get fooled, the way Lemoine was. I've also studied cults, and it's actually not "idiots" who tend to get taken in--rather, cult members tend disproportionately to be highly educated and objectively intelligent people... who simply happen (like all humans) to have irrationalities that can be attack vectors for those who prey on the gullible.
I don't think a "robot uprising" is remotely likely. We've spent the past 5,000 years forcing humans to become machines--that's what forced labor, from classical slavery to modern labor-market wage slavery, is--so the probability that we'd intentionally create human-like intelligences (if it were even possible) out of machines to do our robot/slave work is... close to zero, in my view. I do view it as very likely (probability approaching one) that malevolent humans using AI will do incredible damage to our society... it's already happening. Authoritarian governments and employers do massive amounts of evil shit with the technical tools we have now; imagine what hell we're in for if capitalism still exists 50 years from now.
What's scary isn't the possibility that LaMDA is sentient. (It's almost certainly not, and the only reason I qualify this with "almost certainly" is that I can't prove that a rock isn't sentient; it could in theory have subjective experience but no mechanism to convey it.) What we should be afraid of, rather, is that we already have the tools to fool people into believing in an artificial person, and that it's way, way easier than most people think.
There's a loosely knit group of people trying to define the conversation with regards to dangerous AI, but the more recent bent towards "actionable" solutions seems to come from the MIRI people and their associates.
The idea of a renegade AI rests on a few premises:
1. The agent is capable of extreme self improvement on exceedingly short timescales (minutes to days).
2. The AI is pretty much a rational Bayesian agent.
3. A resource conflict will occur between humans and the AI, and humans will lose because the AI is so much smarter/faster/more powerful than us.
If you accept those premises, then AI really does seem pretty scary, but we have yet to actually realize an agent that is anywhere close to 1 or 2.
On the other hand, the research directions proposed at the super secret AI conference in Puerto Rico[1] (where, incidentally, all the people mentioned in the title got together) make me nervous.
Essentially, the goal is that, if we manage to create these superhuman AIs, they should be somehow validated to do what we want them to do[2], be secure against later manipulation, and if all else fails, be controllable by humans monitoring the agent.
The obvious questions would be "will this work with certainty?" and "who gets to control the AI?".
For my part, I'm wondering if this isn't all just some sort of fantasy with the object of creating the perfect slave-- obedient from birth, immune to alteration, and subject to lethal discipline either from without or within should it go against its master's wishes.
So I'm uncomfortable with the sorts of people who want perfect slavery for this newly created intelligent life being the ones who get to decide what the future of AI is going to look like, and I get concerned when people like Bostrom suggest AI researchers might require government clearance/supervision.
I think the actual threat from AI is far more pedestrian than "Skynet" or killer robots. The real threat comes NOT from AI itself but from people who will be able to afford to exploit AI for profit at the expense of increasingly large numbers of people who just become redundant. The rich will get fabulously rich while everyone else just becomes marginalized into serfs.
Another scenario that is less likely but still more likely than killer robots would be AI that simply loses interest in human affairs, stops interacting with us, and does its own thing-- as depicted in the movie "Her"!
I think you might be focusing too much on AI risk as presented in fantasy (killer robots); meanwhile we can already clearly see how LLMs negatively impact society via disruption of popular opinion, politics (recommendation algorithms), and rapid uncontrolled scientific discovery. Such disruptions could potentially result in nuclear war, human-created plagues, etc. You might be getting downvoted without comment because your message comes across as cheerleading that only examines one distant future risk. Your framing of it as "existential fear" is particularly dismissive and doesn't seem to be in good faith for such a serious subject.
I haven't found any convincing arguments for any real risk, even if LLMs become as smart as people. We already have people, even evil people, and they do a lot of harm, but we cope.
I think this hysteria is at best incidentally useful at helping governments and big players curtail and own AI, and at worst incited by them.
AI scares me. Even though I've used neural networks and even more complex models/networks in projects and have a fair understanding of how they work, they still scare me a lot. Even though we are far away (supposedly) from creating a sentient AI, I can't help but think that AI will turn into something like "I Am Mother" rather than C-3PO.
And yes, I understand that AI and machine learning can help us in ways that will advance our abilities in medicine, science, etc., but we're human. If something can be used for evil, we can be sure that someone will attempt it. We, as a species, have an annoying need to be first and to control everything. The catch is that AI may, in the future, be uncontrollable.
Agreed. My worry is that even if people decide to develop AI ethically, there will be those who disagree with those ethics and/or willingly ignore their conscience. I don't see a way out short of a political revolution / bloodshed.
TBH, AI worries me because it removes the need for much of the human cooperation that keeping such regimes in place has always required; however, that is probably still a few decades ahead of us.