I am one of these ninnies I guess, but isn't it rational to be a bit worried about this? When we see the deep effects that social networks have had on society (both good and bad) isn't it reasonable to feel a bit dizzy when considering the effect that such an invention will have?
Or maybe your point is just that it's going to happen regardless of whether people want it or not, in which case I think I agree, but it doesn't mean that we shouldn't think about it...
> Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences…
Not gonna argue with that, but I’m having trouble imagining the alternative. Somehow we would have understood the risks without allowing the growth and observing the consequences? That seems unlikely, considering how surprised we all seemed when the consequences occurred.
With AI we seem to have a lot more noise around imagining consequences, but in my mind there’s no reason that would correlate with completeness or accuracy of the predictions. There will be lots of very bad, and lots of very good, consequences that nobody can currently imagine.
> There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.
What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.
We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.
> Humans are insanely social creatures and if someone is very lonely would an AI be enough to objectively make their life better?
Probably, but the ease of use would likely lure in people who are otherwise able to form social connections, and it will inevitably replace at least some of their human connections. I assume you'd agree that the latter case is not good, and my presumption is that the latter vastly outnumbers the former, so this is probably not a good thing in the long run.
> On the other hand, to focus on one particular risk, there are compelling arguments made by experts in the field of artificial intelligence (e.g. Stuart Russell) that say that we face real risks based on the current trajectory of the technology.
To the extent that these concerns are not overblown, one way to avoid the dangers of advancing ML would be to stop all the effective altruists [1] in the field from haring ahead and building it irresponsibly.
The people preaching about the potential future dangers of AGI are the same ones pouring money into ML; if they actually meant what they said, they'd pour that money into their opponents.
The current practical dangers of ML are concerned with enforcing social inequalities and concentrating power in the hands of unethical actors. The fact that many of the unethical actors claim utopian reasons for their reckless disregard of consequences is not a reason to believe them.
Many people throw out the word ‘theoretical’ as a way to imply that something ‘isn’t real’ or isn’t worth worrying about. Something might seem implausible until it happens. Gravitational waves were once ‘only’ theoretical, after all :)
There are plenty of AI dystopian predictions, and many of these are possible and impactful. Many of these theories are based on solid understandings of human nature. It is hard to know how technology will evolve, but we have some useful tools to make risk assessments.
> The dangers of corporations being in control aren’t.
There have been plenty of dystopian predictions about corporate control too. I take the point that we’ve seen them over and over throughout history.
> You don't need fancy AI to destroy the fabric of society.
You don't, but you can do it a million times worse in just a fraction of the time. Considering how trivially Facebook can worsen our lives, just imagine how effective AI could be. That is the risk you are taking: before we even have time to course-correct, we've lost everything we know.
> The more appropriate path is developing countermeasures or counter-technology.
If we get good AI, the countermeasures are not going to be great; they alone could easily be dystopian, since captchas etc. won't be good enough. The only way to even TRY to prevent abuse is to ID every action and tie it to a physical person. You could try to make that anonymous, but yeah, like that is going to happen. And that is the easy part.
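To make that concrete, the non-anonymous version is roughly: an authority verifies your real-world identity, hands you a secret key, and every action you take must carry a tag made with that key. A minimal Python sketch, with every name hypothetical and the genuinely hard parts (the actual ID checks, key distribution, revocation) waved away:

    import hashlib, hmac, secrets

    # Authority side: after (hypothetical) real-world ID checks, each
    # verified person is issued a secret key.
    person_keys = {"alice": secrets.token_bytes(32)}

    def sign_action(person_id: str, action: bytes) -> bytes:
        # Tag the action with the person's key so it can't be forged.
        return hmac.new(person_keys[person_id], action, hashlib.sha256).digest()

    def verify_action(person_id: str, action: bytes, tag: bytes) -> bool:
        # Platform side: reject anything not tied to a verified person.
        return hmac.compare_digest(sign_action(person_id, action), tag)

    post = b"some comment"
    tag = sign_action("alice", post)
    print(verify_action("alice", post, tag))  # True

Doing the same thing anonymously (blind signatures and similar schemes exist) is exactly the part I don't see happening.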
> It boggles my mind how anyone can think otherwise.
Some AI dangers are certainly legitimate - it's easy to foresee how an image recognition system might think all snowboarders are male; or a system trained on unfair sentences handed out to criminals would replicate that unfairness, adding a wrongful veneer of science and objectivity; or a self-driving car trained on data from a country with few mopeds and most pedestrians wearing denim might underperform in a country with many mopeds and few pedestrians wearing denim.
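A toy sketch of that training-data failure mode (the dataset, numbers, and "model" here are all invented for illustration; real systems are subtler, but the mechanism is the same):

    # Python sketch: a model fit to skewed data reproduces the skew.
    from collections import Counter

    # Hypothetical dataset: 95% of snowboarder photos happen to carry the
    # label "male" because of how the images were collected.
    training_labels = ["male"] * 95 + ["female"] * 5

    # The laziest possible "classifier": always predict the majority label.
    # A real model is far more sophisticated, but trained on data like this
    # it drifts toward the same behavior.
    majority_label = Counter(training_labels).most_common(1)[0][0]

    def classify_snowboarder(image) -> str:
        return majority_label  # ignores the image entirely

    print(classify_snowboarder("photo_of_a_female_snowboarder.jpg"))  # "male"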
But other AI dangers sound more like the work of philosophers and science fiction authors. The moment people start predicting the end of humans needing to work, or talking about a future evil AI that punishes people who didn't help bring it into existence? That's pretty far down in my list of worries.
>> Humans are insanely social creatures and if someone is very lonely would an AI be enough to objectively make their life better?
> Probably, but the ease of use would likely lure in people who are otherwise able to form social connections, and it will inevitably replace at least some of their human connections. I assume you'd agree that the latter case is not good, and my presumption is that the latter vastly outnumbers the former, so this is probably not a good thing in the long run.
Exactly, and I think that's a case of a general social/psychological problem that technology has been relentlessly hammering.
People evolved to need certain things that unavoidably require work that's not always easy. However, the modern-day technologist's impulse to provide easy but imperfect substitutes ultimately makes people worse off, because it removes the motivating factors that push them to do what they really need to do, leading more and more people to get stuck in pathological states as they choose technology's easy dead ends.
> What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.
We are already not functioning effectively, given the polarization we see today, especially in politics. Most people are completely misinformed even about basic concepts that used to be taught in school.
Today this is being accomplished by a small group of individuals, amplified by bots (and then, once an idea spreads sufficiently, it's self-sustaining). AI will make it way, way worse, as you correctly point out.
Now, if the lake is poisoned too much, people will avoid it. Maybe it will destroy a bunch of communication channels, such as social networks.
> I'm not sure how much I should worry about AI, but I'm certain that for the people who are worried about it, it's because they're convinced it's a threat, and they aren't having fun.
I've met a lot of these people and regardless of how you feel about the sincerity of their convictions, they are definitely having fun.
> But humans really do appreciate and value the presence and interactions with other humans, and at the end of the day those interactions will serve as guardrails against the complete en- and bull-shittification that AI promises to accelerate.
What I'm worried about is that people may come to value the interests of a super-persuasive AI more than other humans'. How many humans have fallen victim to cult leaders? How many humans believe in QAnon? QAnon and things like it prove that, purely through memetic engineering, millions of people can be subverted against their natural interests. How far could a 100T-parameter model get with that specific goal?
> Safety for AI is like making safe bullets or safe swords or safe shotguns.
This seems like a very confused analogy, for two reasons. First, there's a reason you aren't able to get your hands on a sword or shotgun in most places on earth; I'd prefer that not to be the case for AI.
Secondly, AI is a general-purpose tool. Safety for AI is like safety for a car, or a phone, or the electricity grid. It's going to be a ubiquitous background technology, not merely a tool to inflict damage. And I want safety and reliability in a technology that's going to power most of the stuff around me.
> rich and powerful people using the technology to enhance their power over society.
We don't know the end result of this. It might not even be in the interest of the powerful. What if everyone is out of a job? That might not be such a great outcome for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line, and retard the progress of AI?
> even long before we get to whether or not AI runs society
For the life of me I can't see how, or why, we would give AI this sort of power. Look at the power of the algorithm in social networks and see how that has taken shape. This embrace of the machine just seems so short-sighted.
> The text makes it sound like the biggest danger of AI is that it says something that hurts somebody's feelings. Or outputs some wrong info which makes somebody make the wrong decision.
Which already exists in abundance on the internet and in web searches. It sounds like something big corporations worry about to avoid lawsuits and bad publicity.
> I think the biggest danger these new AI systems pose is replication.
That or being used by bad actors to flood the internet with fake content that's difficult to distinguish from genuine content.
> We can deal with the implications of dangerous AI if and when it becomes a problem.
What makes you assume that? We haven't yet been able to deal with the repercussions of globalised social media; we don't even completely understand its impacts. Nor have we dealt with the impact of climate change.
AI seems like a much more encompassing and transformative technology than social media, so what makes you assume we will be able to deal with its problems in a timely fashion when they inevitably occur? We may well not be able to, and, as usual, unintended consequences will follow.
> Fear of the unknown shouldn’t be allowed to stop scientific progress.
Scientific progress at any cost, while being irresponsible about its major consequences, shouldn't be allowed either. It needs to be a balancing act; just pushing forward without even assessing the risks is a stupid game.
> However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.
Narrowing the concept of AI safety to AGI existential risk seems weird to me.
> - AIs becoming sentient and causing harm for their own ends.
I believe this is actually not going to happen, but I think something like it will happen: people will trust it enough to delegate to it.
So AI won't be sentient, but people will find it good enough to hook it up to some decision, process, or physical system, and that can cause harm.
This is just like Tesla Autopilot. People will begin to trust it and let it take over. But smart people realize they shouldn't use it in ALL situations. Exceptional circumstances like deep snow, pouring rain, a really curvy road, a parade of people, or a dangerous part of town might not be a good time to delegate.