
> What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.

We are already not functioning effectively, given the polarization we see today, especially in politics. Most people are completely misinformed even about basic concepts that used to be taught in school.

Today this is being accomplished by a small group of individuals, amplified by bots (and then, once an idea spreads sufficiently, it's self-sustaining). AI will make it way, way worse, as you correctly point out.

Now, if the lake is poisoned too much, people will avoid it. Maybe it will destroy a bunch of communication channels, such as social networks.




> I think people underestimate the risk that generative AI will, ultimately, also underwhelm

That’s the true danger of AI.


> The text makes it sound like the biggest danger of AI is that it says something that hurts somebody's feelings, or outputs some wrong info that leads somebody to make the wrong decision.

Which already exists in abundance on the internet and in web searches. It sounds like something big corporations worry about to avoid lawsuits and bad publicity.

> I think the biggest danger these new AI systems pose is replication.

That or being used by bad actors to flood the internet with fake content that's difficult to distinguish from genuine content.


> My worry is that billions of people will be killed (probably all at the same time) by AI in the future

Science fiction.


> our world is close to being completely upheaved by intelligent machines, in all areas of intellectual pursuit

I could see myself losing my passion for software engineering and design if an AI can do it better. That would have to be a general AI, though, and hopefully that's still a couple of decades away.

I wonder if I could enjoy movies or books written by an AI. Scary to think about the psychological manipulation it would be capable of, especially if it lives inside a Google or Facebook datacenter.


> We can deal with the implications of dangerous AI if and when it becomes a problem.

What makes you assume that? We haven't yet been able to deal with the repercussions of globalised social media; we don't even fully understand its impacts. Nor have we dealt with the impact of climate change.

AI seems like a far more encompassing and transformative technology than social media, so what makes you assume we will be able to deal with its problems in a timely fashion when they inevitably occur? We may well not be able to, and, as usual, unintended consequences will follow.

> Fear of the unknown shouldn’t be allowed to stop scientific progress.

Scientific progress at any cost, while being irresponsible about its major consequences, shouldn't be allowed either. It needs to be a balancing act; pushing forward without even assessing the risks is a stupid game.


> Personally I'm not worried about the Terminator scenario, both because I don't see AI going in that direction at all,

Not so much the Terminator scenario, which was more a plot device for telling a time-travel story with killer robots, but the worry that a sufficiently powerful AI might do something unanticipated and very harmful, with little ability on our part to control it. For now, it's more about how people might abuse AI, or how it might disrupt society in ways we haven't predicted, similar to the negative effects of social media. If the internet gets flooded with fake news that we have a hard time telling apart from the real thing, that becomes a big problem.


> but this new level of anonymity and lack of connection when it comes to getting information is NOT good for us

I really think this is the kind of existential risk that people don’t think about when they think about AI existential risk.

The whole thing either rules the world or comes down when humans no longer know how to cooperate.


> There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).

While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.

What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.

We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.

I think a lot of caution is still warranted.


> The current crop of AIs will be very useful but it won't lead to the scary AGI people predict.

I'd argue that this won't matter. I think generative AIs will be much scarier than any sci-fi AGI, because while sci-fi AGI apocalyptic scenarios involve the AGI seeking to exterminate humanity, usually as a form of revenge (see The Matrix, The Orville and many others) or as part of an optimization, scenarios involving generative AIs simply involve humanity destroying itself, and those scenarios are already very plausible.

Prepare for the complete destruction of objectivity; an onslaught of spam, grifts, and fake news at an unprecedented scale; and a flood of security vulnerabilities unlike any other. This will be the annihilation of interpersonal trust and of knowledge.


> The only danger is that stupid people might get their brains programmed by AI rather than by demagogues which should have little practical difference.

This may be the best point that you've made.

We're already drowning in propaganda and bullshit created by humans, so adding propaganda and bullshit created by AI to the mix may just be a substitution rather than any tectonic change.


> the average person will not be able to know what is true anymore

We were barely holding things together as a society even before AI started unleashing cognitive noise at industrial scale.

Somehow we must find ways to re-channel the potential of digital technology for the betterment of society, not its annihilation.


> It's about the risk that an AI smarter than us will be made and built to pursue some goal without caring about our well-being.

We already have those and they're called corporations. They've done significant real damage to the world already and they are still working hard to do more damage.

It makes little sense to me to focus on this potential future problem when we haven't even agreed to deal with the ones that we already have.


> There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

Unfortunately, that comment seems to be quite on point.


> On the other hand, to focus on one particular risk, there are compelling arguments made by experts in the field of artificial intelligence (e.g. Stuart Russell) that say that we face real risks based on the current trajectory of the technology.

To the extent that these concerns are not overblown, one way to avoid the dangers of advancing ML would be to stop all the effective altruists [1] in the field from racing ahead and building it irresponsibly.

The people preaching about the potential future dangers of AGI are the same ones pouring money into ML; if they actually meant what they said, they'd pour that money into their opponents.

The current practical dangers of ML lie in enforcing social inequalities and concentrating power in the hands of unethical actors. The fact that many of those unethical actors claim utopian reasons for their reckless disregard of consequences is not a reason to believe them.

[1] https://twitter.com/emilymbender/status/1556691543850831872


>So I can't see the idea that AI will be safe by default other than wishful thinking.

I... don't think anyone is arguing that AI is safe in any way? I mean, the pattern matching tools we call AI now are already deployed in very dangerous weapons.

My point is not that AI is going to be safe and warm and fuzzy, or even that it won't be an existential threat. My point is that we're already facing several existential threats that don't require additional technological development to destroy society, and because the existing threats don't require postulating entirely new classes of compute machinery, we should probably focus on them first.

There are still enough nuclear weapons in the world for a global war to end society and most of us. A conflict like we saw in the first half of the 20th century using modern weapons would kill nearly all of us, and we are seeing a global resurgence of early 20th century political ideas. I think this is the biggest danger to the continuation of humanity right now.

We haven't killed ourselves yet... but the danger is still there. We still have apocalyptic weapons aimed and ready to launch at a moment's notice, and we still face the danger of those weapons becoming cheaper and easier to produce as we advance industrially.


> I personally am worried about AI partly because I am very cynical about how corporations will use it

This is the more realistic danger. I don't know whether corporations are intentionally "controlling the narrative" by spewing unreasonable fears to distract from the actual dangers: AI + capitalism + big tech/MNCs + the current tax regime = fewer white- and blue-collar jobs, increased concentration of wealth, and a lower tax base for governments.

Having a few companies as AI gatekeepers will be terrible for society.


>Whenever I read prominent figures talking about the dangers of AI, I feel like they are missing the mark. I don't think the imminent dangers of AI are self-conscious machines rebelling and deciding to kill people, but a much more subtle and nuanced problem.

People are plenty aware of this; it's just that they are not always talking about the most imminent dangers. Sort of like how people talk about the dangers of global warming down the road, even though they are aware of present-day harms, e.g. cancer.


>> spam the internet with thousands and thousands of souless generated videos

Unfortunately, that's already happening.

https://www.youtube.com/watch?v=w7oiHtYCo0w

From what I can see, YouTube has done quite a bit of work to clean up YouTube Kids, but it's something of an arms race.

There's a worrying tendency in AI ethics discussions to assume the problems and dangers of AI are still off in the future: that as long as we don't have the malicious AGI of sci-fi stories, AI and "lesser" algorithmically generated content aren't harming society.

I think that's not true at all. We've already seen massive damage to social structures from algorithmic feeds and generated content, for years now. The fact that these systems aren't necessarily neural-network-based doesn't make them something not to worry about.

So I don't see AI as a distinctly different, worrisome problem. It's an extension of an already existing, worrisome problem that most people have ignored beyond occasionally complaining about election results.


> I have yet to meet a serious AI researcher who worries about AI ending the human race.

As an industry practitioner of machine learning / data science, I believe AGI poses a genuine risk to humanity.

Having said that, what I do for a living is of little relevance. People are pretty terrible at predicting the future (see late 19th- and early 20th-century predictions of the year 2000, with food replaced by pills, etc.). Unless someone has put serious thought and research into it, their predictions about the future of civilization are likely to be worthless regardless of their academic credentials.

