
I am not scared of AI flooding the news sites with bullshit. We already have a fire hose's worth of bullshit content produced for consumption. Lies and fakes have coexisted with humans forever. People spread rumours, then we had books, the press, radio, television, and now the Internet. "But it's easier to produce lies/deepfakes today" -- true. However, the absolute cost of producing a lie per consumer was already negligible, and now it's even smaller. People will recalibrate their level of trust in technology and move on.



The problem now is that AI-generated content, because of how easy it is to produce, poisons the well of information further than before. There are more lies spread online now, and these additional lies make a bad situation worse.

The idea of truth based on recorded media is a ship that sailed long, long ago.

People are very used to being deceived by advertising in general and political media especially. The article mentions certain politicians dismissing any media critical of them as AI deepfakes, a claim the article then instantly fact-checks (without irony).

Denial and lies are not new in any sense. Fake media is not new, and analog fakes have a very long history. Photoshop didn't destabilize anyone's concept of truth beyond the extremely gullible.

It's hard for me to imagine AI having significant, let alone measurable impact here.


It's not me I'm worried about - it's the 50% [1] of people who get their news from social media and "entertainment" news platforms. These people vote, and can get manipulated into performing quite extreme acts.

At the moment a lot of people seem to have trouble engaging with reality, and that seems to be caused by relatively small disinformation campaigns and viral rumours. How much worse could it get when there's a vast number of realistic-sounding news articles appearing, accompanied by realistic AI-generated photos and videos?

And that might not even be the biggest problem. If these things can be generated automatically and easily, it's going to be very easy to dismiss real information as fake. The labelling of real news as "fake news" phenomenon is going to get bigger.

It's going to be more work to distinguish what is real from what is fake. If it's possible to find articles supporting any position, and any contrary news can be dismissed as suspect, then a lot of people are going to find it easier to just believe what they prefer to believe... even more than they do now.

[1] made-up number, but doesn't feel far off.


I remember a faculty colleague of mine tackling fake news in the political context using machine learning back in 2008; he was worried that this could be the next iteration of "digital wars" that could disrupt economies.

We all shrugged it off because we didn't believe that people could be so easily manipulated as to "believe everything they saw on the Internet".

Yeah...


Post-truth society is already here. Disinformation has been on the rise for the past couple of decades, and the results are clearly visible: conspiracy theories, election misinformation, and divisive politics where people's worldviews diverge along with their perceptions of reality.

Photoshopping and lying your way to a narrative is now the norm in the age of social media.

No doubt that AI will accelerate this, but to act like AI is the beginning is just evidence that you aren't paying attention.


It was never that expensive to manufacture fake text, and only slightly more expensive to manufacture fake images.

And we've been hit by tabloids, spam and so on before. None of that is new. Being overwhelmed by content is/was/will be a problem that is solved very simply: people limit the distribution lanes they consume, and naturally establish "trust networks" and hierarchies.

The NYT didn't become "trusted" by accident. People aren't as dumb as you seem to think.

Anyway, the problem is one of distribution, not that a lie can be told. If we can't trust our distribution lanes to actually reflect the institutions we're trusting, then we can't establish our trust networks.

And this is a real problem: our distribution lanes are now divorced from our actual information distributors; Medium, Facebook, Twitter, etc. fuck around with our trust networks, randomly injecting their own bullshit into our feeds and messing around with feed order based on non-trust metrics (e.g. money paid), such that they've become fairly unreliable.

And our classically trustworthy institutions have become less trustworthy as they flounder about trying to make sense of the "digital age", which they have so far done in a pathetic fashion.


Welcome to the post-information world.

We're going to have to get used to this new reality. Just as those living in the era before recorded pictures or words didn't have reliable records to lean on, we'll grow to distrust all of the things that are posted. This will happen naturally as the volume of fakes increases; people will start to develop a doubtfulness around recorded media.

I don't think we need to worry about propaganda and fake news if the Facebooks and Instagrams become flooded with fake posts from the people we know. It'll become expected and mundane. This seems good for the development of a skeptical public.

When we do need to rely on audiovisual records, such as for court cases, we'll have experts that can investigate and verify.


I don't think we're underestimating it at all, but we are simply already overwhelmed by fake text-based news like Twitter and websites/newspapers, as well as TV news shows spouting falsehoods. While fake video is surely going to be coming down the pipeline and will make things worse, the current issues are already terrible. Just looking at the 2016 US election, for instance, it was found that multiple fake stories were able to gain a tremendous amount of traction: whether that's the pope endorsing either candidate, health rumors about HRC, or the swirling controversies of Russia/emails/hacking.

The solution has to be that individual citizens become educated; I don't see any silver bullet here.


We've actually been approaching readiness for a few years now. Now more than ever, people have begun to add a layer of skepticism and treat information as separate from reality itself.

Remember 10-15 years ago? If it was on the news, that's because it happened. If an important politician said it, that's because it's important. If the 'expert' said it, then it must be true! If you need a refresher, go visit old political threads on reddit from 10+ years ago. You'll recognize your old sheltered political views, and yes, it was somehow even more naive and coddled than the reddit of today.

Deepfakes will be a net good. If you can make <famous politician> say <total opposite of their entire platform>, more ideas will have to stand on their own merits, because there will be limited authority to validate the medium that's conveying them, which mitigates reliance on fallacies. It will put a damper on how seriously people take everything on the internet outside of official sources (thank god). It will effectively weaponize social media's ills against itself; the more it's abused, the greater that effect.

People are finally starting to ask the questions they should be asking on a large scale: "Where is ChatGPT taking us? Do we want more of this?" Those kinds of meta-questions were never being asked even a few years ago. Almost no one asked them about smartphones when they came out. Better late than never.


We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.

It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope, maybe the lowering of the barrier of entry to generate fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.


Humans spent thousands of years in a pre-truth world believing all sorts of crazy things, and many of those societies produced great things and had people living normal lives. It's only been in the last 100 years or so that people's perception of reality has been anywhere close to accurate. And even then, most people believe plenty of things that are false. So basically people and civilization are going to muddle along as they always have. Deepfakes, etc. will make some things worse; they'll probably have some unrecognized upsides too. John Boyd used to say "People, ideas, machines. In that order." It was true about jets and it's still true about modern technology.

Let's be honest: the internet has always been full of dogshit.

Teachers have long warned us about using Wikipedia as a source. Fake news has always existed. Real news has been diluting itself with questionably true clickbait for years. The mass scramble by companies to hastily use AI is just further diluting and tainting the standard information outlets we've used.


There's a reason the words "post-truth era" have been thrown around a lot in the past decade or so you know.

It isn't like we suddenly developed the ability to lie, have blind spots, biases, and inhabit echo chambers, we only developed the technology that made it obvious that we were doing that all the time. News in the past wasn't actually more trustworthy, it just felt that way because we had far fewer sources of it.

All that changes with AI here is volume.


We as a society failed to see the bad side of the internet and the services we are creating. Maybe it's capitalism, or maybe it's our naive trust that humans will figure it out, but the internet exaggerated our ability to lie and to cover up that lie by lying again and again. People do that in real life too, but the ease, quickness and scale at which one can lie and provide corroborating lies is something we completely missed. The 2016 election cycle is when people fully realized this, and QAnon is the manifestation of it. Prior to the internet, grifters would just operate in a geographically limited area, but now it's basically free to reach the whole world. They are taking advantage of every news event. Some worried that AI would be the downfall of our civilization, but the internet started it, and AI with deepfakes could just complete it. Maybe I am being too pessimistic, but I don't think we can fix it at this point.

One of the silver linings to AI mass-generated fake news, IMHO, is that the professional gatekeepers of traditional media will once again be appreciated for the value they add in verification. The so-called "new media", with few exceptions, is totally parasitic on them to do the real work, which it uses just enough to maintain respectability over its clickbait editorializing. I don't think it's at all a positive that traditional media had to start adapting to some of that, and I'll be happy to see them less encumbered by it.

There has always been misleading information, but that's not a good thing. Growing up, I think the promise of the internet was that with a more decentralized source of information we'd be better equipped to see and understand what's really going on in the world. What the spread of this kind of fake news shows is that it was never really the fault of the TV news and big corporations that there was misleading information. We want to be lied to.

"If everyone is slandered then nobody is slandered, because the narrative becomes: the Web is shit, so you can't trust what people write online anyway."

Basically how everyone already looks at news/journalism on TV and the net, then?

Looking at the 'news feed' on Yahoo is not much different from the old Enquirer fake-news tabloids at the grocery stores in times past. Yet people sometimes cling to "this is the way it is.. I saw it on the web" -- even though I think deep down they know they only get half the story, if that, from any organization online.

So it's entertainment to shame the 'others', and some people take it pretty hard when the 'others' shame their allies. Yep, not hard to imagine at this point, sadly.

I long for a browser extension that follows my choice of editors to remove all HuffPosts and many individual authors / 'journalists' / editors from portals, socials, etc. Even then the truth will not be complete, and much of what is not true will still entertain, influence, stir up anger, etc.
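The core of an extension like that could be a simple filtering predicate. A minimal sketch, purely hypothetical: the blocklists, the `filterFeed` name, and the feed-item shape are all made up for illustration, not any real extension's API.

```javascript
// Hypothetical blocklists -- stand-ins for a subscribed "editor's" choices.
const blockedDomains = ["huffpost.com", "example-tabloid.com"];
const blockedAuthors = ["Some Columnist"];

// Keep only the feed items whose source domain and byline
// are not on the blocklists.
function filterFeed(items, domains, authors) {
  return items.filter((item) => {
    // Normalize the hostname so "www.huffpost.com" matches "huffpost.com".
    const host = new URL(item.url).hostname.replace(/^www\./, "");
    const domainBlocked = domains.some(
      (d) => host === d || host.endsWith("." + d)
    );
    const authorBlocked = authors.includes(item.author);
    return !domainBlocked && !authorBlocked;
  });
}
```

In a real extension the same predicate would run in a content script that hides matching DOM nodes, with the blocklists fetched from whichever "editor" the user subscribes to.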

Once we get more deepfakes across the net, I think people will finally start to see it for what it is: one big Enquirer trying to get eyeballs and clicks, and just as trustworthy.

Although some people have started to say things like 'if it wasn't true, Facebook would remove it, or put a notice on it, and they haven't, so it's probably true'. Similar with Google, I guess. Ugh.

If Google / FB continue to censor, fix and filter, it could have the opposite effect. OK, we need a button on Google to show unfiltered results.

More unfinished thoughts, it seems.


If we even take that as true, then that's just a few people on the internet.

It's like saying "all media was honest before AI on the internet came along and ruined everything".


I guess my question is if that actually makes things worse. The people who were likely to believe falsehoods unquestioningly probably already do. How much does the ease of making new convincing falsehoods ensnare new people vs. causing the wary to just get paranoid and unlikely to believe things without overwhelming evidence?

At some point, getting people to believe things, let alone care about them, is already known to hit diminishing returns, mostly in terms of reach. The percentage of the population that is even following wherever you're posting your fake content isn't 100%, even if you're so convincing that it gets onto major news sources.

Post-truth always feels like it's brought up as "and then anarchy follows", but it's unclear to me that it's that different from what things were like in pre-internet society, where "fake news" was just someone in your town telling you something they'd heard from someone in the town over about something happening hundreds of miles away. It can be a regression, but it's unclear to me that it's necessarily worse for people, since it's not clear to me that my knowing about some natural disaster in Laos is good or important.

