Hacker News

How do you verify when every source you could possibly refer to has also been poisoned by AI?



You can use AI to fact-check and filter malicious content. (Which would lead to another problem, which is... who fact-checks the AI?)

Unfortunately, it's gonna be pretty difficult to trust any source once all these things multiply.

How do I know you're not just reading up on some technical subject that was prepared by an AI?


How are we going to provide that evidence? And make sure that evidence is actually true, instead of AI generated? Send them a link to a wiki page that has been mutated by an AI bot updating it? ;-)

How? What's to stop the AI (and/or humans in the loop) from spreading falsehoods?

Oh, for sure - I'm not saying don't do anything about it. I'm just saying you should have been treating all information online like this anyway.

The lesson from Gell-Mann is that you should bring the same level of skepticism to bear on any source of information that you would on an article where you have expertise and can identify bad information, sloppy thinking, or other significant problems you're particularly qualified to spot.

The mistake was ever not using "Trust but verify" as the default mode. AI is just scaling the problem up, but then again, millions of bots online and troll farms aren't exactly new, either.

So yes, don't let AI off the hook, but also, if AI is used to good purposes, with repeatable positive results, then don't dismiss something merely because AI is being used. AI being involved in the pipeline isn't a good proxy for quality or authenticity, and AI is only going to get better than it is now.


So how do you know if it's lying to you if you don't do the research?

In the short term I'm certain that you have the background knowledge to detect when generated content is not quite correct - but going forward, as your own personal knowledge atrophies (since you're letting AI do your research), will you still be able to make sure it's not making everything up?


Look at Wikipedia. It's a fairly accurate source, and if you build an AI around it, you could filter out a fair amount of spurious information.

The concern at that point is if a truly new and miraculous truth appears, or if hackers target the wiki space.
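To make the filtering idea concrete, here's a toy sketch: score a generated claim by how much of its content vocabulary appears in a trusted reference corpus (a stand-in for Wikipedia extracts), and flag claims with low support. Everything here - the function name, the word-overlap heuristic, the tiny corpus - is invented for illustration; a real pipeline would need retrieval and far more robust matching.

```python
def support_score(claim: str, trusted_texts: list[str]) -> float:
    """Fraction of the claim's content words found in the best-matching trusted text."""
    def content_words(text: str) -> set[str]:
        # Crude tokenizer: lowercase, strip trailing punctuation, drop short words.
        return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

    claim_words = content_words(claim)
    if not claim_words:
        return 0.0
    return max(
        (len(claim_words & content_words(t)) / len(claim_words) for t in trusted_texts),
        default=0.0,
    )

# Stand-in "trusted corpus"; in the proposal above this would be Wikipedia.
corpus = ["Paris is the capital and most populous city of France."]

print(support_score("Paris is the capital of France", corpus))   # fully supported
print(support_score("Paris is the capital of Germany", corpus))  # only partly supported
```

A claim scoring well here isn't proven true, of course - it's just consistent with the reference corpus, which is the most this approach can offer.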


Exactly. If massive data poisoning were to happen, would the AI be able to know what the truth is when there's as much new false information as there is real information? It wouldn't be able to reason about it.

AI can't fact-check, only correlate sources with its limited understanding.

You had the knowledge to discern whether the info given to you was legitimate. The average user doesn't have that. What if I need to look into something that I have zero knowledge about? When I get a list of sources, I can look through those sources and at least partially determine whether they're reliable. If an AI just spits an answer out, do I always just trust it?

Search engines are getting bad enough about throwing crap information at me. I think AI will just increase that. I prefer that I have the power to sift through the sources. I'm not willing to hand that completely off to AI.


I think you are underestimating how this misinformation will get distributed. You may not fall for an obvious AI post on a spam account, but if you have a trusted news source or individuals you trust, they might end up using AI to help them research and write articles, or get their info from sources who did. Now you are twice or three times removed from the source of the information, and someone you trust is your source. It wasn't even nefarious, it's just how information moves, and AI is now a source for many people.

Maybe you are a hyper-diligent fact checker, and the sole originator of carefully considered opinions. Almost no one else is, even the smart people. Political discourse over the past five years is proof enough: it is not difficult to influence smart people.


AI is on a completely different level than social media. With social media, you can still find information if you do some research. With AI, even supposedly reliable sources will be indistinguishable from fake ones. Scientific researchers are using AI indiscriminately, governments too, etc. It will get to the point where, when asked to show proof of something, you'll realize there is no way to actually find it.

I'm more concerned that only a small percentage of people will actually check the citations (or the code) before putting the AI's results into circulation. Sort of similar to how people retweet or forward misinformation, thus lending it a personal stamp of human approval. When it comes from someone you know, most people don't check every reference. The scale of the fact-checking problem on social media and in academic papers is already obvious, along with its societal ramifications, but the addition of machines that gleefully spew factual-sounding garbage with false citations just puts that into overdrive.

I'm not afraid of a few people calling out that the AI is wrong. It's much scarier to envision a world where no one even tries to debunk AI-generated false facts. Part of what was so maddening about this conversation with Bing was the idea that it was rewriting history. Without recourse to Archive.org, could I have even proven that it was wrong or that the page hadn't existed? Since it's the kind of thing a human would be very unlikely to just make up, it sounds more plausible; but then false assertions will be built upon other false assertions, until historical fact is buried under a mountain of hallucinated documents.


You shouldn't have had any trust to begin with; I don't know why we are so quick to hold up humans as bastions of truth and integrity.

This is stereotypical Gell-Mann amnesia - you have to validate information, for yourself, within your own model of the world. You need the tools to be able to verify information that's important to you, whether it's research or knowing which experts or sources are likely to be trustworthy.

With AI video and audio on the horizon, you're left with having to determine for yourself whether to trust any given piece of media, and the only thing you'll know for sure is your own experience of events in the real world.

That doesn't mean you need to discard all information online as untrustworthy. It just means we're going to need better tools and webs of trust based on repeated good-faith interactions.

It's likely I can trust that information posted by individuals on HN will be of a higher quality than the comments section in YouTube or some random newspaper site. I don't need more than a superficial confirmation that information provided here is true - but if it's important, then I will want corroboration from many sources, with validation by an expert extant human.

There's no downside in trusting the information you're provided by AI just as much as any piece of information provided by a human, if you're reasonable about it. Right now it's as bad as it'll ever be, and all sorts of development is going into making these systems more reliable, factual, and verifiable, with appropriately sourced validation.

Based on my own knowledge of ginseng and a superficial verification of what that site says, it's more or less as correct as any copy produced by a human copywriter would be. It tracks with Wikipedia and numerous other sources.

All that said, however, I think the killer app for AI will be e-butlers that interface with content for us, extracting meaningful information, identifying biases, ulterior motives, political and commercial influences, providing background research, and local indexing so that we can offload much of the uncertainty and work required to sift the content we want from the SEO boilerplate garbage pit that is the internet.


I think you overestimate people’s ability to sniff out bad data on the internet.

Also are you suggesting people fact check an AI by asking it if it is correct? That seems absurd.


This is why I prefer things like Perplexity AI, that cite their sources, and give you tools to constrain sources, so that all of the information you get is verifiable and accurate. Of course even these AI tools can have a bias and make mistakes, but you can use them to get a general overview and then quickly check the sources to make sure that the AI's explanation is correct.
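That "quickly check the sources" step can itself be partly mechanized. A toy sketch (the data, IDs, and function name are all invented; this is not how Perplexity actually works): treat the AI's answer as a set of (claim, source, quote) triples and verify that each quoted snippet really appears in the source it cites.

```python
def verify_citations(claims, sources):
    """Check that each cited quote actually appears in the source it points to.

    claims:  list of (claim, source_id, quoted_snippet) triples
    sources: dict mapping source_id -> full source text
    Returns a dict mapping each claim to True (quote found) or False.
    """
    report = {}
    for claim, src_id, quote in claims:
        text = sources.get(src_id, "")
        report[claim] = quote.lower() in text.lower()
    return report

# Hypothetical source texts and an AI answer's citations.
sources = {
    "wiki:ginseng": "Ginseng is the root of plants in the genus Panax.",
}
claims = [
    ("Ginseng comes from Panax plants", "wiki:ginseng", "the genus Panax"),
    ("Ginseng cures all illness", "wiki:ginseng", "cures all illness"),
]
report = verify_citations(claims, sources)
```

This only catches fabricated quotes, not out-of-context ones - a quote can be verbatim and still be misleading - but it cheaply flags the citations that are outright hallucinated.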

Today, fake primary sources are mostly due to out-of-context or cherry-picked statements and maybe some photoshopped images. It won't be long before we literally will not be able to trust our eyes and ears when we watch a video, and it'll quickly get to the point where it will be challenging to know whether the person you're video conferencing with is who they say they are. At a time when societal trust is so low, this technology seems very toxic.

I struggle to have a positive mindset on AI when its promised use case is to help us navigate all the spam, fraud and abuse that other AIs generate.


What if the AI is manipulating those sources you're researching? Comments like this come off as completely lacking in imagination imo. I don't understand why you need specifics because the world is already full of things trying to manipulate you. Have you never made a bad purchase?

> "Don't trust everything you read on the internet", or "Don't trust everything you hear on social media"

There is a difference. There are places with reputation and backlogs and even with scientific references.

But if AI just gives you text, there is nothing. Until they fix references, you can't really use it for factual, non-logical information.

