
You shouldn't have had any trust to begin with; I don't know why we are so quick to hold up humans as bastions of truth and integrity.

This is textbook Gell-Mann amnesia - you have to validate information for yourself, within your own model of the world. You need the tools to be able to verify information that's important to you, whether it's research or knowing which experts or sources are likely to be trustworthy.

With AI video and audio on the horizon, you're left with having to determine for yourself whether to trust any given piece of media, and the only thing you'll know for sure is your own experience of events in the real world.

That doesn't mean you need to discard all information online as untrustworthy. It just means we're going to need better tools and webs of trust based on repeated good-faith interactions.
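
To make that concrete, here's one toy sketch of what "repeated good-faith interactions" could look like as a trust score. This is purely illustrative - the source names, starting score, and weights are made up, not any real system:

    from collections import defaultdict

    class TrustLedger:
        """Per-source trust that rises slowly with verified good-faith
        interactions and falls sharply when a claim fails verification."""

        def __init__(self, gain=0.05, penalty=0.30):
            self.scores = defaultdict(lambda: 0.5)  # every source starts neutral
            self.gain = gain        # small reward per verified claim
            self.penalty = penalty  # large cost for a claim that fails checks

        def record(self, source, verified_ok):
            s = self.scores[source]
            s = min(1.0, s + self.gain) if verified_ok else max(0.0, s - self.penalty)
            self.scores[source] = s
            return s

    ledger = TrustLedger()
    ledger.record("hn:some_user", True)           # rises a little, to 0.55
    print(ledger.record("hn:some_user", False))   # drops sharply, to about 0.25

The asymmetry is the point: trust accrues slowly through repeated good behavior and is lost quickly on a single bad-faith act.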

It's likely I can trust that information posted by individuals on HN will be of a higher quality than the comments section on YouTube or some random newspaper site. I don't need more than a superficial confirmation that information provided here is true - but if it's important, then I will want corroboration from many sources, with validation by an actual human expert.

There's no downside in trusting the information you're given by AI just as much as any piece of information provided by a human, if you're reasonable about it. Right now these systems are as bad as they'll ever be, and all sorts of development is going into making them more reliable, factual, and verifiable, with appropriately sourced validation.

Based on my own knowledge of ginseng and a superficial verification of what that site says, it's more or less as correct as any copy produced by a human copywriter would be. It tracks with Wikipedia and numerous other sources.

All that said, however, I think the killer app for AI will be e-butlers that interface with content for us, extracting meaningful information, identifying biases, ulterior motives, political and commercial influences, providing background research, and local indexing so that we can offload much of the uncertainty and work required to sift the content we want from the SEO boilerplate garbage pit that is the internet.
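
As a rough sketch of that architecture, assuming some LLM client sits behind a placeholder ask_model function (the prompt and JSON fields here are illustrative, not a real API):

    import json

    def ask_model(prompt):
        # Placeholder: wire this up to whatever LLM API you actually use.
        raise NotImplementedError("plug in your LLM client here")

    def butler_report(article_text):
        # Ask the model for the key claims plus bias/influence signals, as JSON.
        prompt = (
            "Summarize the key claims in the text below, then list any signs "
            "of bias, ulterior motives, or political/commercial influence. "
            "Respond as JSON with keys 'claims', 'biases', 'influences'.\n\n"
            + article_text
        )
        return json.loads(ask_model(prompt))

    # report = butler_report(page_text)
    # ...then surface only pages whose report passes your own filters.

The interesting design question is that last commented line: the butler doesn't decide for you, it produces a structured report you filter on.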




You had the knowledge to discern whether the info given to you was legitimate. The average user doesn't have that. What if I need to look into something that I have zero knowledge about? When I get a list of sources, I can look through those sources and at least partially determine whether they're reliable. If an AI just spits an answer out, do I always just trust it?

Search engines are getting bad enough about throwing crap information at me. I think AI will just increase that. I prefer that I have the power to sift through the sources. I'm not willing to hand that completely off to AI.


Unfortunately, it's gonna be pretty difficult to trust any source once all these things multiply.

How do I know you're not just reading some technical content that was prepared by an AI?


> "Don't trust everything you read on the internet", or "Don't trust everything you hear on social media"

There is a difference. There are places with reputations and backlogs, even with scientific references.

But if AI just gives you text, there is nothing. Until they fix references, you can't really use it for factual, non-logical information.


Today, fake primary sources are mostly a matter of out-of-context or cherry-picked statements and maybe some photoshopped images. It won't be long before we literally will not be able to trust our eyes and ears when we watch a video, and it'll quickly get to the point where it will be challenging to know whether the person you're video conferencing with is who they say they are. At a time when societal trust is so low, this technology seems very toxic.

I struggle to have a positive mindset on AI when its promised use case is to help us navigate all the spam, fraud and abuse that other AIs generate.


Okay so high production value is no longer an easy marker of high trustworthiness for you.

>“I'm upset, not because you have deceived me on this, but because I can no longer believe in you.” — Friedrich Nietzsche

But just why is AI information slop so repulsive in its current form that it is pushing people away from the internet?

It can't just be because there was little consideration or applied thought and we feel we deserve more of that. If the content was highly accurate, plus or minus a few extra fingers on the stand-ins, where is the problem?

I think there is less deception intended in an AI-slop-go-brrrr shop than in a typical highly produced man-made documentary. I'm repulsed by both sooner or later, but I have more faith in the unrestricted AI tech long term. A lot is going to change, and we'll need to get used to it.


Oh, for sure - I'm not saying don't do anything about it. I'm just saying you should have been treating all information online like this anyway.

The lesson from Gell-Mann is that you should bring the same level of skepticism to bear on any source of information that you would on an article where you have expertise and can identify bad information, sloppy thinking, or other significant problems you're particularly qualified to spot.

The mistake was ever not using "Trust but verify" as the default mode. AI is just scaling the problem up, but then again, millions of bots online and troll farms aren't exactly new, either.

So yes, don't let AI off the hook, but also, if AI is used to good purposes, with repeatable positive results, then don't dismiss something merely because AI is being used. AI being involved in the pipeline isn't a good proxy for quality or authenticity, and AI is only going to get better than it is now.


Trust? It's worse than that.

I don't believe AI output. My eyes glaze over and I scroll past it. Anything that looks AI-formatted is branded with disbelief and an awareness that it produces unverifiable shades of grey.

Are we really going to live in a world where a blackbox program that does not produce meaningfully deterministic results and cannot be examined, is regarded as a source of truth?

If AI takes in a poisoned database and spreads it, who would know? The AI leaving out vital information is just as dangerous, though we are used to that problem.

And that's before we even hit the accuracy of its word predictions...

I don't understand why people are keen on 'being friends' with the grim reaper parrot.


This is going to sound flippant, but I'm serious: in a world of nonsense, something that generates nonsense (ai) is a fantastic tool.

The issue is our acceptance of information as if it were true, as if misleading ideas were not monetisable, as if we can outsource the basis for why we make decisions to an external authority. Hardly anyone verifies anything. Most simply accept whatever they are told. Deep skepticism and empiricism are used by very few - instead we have been taught to trust authoritative sources (media, academia) which can be both well meaning and wrong.

Anyway, skepticism and personal verification are the best answer I have to the whole saga of how to determine truth from lies. This issue is under an especially bright spotlight thanks to ai.

I'm pessimistic over whether many will be prepared to 'verify better' in the future. Unfortunately, I suspect things will have to get a lot worse before we start to learn. It seems that ai can create compelling content tailored to each individual - who could resist 24/7 pandering to one's predilections and biases?


I find that trusting AI to detect misinformation presents essentially the same problem as using AI to spread it - both cases require implicit trust in an untrustworthy system, and both remove the impetus to educate people and have them practice critical thinking and skepticism (as opposed to cynicism and contrarianism) themselves.

The most effective system I've seen so far, ironically (given who now runs the site) is the community notes on Twitter. But even that gets gamed by activists and bad faith actors.


I think the problem you are describing is not only about AI; it is about trusting your sources of information. It can happen with content from humans.

I think you overestimate people’s ability to sniff out bad data on the internet.

Also are you suggesting people fact check an AI by asking it if it is correct? That seems absurd.


Many of us have already experienced reading an article we believed to be of human origin but that was in fact written by an AI. As many posts on Twitter and the Fediverse have shown us, AIs can be hilariously inaccurate - they will happily tell us what they think we want to hear and invent stuff as they go.

In this environment, where it gets harder to discern the veracity of information online, be it from newspapers, magazines and other types of information outlets, establishing trust becomes increasingly valuable.

Will this fact make us seek out more personal content from sources we can put a face and a name to? A personal web of trust that sharpens the line between content made by humans and AIs?

It might be too early to call it, but having pondered these questions for a little while, I've realized that my solution to this sea change is to double down on collecting channels of information that verify ownership, like personal websites, preferably on a domain name owned by that very person.

Maybe we could even make some kind of pact, a code of honor, a personal promise of integrity, that guarantees a site to contain 100% human generated content?
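
There's at least one existing building block for this kind of personal web of trust: IndieWeb-style rel="me" links, where your personal site links to a profile and the profile links back, showing that the same person controls both. A rough stdlib-only sketch (a real implementation would also normalize URLs - trailing slashes, http vs https - which this skips):

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class RelMeParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
                self.links.append(a.get("href") or "")

    def rel_me_links(url):
        # Collect every rel="me" link on the page at `url`.
        parser = RelMeParser()
        parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        return parser.links

    def mutually_verified(site, profile):
        # True only if each page claims the other with rel="me".
        return profile in rel_me_links(site) and site in rel_me_links(profile)

That proves ownership of the domain and the profile, though not, of course, that the content behind them is human-written - the honor-code part would still have to be exactly that.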


They're not going to fact check, they're simply going to think "huh, could be AI" and that will change the way we absorb and process information. It already has. And when we really need to know something and can't afford to be wrong, we'll seek out high trust sources. Just like we do now, but more so.

And of course some large cross section of people will continue to be duped idiots.


I'm not trusting an AI to vet my information stream.

So when the various AIs start publishing questionable stuff and adding it into each other's corpus to draw on, they'll all learn from each other to generate even more nonsense?

Eventually you won't be able to trust information from any unverifiable/non-human source, and information technology will die due to mistrust.

We were meant to automate the boring stuff; this isn't the endpoint we were hoping for.


At this point, with WaveNet, GPT-3, Codex, DeepFakes and DALL-E 2, you cannot believe anything you see, hear, watch, or read on the internet anymore, as an AI can easily generate nearly anything that millions will quickly find believable.

The internet's own proverb has never been more important to keep in mind. A dose of skepticism is a must.


At first I thought the article was going to be about human-led misinformation, but with both hallucinations and human-fed misinformation (AI-helped or not!) in play, I wonder whether we can use AI to fact-check and self-check results (both AI-generated and human ones), prompt us about potential misinformation, and link to relevant sources. That way AI could actually help solve the trust issue.

It's too late already if you want to just scrape random horseshit on the internet. There will be real money in large, expert-generated data sets. AI is also a potential epistemology nightmare. It can cement bad knowledge and bury newer, more up-to-date knowledge in a sea of bullshit.

How do you verify when every source you could possibly refer to has also been poisoned by AI?
