
The real question is whether typing "Humans also hallucinate" in every AI discussion on earth contributes something to an AI conversation.



Serious q - do you think all AI hallucinates?

It should only hallucinate when asked. Humans can do the same, but when humans mistake their hallucinations for reality we usually consider it problematic. Especially if the AI is going to be treated as a reference, which many seem willing to believe it is.

All AI and all humanity hallucinate, and an AI that doesn't hallucinate will render human intelligence functionally obsolete. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".

This is really a succinct description: "Sometimes it seems to me that AI people have begun hallucinating as much as the models they're training."

Why do you share what an AI may have hallucinated?

The rising prevalence of comments like this is concerning, and I'm worried it indicates a large number of people using AI this way who don't warn us.


The "AI" already hallucinates doesn't it?

But can we trust the hallucinations of the “AI”?

I wonder how many AI hallucinations stem from the fact that nowhere in the prompt was it said that the information has to be real. And this cannot be implicitly assumed, since humans like fiction in other contexts.

There's more than one problem, and one of them sure is that a "helpful AI" hallucinates.

Of course not. Humans "hallucinate" all the time too.

Why do we hold AI to a higher standard than humans? It's the same with self driving cars. "Oh it had one accident, must not be safe!" and yet humans have 100s of accidents a day.

The main problem here is expectations -- everyone expects the machine to be perfect, and when it's not, it breaks expectations. In the past, machines were generally a lot more accurate; they just didn't do a lot of stuff. Now that they can do a lot more stuff, their accuracy is coming down to human levels, and it throws everyone off.

I don't think we need to fix hallucinations, we need to fix expectations.

Humans don't have 100% perfect recall of every fact they've ever learned, so why do we expect AI to?


Well this should make for an interesting conversation, and I suspect we will see lots of these in the coming years:

A biological AI (BAI) writer for Vice hallucinating details about other (hallucinated) BAIs (conservatives) hallucinating about a silicon-based AI hallucinating about "reality" (a model derived from BAI hallucinations), discussed by other BAIs on a forum using hallucinated details.

The layers of indirection and recursion society is adding onto the system we live within are starting to get a little alarming... good thing I'm probably just hallucinating, and all is actually well here on Planet Earth.


> Persistent and frequent/high-percentage hallucinations?

This describes the current state of AI, doesn't it?


Oh for sure, my problem is primarily one of misinformation. If we managed to solve the hallucination problem such that AI content was more reliable than a reasonably smart human, I wouldn't have a problem with it. But for now AI has a bad habit of telling me that "mayonnaise" has four letter Ns or that you can melt eggs.
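
For what it's worth, claims like that are trivially machine-checkable; a minimal Python sketch (my own illustration, not something from the thread):

  # Count occurrences of the letter "n" in "mayonnaise" --
  # the kind of simple factual claim a model can still get wrong.
  word = "mayonnaise"
  print(word.count("n"))  # prints 2, not 4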

> Is this real?

Could it be just hallucinations of AI?


I get this, but I worry that the metaphorical quality of "hallucinate" implies belief in consciousness. It may have surface qualities analogous to what we call hallucination in conscious beings, but intuiting backwards that, because we've decided this AI is "hallucinating", it must be alive is what worries me.

In particular, that some theory of mind derived from the AI-contextual use of "hallucinate" could inform, e.g., disease aetiology. I'd be delighted if our understanding of the mind and consciousness meant we had a theory which related to algorithmic hallucination, but I don't believe that's true. We've taken surface effects and (I argue mis-)labelled them. There is no demonstrated link between AI hallucination, why it happens, and what happens in a mind. Why? Because we don't entirely understand what consciousness is. So we have weak theories, based on experiment and observation. AI models aren't (as far as I know) informing this.

Hallucination (in the AI sense of the term) is an emergent effect in systems. And that doesn't imply anything about it, or about how it relates to real hallucination in real minds.

Grokking is another example: it refers to human understanding. Making intuitive connections, corollaries, inductive reasoning, abstraction, reapplication - it's strongly tied to cognition. I don't like this use of the term. While machine-generated outputs display wildly inappropriate judgement between facts and instantiate lies as syllogistic consequences, that's way, way off from grokking as humans do it.

So ask yourself: do you think pop-sci pundits and journalists, and thus politicians, understand this? Or do you think they hear "hallucinate" and take it as evidence that the AI is conscious?


For those (like me) who are wondering:

> In artificial intelligence, a hallucination or artificial hallucination is a confident response by an artificial intelligence that does not seem to be justified by its training data, when the model has a tendency of "hallucinating" deceptive data.

https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...


Yes.

The worst part is you don't even know why they hallucinate. Not even the 'AI researchers' can tell you why.

That is how opaque these systems are.


AI hallucinations are kindergarten level compared to what humans do online.

> My standing argument is that they wouldn't hallucinate incorrect responses if they understood anything, the hallucination is when their approximation of what a real response would be falls short.

> These emergent abilities are not actually that, but a result of humans' poor understanding of cognition and communication.

My counterargument is that humans hallucinate too, and often. As just one small example, eyewitness testimony is stupefyingly unreliable. Neurological research and even basic behavioral research show our brains act as bullshit machines, constantly fabricating satisfying narratives. Not to even get into the fact that the word hallucination still has a non-AI meaning, and that dreams exist. As I see it, GPT models simply hallucinate more often and, more noticeably, in a different manner than humans. The hallucination frequency need not reach zero, only human-equivalent or better, and GPT-4 is already much better than GPT-3.

I agree with everything else you said fully. "True" reasoning machines or not, society will be catastrophically destabilized. Amongst the chaos I expect plenty of "normal" conventional and nuclear war to go on.

