
IMO ChatGPT 5 will just think the answer to every problem is "humans hallucinate too?!?!".

Maybe it will be right, but the best part? We'll continue to be reminded of this fact.

Thanks for reminding us all.




I’m a bit skeptical. ChatGPT can still hallucinate, generating information that seems correct but is in fact nonsense. I’m wondering how they are going to deal with that.

Do you think hallucinations will be solved with GPT-5? If so, that would be an amazing breakthrough. If not, it still won't be suitable for medical advice.

I'm afraid hallucinations would be easier to fix than expectations. We are humans after all, right?

But the problem is with the new stuff it hasn't seen, and questions humans don't know the answers to. It feels like this whole hallucinations thing is just the halting problem with extra steps. Maybe we should ask ChatGPT whether P=NP :)

If ChatGPT said this we would accuse it of hallucinating, yet folks would say this with a straight face.

So this is why ChatGPT hallucinates

> Hallucination is a separate problem, which is solved by using fine-tuned models.

They won't solve the main cause of hallucination: the prompt has zero connection to the generated text other than probability.

ChatGPT does not generate answers; it comes up with something that looks like an answer. There is a good chance it is the answer, but you can't guarantee it.

I believe this particular problem won't be solved, unless researchers teach machines how to reason. But then we would have greater concerns than hallucinations.
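
To make the "it's just probability" point concrete: every autoregressive LLM, at bottom, turns the prompt into a probability distribution over the next token and samples from it, over and over. A minimal, illustrative sketch using the open GPT-2 model from Hugging Face purely as a stand-in (no claim that this is how ChatGPT itself is served):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of Australia is"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits[:, -1, :]         # scores for the next token
            probs = torch.softmax(logits, dim=-1)              # probability distribution
            next_id = torch.multinomial(probs, num_samples=1)  # sample: plausible, not verified
            input_ids = torch.cat([input_ids, next_id], dim=-1)

    print(tokenizer.decode(input_ids[0]))
    # Whatever comes out is simply what was probable given the prompt;
    # nothing in the loop checks it against reality.

Nothing in that loop knows or cares whether the continuation is true; it only knows what is likely to follow the prompt.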


No, ChatGPT hallucinates because it has no senses (yet). Put humans in a sensory deprivation tank and we start to hallucinate too.

Not to take away from your observations, but ChatGPT has been around for close to a year now, and LLM hallucinations have been discussed at length basically everywhere. That's far from a new or surprising thing at this point, and in fact there is a plethora of mitigation strategies available (mostly centered on additional external mechanisms that find or validate the truth the LLM works with).
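
For anyone who hasn't run into those mitigations, the most common shape is retrieval-style grounding: fetch vetted text first, then ask the model to answer only from it, so every claim can at least be traced back to a source. A minimal sketch in Python; `search_trusted_corpus` and `ask_llm` are hypothetical placeholders standing in for whatever document store and LLM endpoint you actually use:

    def search_trusted_corpus(question: str, k: int = 3) -> list[str]:
        """Return the k most relevant passages from a vetted document store (hypothetical)."""
        raise NotImplementedError  # e.g. a vector index over internal docs

    def ask_llm(prompt: str) -> str:
        """Call your LLM endpoint of choice and return its text output (hypothetical)."""
        raise NotImplementedError

    def grounded_answer(question: str) -> str:
        passages = search_trusted_corpus(question)
        context = "\n\n".join(passages)
        prompt = (
            "Answer using ONLY the context below. "
            "If the context does not contain the answer, say \"I don't know.\"\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return ask_llm(prompt)

This doesn't make hallucination impossible (the model can still misread the passages), but it turns "trust me" into "here are the passages I answered from", which is checkable.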

Are we going to have to have this discourse for every single instance of GPT doing some semi-novel harm via hallucination?

(Probably, I suppose.)


People are vastly overestimating how unique this problem of hallucinations is.

It seems to me it relies mostly on discounting just how much we've already had to deal with this same problem in humans over the millennia.

The problem of the proliferation of bad information might be getting worse, but this isn't native to generative AI. The entire informational ecosystem has to deal with this. GPTs compound the issue, but as far as I can tell, nowhere near as much as social media already has.


Hallucinations are an indication that something is missing, surely?

Does ChatGPT need a way to verify reality for itself to become truly intelligent?


They asked ChatGPT and that's what it hallucinated :)

ChatGPT isn't known for its accuracy though, is it? They coined the term "hallucination" because it is wrong so much.

Why leave hallucinations to chance? ;) The prompt could tell ChatGPT to randomly insert several authoritative sounding but verifiably false facts, to give the students debunking challenges! That solves the problem of GPT-5 being too smart to hallucinate, while still leaving open the possibility of talking rats.

This is why the idea of having humans do this kind of work is complete hype. They still haven't solved the problem of hallucinations.

I genuinely can’t recall people saying “hallucinate” with any regularity - in the context of “AI” - until people started talking about ChatGPT.

So, we’ll see what people say in a year.


You really think we’re gonna be able to solve hallucination before this regurgitation problem? Please.

One of the things about LLM-based AI that concerns me the most is realizing that the average person doesn’t understand that these models hallucinate (or even what hallucination is).

I was listening to a debate on a podcast a while ago, and one of the debaters kept saying, “Well, according to ChatGPT, […]”. It was incredibly difficult listening to her repeatedly use ChatGPT as her source. It was obvious she genuinely believed ChatGPT was reliable, and frankly, I don’t blame her, because when LLMs hallucinate, they do so confidently.

