I’m a bit skeptical. ChatGPT can still hallucinate, generating information that seems correct but is in fact nonsense. I’m wondering how they are going to deal with that.
Do you think hallucinations will be solved with GPT-5? If so, that would be an amazing breakthrough. If not, it still won't be suitable for medical advice.
But the problem is with the new stuff it hasn't seen, and questions humans don't know the answers to. It feels like this whole hallucinations thing is just the halting problem with extra steps. Maybe we should ask ChatGPT whether P=NP :)
> Hallucination is a separate problem, which is solved by using fine-tuned models.
They won't solve the main cause of hallucination: the prompt has no connection to the generated text other than probability.
ChatGPT does not generate answers; it comes up with something that looks like an answer. There is a good chance it is the answer, but you can't guarantee it.
I believe this particular problem won't be solved unless researchers teach machines how to reason. But then we would have greater concerns than hallucinations.
Not to take away from your observations, but ChatGPT has been around for close to a year now and LLM hallucinations have been discussed at length basically everywhere. That's far from a new or surprising thing at this point, and in fact there is a plethora of mitigation strategies available (mostly centered on additional external mechanisms that retrieve or validate the facts the LLM works with).
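To make that concrete, here's a minimal sketch of what one of those external mechanisms can look like: retrieve trusted sources, force the model to answer only from them, then run a second pass that checks the answer against those sources. The llm() and search_corpus() functions are hypothetical stand-ins for whatever model call and document store you actually use; this is not any specific vendor's API.

    # Sketch of a retrieve-then-verify mitigation. llm() and search_corpus()
    # are hypothetical placeholders, not real library calls.

    def llm(prompt: str) -> str:
        """Stand-in for a call to whatever LLM you use."""
        raise NotImplementedError

    def search_corpus(query: str, k: int = 3) -> list[str]:
        """Stand-in for a trusted document store (docs, wiki, database)."""
        raise NotImplementedError

    def grounded_answer(question: str) -> str:
        # Ground the model in retrieved sources instead of its own recall.
        sources = search_corpus(question)
        context = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(sources))

        draft = llm(
            "Answer using ONLY the sources below and cite them as [n]. "
            "If the sources don't contain the answer, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )

        # Second pass: a cheap self-check of the draft against the same sources.
        verdict = llm(
            "Does every factual claim in the answer follow from the sources? "
            "Reply SUPPORTED or UNSUPPORTED.\n\n"
            f"Sources:\n{context}\n\nAnswer:\n{draft}"
        )

        if "UNSUPPORTED" in verdict:
            return "I couldn't verify an answer against the available sources."
        return draft

None of this makes hallucination impossible, of course; it just narrows the space the model can confidently make things up in.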
People are vastly overestimating how unique this problem of hallucinations is.
It seems to me it relies mostly on discounting just how much we've already had to deal with this same problem in humans over the millennia.
The problem of the proliferation of bad information might be getting worse, but this isn't native to generative AI. The entire informational ecosystem has to deal with this. GPTs compound the issue, but as far as I can tell, nowhere near what social media has forced us to deal with.
Why leave hallucinations to chance? ;) The prompt could tell ChatGPT to randomly insert several authoritative sounding but verifiably false facts, to give the students debunking challenges! That solves the problem of GPT-5 being too smart to hallucinate, while still leaving open the possibility of talking rats.
One of the things about LLM-based AI that concerns me the most is realizing that the average person doesn’t understand that they hallucinate (or even what hallucination is).
I was listening to a debate on a podcast a while ago and one of the debaters kept saying, “Well, according to ChatGPT, […]”—it was incredibly difficult listening to her repeatedly use ChatGPT as her source. It was obvious she genuinely believed ChatGPT was reliable, and frankly, I don’t blame her, because when LLMs hallucinate, they do so confidently.
Maybe it will be right, but the best part? We'll continue to be reminded of this fact.
Thanks for reminding us all.