Of course humans also hallucinate, but we never had to account for that every single time we read a piece of information on the internet. Humans have well-documented cognitive biases, and usually a human's attempt to deceive has some motivation behind it. With LLMs, even the most basic information they provide could be totally false.
I don't dispute this. What I chafe at is the default-dismissive attitude toward any utility these tools have. "It emits inaccuracies, therefore it's useless" would invalidate literally every human.
That said, the overall utility of anything plummets as reliability falls below 100%. If a particular texting app only successfully sent 90% of your messages, or a person you depended on only answered 90% of your calls, you'd probably stop relying on them.
Those are both excellent points. And I know I'm guilty of being somewhat anti-LLM just because it's the new hotness and I'm kind of a contrarian by nature. Which is an example of bias right there! And having been in academia when it blew up, I do worry about our future cohorts of computer scientists if academia doesn't adapt. Which it almost surely won't. But that's not a problem inherent to LLMs.
Did you fully read the post you blew up at? I didn't doubt the usefulness of LLMs. It was a very specific complaint about posting LLM-generated content on the internet without labeling it as the off-the-cuff trash it usually is.