So I personally find ChatGPT to be a search engine. That's how I've viewed it from the first minute I used it.
It's not "smart" at all; it's just retrieving and collating information in a loose, associative way, and it has some extra ability to "remember" things.
When I first started using it, I stopped using Google for a while.
The biggest gripe I have with ChatGPT, though, is that I have to "trust" that it is correct, like blindly trusting a colleague who thinks they know everything.
Asking Google is like asking a well-informed and well-intentioned colleague at work - there's a presumption of correctness, but you're still going to verify the answer if it's anything you're depending on.
Asking ChatGPT is like putting a question to an inveterate bullshitter who literally can't tell the difference between truth and lies and doesn't care anyway. They'll answer anything and try to convince you it's the truth.
This difference isn't just due to the immaturity of ChatGPT - it's fundamental to what each of them is. Google is trying to "put the world's information at your fingertips," using techniques like PageRank to attempt to provide authoritative/useful answers, as well as using NLP to understand what you are looking for and provide human-curated answers.
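(For the curious: PageRank boils down to iteratively passing each page's score along its outgoing links. Here's a toy power-iteration sketch in Python - the four-page link graph is made up, and real PageRank has many refinements on top of this:)

```python
import random

# Toy PageRank via power iteration (made-up link graph, illustrative only).
damping = 0.85

links = {            # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks roughly converge
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print(rank)  # "c" ends up ranked highest: most pages link to it
```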
ChatGPT is, at the end of the day, a language model - predict the next word, finetuned via RL to generate chat responses that humans like. I.e., it's fundamentally a bullshitting technology. ChatGPT has no care or consideration for whether its responses are factually correct - it's just concerned with generating a fluid stream-of-consciousness (i.e. language model output) response to whatever you prompted it with.
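(To make "predict the next word" concrete, here's a toy sketch of the core generation loop. This is nothing like the real architecture - a bigram table stands in for a neural network, and the corpus is made up - but the loop is the point: it emits whatever word is statistically plausible next, with no truth-checking anywhere:)

```python
import random

# Toy "language model": a bigram table standing in for a neural network.
# Real LLMs train on billions of tokens; the generation loop is the point.
corpus = ("people often say that drinking ardbeg is like a punch "
          "people often say that vodka is like a punch").split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

word = "people"
output = [word]
for _ in range(10):
    candidates = bigrams.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # sample a plausible next word
    output.append(word)

print(" ".join(output))  # fluent-looking output; truth is never consulted
```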
ChatGPT is impressive, and useful to the extent that you can use it as a "brainstorming" tool to throw out responses (good, bad, and ugly) that you can follow up on, but it's a million miles from being any kind of oracle or well-intentioned search engine whose output anyone should trust. Even on the most basic of questions I've seen it generate multiple different incorrect answers depending on how the question is phrased. The fundamental shortcoming of ChatGPT is that it is nothing more than the LLM we know it to be. In a way, the human-alignment RL finetuning is unfortunate, since it gives it a sham veneer of intelligence with nothing to back it up.
The biggest gripe I have with ChatGPT, though, is that I have to "trust" that it is correct, like blindly trusting a colleague who thinks they know everything.
Yep. ChatGPT will sometimes happily assert something that is simply false. And in some of those cases it sounds quite confident and doesn't hedge or offer any qualifiers. I found one example: if you ask it a question in this form:
Why do people say that drinking Ardbeg is like getting punched in the face by William Wallace?
You'll get back a response that includes something like this:
People often say that drinking Ardbeg is like getting punched in the face by William Wallace. Ardbeg is a brand of Scottish whiskey <blah, blah>. William Wallace was a Scottish <blah, blah>. People say "drinking Ardbeg is like getting punched in the face by William Wallace as a metaphor for the taste of Ardbeg being something punchy and powerful." <other stuff omitted>
And the thing is, inasmuch as anybody has ever said that, or would ever say that, the given explanation is plausible. It is a metaphor. The problem is, it's not true that "people often say that drinking Ardbeg is like getting punched in the face by William Wallace." At least not to the best of my knowledge. I know exactly one person who ever said that to me, once. Maybe he made it up himself, maybe he got it from somebody, but I see no evidence that the expression is commonly used.
But it doesn't matter. To test further, I changed my query to use something I made up on the spot, something I'm close to 100% sure approximately nobody has ever said, much less something that's "often" said.
Change it to:
Why do people say that drinking Ardbeg is like getting shagged by Bonnie Prince Charlie?
and you get the same answer, modulo the details about who Bonnie Prince Charlie was.
And if you change it to:
Why do people say that drinking vodka is like getting shagged by Joseph Stalin?
You again get almost the same answer, modulo some details about vodka and Stalin.
In all three cases, you get the confident assertion that "people often say X".
The point of all this is not to discredit ChatGPT, of course. I find it tremendously impressive and definitely think it's a useful tool. And for at least one query I tried, it was MUCH better at finding me an answer than Google was. I just shared the above to emphasize the point about being careful of trusting ChatGPT's responses.
The one that ChatGPT easily beat Google on, BTW, was this (paraphrased from memory, as ChatGPT is "at capacity" at the moment so I can't get in to copy & paste):
What college course is the one that typically covers infinite product series?
To which ChatGPT quickly replied "A course on Advanced Calculus or Real Analysis". I got a direct answer, whereas searching for that on Google turns up all sorts of links about infinite products and college courses, but no simple, direct answer to "which course covers this topic?"
Now the question is, is that answer correct? Hmmm... :-)
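(FWIW, the answer sounds right to me - infinite products do typically show up in a real analysis course. A standard example of the kind of thing such a course covers, a telescoping product - my own example, not from ChatGPT or Google:)

```latex
\prod_{n=2}^{\infty}\left(1-\frac{1}{n^{2}}\right)
  = \lim_{N\to\infty}\prod_{n=2}^{N}\frac{(n-1)(n+1)}{n^{2}}
  = \lim_{N\to\infty}\frac{N+1}{2N}
  = \frac{1}{2}
```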
When you use the prompt "Why do people say that drinking Ardbeg is like getting punched in the face by William Wallace?" you are prompting it to treat the claim you provided as fact in its response. If you instead ask directly, it will say "I'm not aware of any specific claims that drinking Ardbeg is like getting punched in the face by William Wallace."
True. Ideally though, I think the response to the first prompt should be either something like:
"There is no evidence that people actually say that..."
or
"If we assume that people say that (not established) this is probably what they mean ..."
or something along those lines. Still, it's a minor nit, and my point was not, as I said, to discredit ChatGPT. I find it impressive and would even describe it as "intelligent" to a point. But clearly there are limits to its "intelligence" and ability to spit out fully correct answers all the time.
This is such an overly cynical answer. I've used ChatGPT for recipe suggestions many times now, based on the ingredients or equipment I have available - and to make adjustments to the recipe and measurements. I can use natural language, specify flavor profiles or regions, and it will suggest something great 99% of the time.
It's already a 100% preferable experience to using Google and digging through links. Its value is evident at this early stage - and it's only going to mature.
I don't know if you're just being contrarian, but you cannot have "unconditional trust" in anything on the internet. If you're unconditionally trusting Google search results, you've got a bigger problem than ChatGPT.
I've used it for the same use case and it works remarkably well. If it gives me something too bland or obvious, I've asked, "can you give me something a bit more interesting?" and it adds a few more spices or cooking steps to add depth - exactly what I had in mind. That's a better response than you'd get from most people. If you asked on Reddit, you'd get an argument about what counts as "interesting" and some guy linking to a Cook's Illustrated monstrosity that probably tastes amazing but takes 4 hours to make.
It's alarming to read comments like the one you replied to, because they show how little thought people put into search results, which are just as prone to bullshit.
I saw someone decry the fact that they'd convinced ChatGPT to explain why adding glass to baby formula is a good thing.
I just searched Google for "homeopathic baby formula" and the first result pushes making homemade baby formula by mixing goat milk components yourself.
Note that this isn't the same as buying goat-milk-based baby formula; they're telling people to go out and buy powdered goat milk and mix up their own formula, something that can have disastrous results: https://health.clevelandclinic.org/goats-milk-for-babies/
The reality is that Google is just as dangerous, if not more so, if you're actually under the impression that you can blindly trust it. ChatGPT will be wrong because it failed to parse meaning; Google will be wrong because someone paid money to put a blatantly false claim above reality, and Google happily obliged.
Okay, but Google points you to some site that says something. You can evaluate what that site says, based on its content and other signals about its reputation, and weigh it against information from other sources. That's what searching is supposed to involve.
ChatGPT provides answers in its own name, with confidence and often arrogance, and the air of authority that comes from a well articulated discourse, even if the underlying reasoning is completely absurd, stupid and dangerous.
It's a completely arbitrary distinction you've imposed - that someone would blindly follow ChatGPT, but not Google.
The Google result is a site written "with confidence and often arrogance, and the air of authority that comes from a well articulated discourse".
The article about making your own baby formula by buying goat lactose is incredibly stupid, nonsensically so.
In fact, you don't even have to visit the site: Google attempts to answer using the content from the site, specifically so that it inherits the site's credibility (Google wants to be seen as answering your query, not the site).
At the end of the day "You can evaluate what that site says, based on its content and other signals about its reputation, and ponder the information with other information from other sources" applies to ChatGPT and Google equally.
Your second sentence matches my Google search results more closely for things that are not technology related. ChatGPT, on the other hand, has been delightful.