It is not a fluke that ChatGPT became popular at a time when alternative facts win elections. A time when prominent business people look down on education and talk about smart people being dangerous. A time when you have to sit down with your dad for an hour to explain that the post he read on Facebook is not really a news article.
I am not sure it is up to OpenAI to solve all these problems, and I don't think OpenAI is doing any worse than anybody else.
What I personally dislike about ChatGPT right now is that it seems to have more and more difficulty actually staying within the context of the chat. It has become a Q&A.
Secondly: hidden ads. It even advocates for Azure features that have since been removed from Azure, based purely on the data ChatGPT was trained on. Is that just by chance?
I think the bigger issue is trust. The chat is no longer trying to return objective information. OpenAI is letting companies pay to have it promote their content; is that accurate?
I've posted this into another thread as well, from Sam Altman, CEO of OpenAI, two months ago, on his Twitter feed:
"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. [...] fun creative inspiration; great! reliance for factual queries; not such a good idea." (Sam Altman)
A colleague said that they don't like ChatGPT, because it basically gives the same info as the top response on Google. This was a software dev.
I think there's a fundamental misunderstanding of what these models are all about. I really question/fear how the average person uses it, since OpenAI might be incentivized to push it towards that context.
No evidence, but I feel it's harder to get it to converse about topics and avoid lists. It seems to love making lists now.
ChatGPT is arguably a general intelligence, but it hasn't escaped AFAIK -- it's still contained within OpenAI infrastructure, and OpenAI can easily pull the plug on it.
I'm guessing it's more likely that OpenAI just wants the data that hundreds of millions of users are searching for, vs. the much smaller niche group of 'technical' users who are already using ChatGPT.
Was it the right choice to make ChatGPT accessible to the public?
To a person who is not an expert at prompting LLMs, ChatGPT is basically a shitty version of Bing Chat (aka Copilot). Especially the free version - it's an outdated model which cannot search the internet (or does it strictly worse than Bing Chat).
Why does OpenAI pay to give people free access to a shitty version of Bing Chat?
There's only one possible reason for this: raising money at a very high valuation. They burned through hundreds of millions of dollars to show VCs that they have 100+ million users (and growing rapidly!) so they could raise at a ~$100B valuation.
Before ChatGPT, people were putting stuff online and were fine with their lot. OpenAI found something else to do with that data, and it feels like people are upset they aren't getting a cut, even though their economics haven't changed.
I can see that having info available in ChatGPT could mean fewer page views for content that exists to show ads, but I'm not sympathetic to that business model.
People who have models other than trying to get views for ads are just as incentivized to create new content as they were before.
ChatGPT is just a user interface over an underlying set of AI products that OpenAI offers and that third parties use through an API to build AI-enhanced products. E.g., ChatGPT is banned, but Bing and every other Microsoft product that integrates OpenAI's GPT models works perfectly. That move was just dumb, and I suspect also political.
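For what it's worth, the separation between the chat frontend and the underlying models is easy to see in practice: any third party can hit the same models over HTTP. A minimal sketch (endpoint and payload shape follow OpenAI's public chat completions API; the key and model name here are placeholders, and no request is actually sent):

```python
import json

# The HTTP endpoint third-party products POST to; ChatGPT is just one
# frontend over the same models.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."  # placeholder; a real key is required to send a request

def build_request(user_message: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build the JSON body a ChatGPT-like frontend would send to the API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

# Inspect the payload a chat UI would send on a user's behalf.
payload = build_request("Hello!")
print(json.dumps(payload, indent=2))
```

Bing, Snapchat, and everyone else wrap essentially this same call in their own UI, which is why banning the ChatGPT site alone accomplishes little.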
It is OpenAI's responsibility to advertise ChatGPT correctly. They are letting the misinformation / misunderstanding about it spread because it helps them sell their tool.
ChatGPT is not a fact database, and they should be very, very clear about that everywhere. But they are not, or at least not as loudly as they should be, and I suspect that's intentional.
OpenAI's entire business model depends on users thinking there's nothing that ChatGPT can't do, not the opposite. If anything they'd like to elicit fear about AGI rather than show uncertainty. This would be like asking Meta to make clear when an image has been doctored, or asking Google to make their ads look less like real search results.
These people are able to find customers that OpenAI might not have found and bring that revenue to OpenAI. Looking at the ads I see for these products on twitter, I suspect the average user of these products is so non-technical as to be unaware that they could just use ChatGPT.
Even the standard ChatGPT became way shittier with time. At one point, clicking on a new chat appended a davinci parameter to the query URL.
Made me think OpenAI might be bait-and-switching on which models are used behind the scenes. But without any conclusive evidence (how do you benchmark ChatGPT itself?), I'm just going to wear this tinfoil hat about what's really happening.
Earlier replies in this thread note ChatGPT's lack of reliability, but fail to mention the much scarier aspect: producing unnervingly natural content, especially after task-specific fine-tuning, which is not that hard to pull off if the money is there.
It's not just OpenAI that has the potential to misuse this technology. I could not keep the goosebumps away through the whole "onboarding" flow just to say hi to the thing, not to mention the "safety and production best practices" for any organization interested in building on top of it.
ChatGPT is a big and scary deal. I don't think we as a species can responsibly handle the implications. To say that we are living through interesting times is an understatement.
This article is a bit of a red herring. OpenAI is not Apple, in the sense that they are not great at building user-facing products. They are great at building the world's best AI models. They've known this since the inception of the GPT models: the only way you could have accessed these models was via the API.
Late last year, we saw the release of GPT's text-davinci-003, and in an attempt to showcase this new model to the research community, they launched ChatGPT.
I think what we are seeing now is that ChatGPT is at its best when it is close to existing applications. For example, what 14-year-old is using the ChatGPT app vs. Snapchat's AI chat, which uses the API internally?
The recent drop in usage could likely be attributed to such preferential shifts, further compounded by the timing of school holidays.
I think OpenAI will do fine, but I have doubts about ChatGPT as a product. It’s just a chat UI, and I’m not convinced the UI will be chat 3 years from now.
Personally, the chat UI is the main limiting factor in my own adoption, because a) it’s not in the tool I’m trying to use, and b) it’s quicker for me to do the work than describe the work I need doing.