I think that’s a side effect of how each release goes: they ship a version to compete with ChatGPT, it’s not as good, so at the same time they have to announce a future version that is supposed to be better than ChatGPT. Each time it isn’t better overall, so they have to announce yet another version. I think this will continue for a while, especially since non-OpenAI companies have access to far smaller free data troves than they once did, now that everyone realizes how valuable that data is. Even setting that aside, other companies implement this stuff much more poorly; even Microsoft, in my opinion, with full ChatGPT access. I imagine Apple will suffer a similar fate for a while.
In light of this, Apple’s choice to integrate with ChatGPT from OpenAI looks worse. Even with its privacy promises of not revealing much information and asking the user explicitly, these and other events at OpenAI should raise serious concerns among Apple executives. Top Apple executives have said that they started with ChatGPT because it’s the best. Maybe this partnership won’t be around this time next year.
This article is a little bit of a red herring. OpenAI is not Apple, in the sense that they are not great at building user-facing products. They are great at building the world's best AI models. They've known this since the inception of the GPT models: originally, the only way you could access these models was via the API.
Late last year, we saw the release of text-davinci-003, and in an attempt to showcase this new model to the research community, they launched ChatGPT.
I think what we are seeing now is that ChatGPT is best when it is close to existing applications. For example, what 14-year-old is using the ChatGPT app vs. the Snapchat AI chat, which uses the API internally?
The recent drop in usage could likely be attributed to such preferential shifts, further compounded by the timing of school holidays.
Time passing is the circumstance. ChatGPT knows the answers from SO, but for new languages the quality will degrade over time. ChatGPT already isn't great at answering questions about new versions of Swift, for example. It won't be able to use new languages or APIs either. Given how rarely the model is updated (officially, anyway), it will always lag behind new tech.
It may be actually in the interest of OpenAI to start paying SO for the data and make sure it flows to the community in some way. Otherwise they will be starved.
A big if. A lot of ChatGPT discussions seem to take for granted that it'll always be available/free/priced low enough that ~everyone has access to it. Seems more likely that at some point OpenAI will close it up and put it back behind an API.
True, but to be fair it was only recently that OpenAI really took over with ChatGPT and their API pricing. Before that (just 3-4 months ago), the open source models were excellent competitors.
And based on how ChatGPT itself was built after seeing how customers were using the API, I'd be willing to bet that OpenAI will simply copy the most popular ones with an "official" version. Developers are going to act as free R&D for OpenAI again.
Is it known when OpenAI is closing ChatGPT? Any idea whether they are going to release a paid version or some subscription-based approach? I genuinely don't believe that this is going to stay free to use, at least for now, when there is no competition on the market.
I'm guessing it's more likely that OpenAI just wants the data that hundreds of millions of users are searching for vs. the much smaller niche of "technical" users who are already using ChatGPT.
Also, the fact that ChatGPT at least in its current iteration is more helpful for less knowledgeable users may have something to do with why OpenAI almost didn't release it. The elite engineers didn't get how revolutionary it would appear to the general public.
It is not a fluke that ChatGPT became popular at a time when alternative facts win elections. A time when prominent business people look down on education and talk about smart people being dangerous. A time when you have to sit down with your dad for an hour to explain that the post he read on Facebook is not really a news article.
I am not sure it is up to OpenAI to solve all these problems. I don't think OpenAI is doing worse than anybody else.
What I personally dislike about ChatGPT right now is that it seems to have more and more difficulty actually staying on the context of the chat. It has become a Q&A.
Secondly: hidden ads. It even advocates for Azure features that have since been removed from Azure, even on data ChatGPT was trained on. Is that just by chance?
I've noticed a substantial drop in the performance and usefulness of GPT-4. It makes me think that OpenAI created ChatGPT almost by accident, because they do not seem capable of refining and improving their model.
Anecdotally, I just cancelled my subscription to ChatGPT a few days ago.
It was fun to experiment with, but it’s obvious that 90% of their development effort has been going into censoring their models instead of improving their utility. I saw no tangible improvements over months.
For example, the web browsing extension was released completely broken and then… remained broken. Meanwhile Phind.com has been doing web browsing very well and very fast.
The Wolfram extension was also useless, and in the same time period a Mathematica update was released with a far superior LLM notebook mode that is actually functional.
OpenAI also doesn’t allow access to its long-context-capable models in the ChatGPT web app.
I switched to the API subscription because it is billed based on consumption and I can use the 16K context GPT 3.5 models.
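To make the consumption-billing point concrete, here is a minimal sketch of estimating what a single long-context request costs under per-token pricing. The rates below are assumptions (roughly the mid-2023 list prices for gpt-3.5-turbo-16k, from memory), so check the official pricing page before relying on them:

```python
# Hypothetical per-1K-token prices in USD; these are assumed values,
# not authoritative -- consult openai.com/pricing for current rates.
PRICES = {
    "gpt-3.5-turbo-16k": {"input": 0.003, "output": 0.004},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one chat completion under per-token billing."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1000

# A 12K-token prompt plus a 1K-token answer comes out to a few cents,
# which for light use can undercut a flat $20/month subscription.
print(f"${estimate_cost('gpt-3.5-turbo-16k', 12_000, 1_000):.4f}")
```

The upshot is that for occasional long-context use, paying per token is far cheaper than a flat subscription; the break-even point depends entirely on your monthly volume.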
Meanwhile, despite announcements of “general availability” I still can’t access GPT 4 via an API and nobody has access to the 32K context version. That would be truly useful to me and worth paying for… but OpenAI does not want my money.
I guess I’ll just have to wait for Anthropic or Google to make a GPT-4-equivalent AI that I can access programmatically without having to prostrate myself in front of Sam Altman.
I suppose it was to be expected, but IMHO this takes the wind out of the sails of the OpenAI / Apple deal. In the end they don't let OpenAI get into the internals of iOS / Siri; it's just a run-of-the-mill integration. They actually are competing with ChatGPT, and I assume they eventually expect to replace it and cancel the integration.
The OpenAI integration also seems set up to data-mine ChatGPT. They will have data that says customer X asked question Q and got answer A from Siri, which he didn't like, so he went to ChatGPT instead and got answer B, which he liked. There's a training set.
I'm always wrong in predictions and will be wrong here too, but I'd expect OpenAI is in a bad spot long term; it doesn't look like they have a product strong enough to withstand the platform builders really going all in on AI. Once Siri works well, you will never open ChatGPT again.