And safely here means non-disruptive to established businesses. If they create a tool so powerful that it can replace entire professional classes, people could simply bypass the employers and get that value directly from an API.
OpenAI needs to make sure they have a business model where they can charge enterprise fees and provide value to corporations, not directly to people. ChatGPT was merely a publicity stunt. I do believe they got scared of how much value people could derive independently from it though, hence the "regulate this" pressure.
OpenAI has a good business model here, though possibly a bit unethical.
Shopify (who recently laid me off, but I still speak highly of them) locked down public access to ChatGPT's website. But you could use Shopify's internal tool (built using https://github.com/mckaywrigley/chatbot-ui) to access the APIs, including GPT-4. And it was great!
So look at this from OpenAI's perspective. They could put up a big banner saying "Hey everyone, we use everything you tell ChatGPT to train it to be smarter. Please don't tell it anything confidential!" And then also say "By the way, we have private API access that doesn't use anything you say as training input; maybe your company would prefer that?"
The louder they shout those two things, the more businesses will line up to pay them.
And the reason they can do this: they've built a brilliant product that everyone wants to use and everyone is going to use.
The company just demoed ChatGPT integration in its product in a webinar today.
It was interesting, but security concerns have kept the company from doing more. The only thing demoed was a fairly simple data pipeline creation flow driven by GPT prompts: essentially, push data from source A to sink B.
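For a sense of what that kind of prompt-driven pipeline setup can look like, here's a minimal sketch. The endpoint and request shape are OpenAI's public chat completions API; the PipelineSpec schema and the prompt are invented for illustration:

    // Sketch: turn a natural-language request into a pipeline spec.
    interface PipelineSpec {
      source: string; // e.g. "postgres://orders"
      sink: string;   // e.g. "s3://warehouse/orders"
    }

    async function draftPipeline(request: string): Promise<PipelineSpec> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4",
          messages: [
            { role: "system", content: 'Reply only with JSON: {"source": string, "sink": string}' },
            { role: "user", content: request }, // e.g. "copy new orders from Postgres to S3"
          ],
        }),
      });
      const data = await res.json();
      // The model's reply is untrusted input: parse and validate before acting on it.
      return JSON.parse(data.choices[0].message.content) as PipelineSpec;
    }

Note that the user's request, and anything interpolated into it, is exactly the data that ends up on OpenAI's servers, which is where the concern below comes from.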
They clearly could have done much more, but anything beyond that would require pushing potentially sensitive data to OpenAI's APIs, and that's been a big red flag at my employer.
It'll be interesting to see how those security concerns get resolved.
Whether or not you think it's a fad, ChatGPT has close to 200 million active users. No crypto company ever came close to that.
Whether OpenAI's API is a viable or risky product is a secondary and separate question. Yes, there are a lot of wrappers out there, but that doesn't matter with regard to the usefulness of LLMs generally.
This has nothing to do with ChatGPT. An API endpoint is just as vulnerable no matter what application calls it. There's nothing special about an LLM interface that makes this more or less likely.
It sounds like you're weaving science fiction ideas about AGI into your comment. There's no safety issue here unless you think that ChatGPT will use api access to pursue its own goals and intentions.
I strongly disagree, and think this statement is basically completely wrong. I am part of the public, and I'm benefiting tremendously from the product OpenAI has built. I would be very unhappy if my access to ChatGPT or Copilot were suddenly restricted. I extract tons of (perceived) value from their product, and they receive some value in return from my subscription. It's a win-win.
I hope it isn't true! I'd expect them and other LLM orgs to support plugins where it isn't worth building the functionality internally, while replacing the ones they can do better themselves with little effort. It's hard to be upfront and honest about what the future looks like this early in the applied-statistics arena. Maybe OpenAI should be deferring more to their Head of PR?
This is the usual "Netflix attempting to become HBO faster than HBO can become Netflix" dynamic [1], and it deserves a critical eye to where the work is being done and where the value is being delivered.
EDIT: btown mentions "Just because ChatGPT+Plugins isn't going to be the center of people's lives" in their comment, and I think this is somewhat the point folks are concerned about. ChatGPT (LLM interfaces in general, broadly speaking) might be the IDE of the white collar future, versus an API you call. And maybe that is okay if they can deliver a superior experience to where people live today (Excel, email client, word processor, other SaaS tools, a development environment). Half joking, maybe they should acquire Superhuman [2] and rebrand to that versus "ChatGPT", because that is the ability consumers and enterprises are paying for: to give people doing work superhuman ability.
(disclosure: paying OpenAI customer, no other relation to any org mentioned in this comment)
As ChatGPT and OpenAI's other products grow in popularity, so will their value to hackers. I have no doubt people using GPT are discussing sensitive details about their business operations and personal lives with it.
So OpenAI should take cybersecurity seriously. Credit card details are nothing compared to chat logs; chat logs will be of high value.
Also, I've seen the idea floating around, especially with typed languages like TypeScript, that developers write just the signature of a function and have GPT/Copilot implement it. If developers trust the output and don't care… what are the chances someone can trick GPT into producing unsafe code? There are attack vectors via the chat interface, the training data, and physical attacks via employees, e.g. phishing an OpenAI employee to gain covert access to the infra/model.
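To make that concrete, here's a sketch in TypeScript. The types and function names are mine; the point is that an unsafe completion can type-check perfectly well:

    interface User { id: number; name: string }
    interface Database {
      queryOne(sql: string, params?: unknown[]): Promise<User | null>;
    }

    // The developer writes only this and asks the model for the body:
    //   function findUser(db: Database, username: string): Promise<User | null>

    // A plausible completion: it compiles and works, but splices the
    // username straight into the SQL string, so it's injectable.
    async function findUser(db: Database, username: string): Promise<User | null> {
      return db.queryOne(`SELECT * FROM users WHERE name = '${username}'`);
    }

    // What a reviewer should insist on instead: a parameterized query.
    async function findUserSafe(db: Database, username: string): Promise<User | null> {
      return db.queryOne("SELECT * FROM users WHERE name = $1", [username]);
    }

The type system checks the shape of the code, not its safety, so "it type-checks" is a weak review bar.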
If I were an intelligence agency, gaining covert access to OpenAI's backend would be a primary objective.
You left out the part where they say it IS okay to use it on the backend of a hosted application or service.
The way I read this is that they are reserving the right to build an API product like OpenAI's on top of this model, and to control that particular part of the market. So it can't just be a wrapper that puts another brand on the model and opens it up to general-purpose use.
But you can build hosted chat-based applications. They just need to apply the model to some use case.
You and I are not the 'end user' of this software. You and I are customers of OpenAI's customers.
The companies implementing ChatGPT are going to restrict -our- access to the APIs and the most relevant conversations.
Both OpenAI and the companies it's selling its technology to benefit this way; OpenAI profits from both sides. You and I are left out, though, unless we want to pay much more than we're paying now.
Always worth remembering that only the fine-tuned ChatGPT is so filtered. The raw GPT models accessible through OpenAI's API are not, and will happily generate violence, gore, pornographic content, etc. They never use the phrase "As an AI language model…"; if you ask gpt-3.5-turbo "What are you", it will even claim to be a human being.
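This is easy to verify yourself. A minimal sketch (the endpoint and model name are OpenAI's real ones; note there's no system prompt at all, so you're talking to the raw chat model):

    // Minimal direct call to the API model, no system prompt.
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: "What are you?" }],
      }),
    });
    console.log((await res.json()).choices[0].message.content);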
The "AI safety" argument that OpenAI is so fond of is nothing more than cover for policies that avoid making OpenAI look too socially disruptive. The APIs will always be accessed through someone else's frontend so they don't care about making them "safe".
The main thing that ChatGPT has gotten better at is rejecting jailbreaks and refusing to go off the reservation. It has been demonstrated that "safety" trades off against "capability". I'm sure OpenAI has evaluations that demonstrate the improvements they've been making to "safety" have not come at the cost of capability, but I'd bet those evaluations are wrong (by being insufficient). It also wouldn't surprise me if the tradeoff between "safety" and capability is just intrinsic.
You can have your model talk like HR or think like a mad scientist, but not both equally well.
> What we do is fairly easy, the whole "value" is in some fairly simple prompt on top of openAI APIs.
We're seeing a lot of these apps nowadays. People come up with a prompt, send it to OpenAI along with the user's data, hide it behind a web interface, and it's an app they can charge for.
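The entire "product" is often not much more than this sketch (the hidden prompt and the server are made up for illustration; the API call is OpenAI's real chat completions endpoint):

    import { createServer } from "node:http";

    // The "secret sauce": a hidden prompt wrapped around whatever the user types.
    const HIDDEN_PROMPT =
      "You are an expert copywriter. Rewrite the user's text to be punchier.";

    async function complete(userText: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: HIDDEN_PROMPT },
            { role: "user", content: userText },
          ],
        }),
      });
      return (await res.json()).choices[0].message.content;
    }

    // Put a web form in front of it and charge a subscription.
    createServer(async (req, res) => {
      let body = "";
      for await (const chunk of req) body += chunk; // the user's raw text
      res.end(await complete(body));
    }).listen(3000);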
I've thought about this a million times for scratching my own itches but I just talk to ChatGPT and it's fine.
It probably is. I'd bet that for 99.99% of their users, simply using the API directly would be much cheaper. The ChatGPT premium subscription is overpriced if you compare it to the cost of each API call… at least according to OpenAI's pricing.
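Rough back-of-envelope, using OpenAI's published prices at the time of writing (gpt-3.5-turbo at around $0.002 per 1K tokens): the $20/month Plus subscription is the price of roughly $20 / ($0.002 per 1K) = 10 million API tokens, far more than a typical chat user sends in a month.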
Many comments miss the mark because they fail to make the crucial distinction between ChatGPT and GPT-4. GPT-4 is the underlying model, which one can indeed access directly on a pay-per-request pricing scheme. ChatGPT is an application built on top of GPT-4 that manages how the 'context' of your interaction is passed in on a per-request basis. I don't doubt the spokesperson for a minute: from my own experience, the underlying GPT-4 models have not changed, and I sincerely believe that OpenAI will be careful on this front, given that they are aiming to build a once-in-a-generation company that provides a stable platform for other firms to build products on top of.
The ChatGPT application, on the other hand, and how it manages context etc., has certainly changed in the intervening time. That is completely expected, as even (and perhaps especially) OpenAI is still figuring out how to build applications on top of LLMs, which means balancing how to get the best-quality results out of the model while making ChatGPT in particular a profitable business.
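A stripped-down sketch of what that context management has to do on every turn; the truncation policy here is invented, and the real system is surely smarter about summarizing or trimming history:

    interface Message { role: "system" | "user" | "assistant"; content: string }

    // Stand-in for the actual HTTP call to the chat completions endpoint.
    declare function callModel(messages: Message[]): Promise<string>;

    // The model itself is stateless: the app resends the whole conversation
    // on every request, trimming old turns when the history gets too big.
    class ChatSession {
      private history: Message[] = [
        { role: "system", content: "You are a helpful assistant." },
      ];

      async send(userText: string): Promise<string> {
        this.history.push({ role: "user", content: userText });
        this.trim(12000); // crude character budget standing in for a token budget
        const reply = await callModel(this.history);
        this.history.push({ role: "assistant", content: reply });
        return reply;
      }

      private trim(budget: number): void {
        // Drop the oldest non-system turns until the serialized history fits.
        while (JSON.stringify(this.history).length > budget && this.history.length > 2) {
          this.history.splice(1, 1);
        }
      }
    }

Every change to that kind of plumbing (the system prompt, the trimming policy, what gets injected into context) changes ChatGPT's observable behavior without touching the underlying model at all.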
Stratechery has analyzed this problem for OpenAI in the most detail I've seen. I imagine the company is in something of a bind figuring out how to split investment between the APIs themselves and ChatGPT. On the one hand, the latter is incredibly successful as a consumer app, with a lead that will be difficult for rivals to catch up with, and plugins will likely provide a good revenue base. On the other hand, there is arguably a greater business opportunity in being the foundation for an entire generation of AI products and taking basis points off of revenues, but only if GPT-4 indeed has a significant moat over the open-source alternatives. For the moment, it would seem they will have to hedge both bets as the consumer space and the competition between models heat up.
I don't think whether it "legally matters" or not is important.
OpenAI is marketing ChatGPT as an accurate tool, and yet a lot of the time it is not accurate at all. It's like… imagine a Wikipedia clone that claims the earth is flat cheese, or a cruise control that crashes your car every 100th use. Would you call that "just another tool"? Or would it be "a dangerously broken thing you should stay away from unless you really know what you are doing"?
I believe API (chat completions) data has been kept private for a while now, while ChatGPT (the chat application OpenAI runs on its chat models) has continued to be used for training… I believe this is why it's such a bargain for consumers. This announcement allows businesses to let employees use ChatGPT with fewer data privacy concerns.