AI can be dangerous, but that's not what's pushing these laws; it's regulatory capture. OpenAI was supposed to release their models a long time ago; instead they just charge for access. Now that actually open models are catching up, they want to stop them.
If the biggest companies in AI are making the rules, we might as well have no rules at all.
They're not immune to the law; the law is just written so that every AI company has to do all the bullshit safety stuff that OpenAI was already doing.
Now, to be fair, OpenAI's safety stuff is actually useful, but it's useful for the "programmable customer service agent" use case and for companies that want a chatbot that won't say fuck. Those tweaks aren't free, though; they come at the expense of model performance. So OpenAI pushed regulators hard on this, and now everyone has to pay the same cost. Wooo!
It means you can't chip away at OpenAI's market by beating them at specialized tasks with a raw, completely unaligned, non-instruction-tuned model that you use for, say, generating code. Because that model could be used to write viruses. Oohhh, spooky.
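To make that concrete, here's a minimal sketch of what "using a raw base model for code" looks like in practice, assuming the Hugging Face transformers library; the model name is just an illustrative open base model, not anything specific to this discussion:

    # Minimal sketch, assuming Hugging Face transformers.
    # The model name is illustrative: any open base (non-instruction-tuned)
    # code model behaves the same way -- it just continues the prompt.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "bigcode/starcoderbase-1b"  # example open base model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    # No chat template, no system prompt, no refusals: plain text completion.
    prompt = "def quicksort(arr):\n"
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))

The point is that there's no alignment layer anywhere in that loop to tax performance or refuse requests, which is exactly what licensing-style rules would target.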
AI is clearly not the problem here. Our preexisting society and massive wealth inequality is. I'll add, though, that there are many, many high-quality open models that smaller companies can build on, and those models and companies are the ones most likely to be harmed by over-the-top regulation. OpenAI will survive jumping through hoops just fine.
We know OpenAI wants this field to get regulated to hell, so this looks like an attempt to generate arguments for AI regulations. The aim isn't to protect against AI but to protect against competitors, so it doesn't matter to them what you do with it.
Potentially unpopular opinion: this may have been done in part with the objective of pushing for regulation or setting up AI laws. Given OpenAI's position, that comes across as regulatory capture.
There are good (or at least sincere) actors talking about AI regulation. I don't agree with all of them, but I believe they're trying to solve problems they genuinely think exist.
OpenAI is not in that group. OpenAI is looking for regulatory capture and propaganda about their AI capabilities, that's it. It is not worth taking them at their word when they talk about AI safeguards. Same with companies like Meta/Facebook, same with Google.
It makes no sense to ask these companies to be in charge of safeguards. Not only are the safeguards they propose likely to be ineffective; in some cases (watermarks, limits on what is and isn't "too powerful") they're likely to be actively harmful, entrenching corporate power and making it even easier for those companies to abuse AI to harm ordinary people.
The entire notion of “safety” and “ethics” in AI is simply a Trojan horse for injecting government control and censorship over speech and expression. That’s what the governments get out of it. The big AI players like OpenAI, Microsoft, Amazon, Google, etc. are incentivized to go along with it because it helps them through regulatory capture and barriers to competition. They also make some friends with powerful legislators to avoid pesky things like antitrust scrutiny.
Legislation should not restrict the development or operation of fundamental AI technologies. Instead, laws should target only the specific uses that are deemed illegal, irrespective of whether AI is involved.
That's not how companies usually lobby for regulation. Much more often, they get regulation that is full of fangs: laws and rules made specifically for them, which they can navigate with their huge legal departments but which would deter any smaller competitor.
So all of this "end of civilisation" talk from OpenAI and other AI company leaders is nothing more than a competitive play to secure their position in the space.
That's one tricky thing about the regulation: so much of the field is behind OpenAI, and the companies in the lead are, coincidentally, the ones pushing for AI regulation. We have to be careful about who is a genuinely worried market actor and who is just looking to slow down a competitor's advantage. The reverse is also true: we can't just listen to OpenAI/Microsoft on the issue. Then there's national security; the threat of China getting better AI than US companies is also a huge risk. I feel sorry for regulators, honestly; this one is going to be much harder than your standard run-of-the-mill issue.
They really don't want this to happen, which I think is a big part of the push behind the "AI is dangerous" narrative. They want to put in place regulations and 'safeguards' that will prohibit any open-source, uncensored, or otherwise competitive models.
This might be going too far, but to me this piece seems aimed at increasing regulators' concerns about AI. OpenAI might view regulation as a means of protecting their competitive edge as other big players enter the AI space.
The proper way to regulate is to disallow certain functions in an AI. Regulating that way wouldn't kneecap OpenAI's competition, though, whereas requiring a license does.
Regulation is going to make it worse by picking winners and losers, and it will leave states with weapons-grade AI hidden from the public. The only way to fight this is with open technology like this: publishing state-of-the-art models for everyone and creating open datasets. It is only a problem today because AI is siloed behind the FAANGs and OpenAI. We need more openness, and I think it will come.