I saw this on Twitter (can't find the original tweet) and I can't get it out of my head. It said that "AI safety IS the business model of AI, as a means to get regulatory capture."

Basically, if you convince everyone that AI safety is so critical that only a megacorp can do it right, then you can get the government to enforce your monopoly on creating AI models. Competition gone. That scares me. But this tactic is as old as time.



Lots of people see it. It's very clear to me and many others that a lot of the AI safety push is about going straight for regulatory capture and effectively outlawing competition.

When Sam Altman is calling for AI regulation, yes, it is a conspiracy by big AI to do regulatory capture. And what is this regulation aimed at making AI safer that you refer to, anyway? Because I certainly haven't heard of it. Furthermore, there doesn't seem to be any agreement on whether or how AI, at anything remotely like its current level, is dangerous, or how to mitigate that danger. How can you even attempt to regulate in good faith without that?

Also AI safety in terms of protecting corporation profits (regulatory capture).

> seize power via regulatory capture

Not even a little bit. "Stop" is not regulatory capture. Some large AI companies are attempting to twist "stop" into "be careful, as only we can". The actual way to stop the existential risk is to stop. https://twitter.com/ESYudkowsky/status/1719777049576128542

> the push to close off AI with farcical licensing and reporting requirements

"Stop" is not "licensing and reporting requirements", it's stop.


As AI ascends, we'll see an economic pivot: more chip foundries, amplified research, and cheaper compute for AI training and deployment. Having the best model won't always be the goal; "good enough" often suffices, setting a baseline for those lagging and capping the gains of front-runners, countering the dystopian view in which an AI elite monopolizes power indefinitely.

Regulatory capture, however, is probably the most significant risk. If companies can make it illegal to compete with them in the name of "safety", a dystopian future is not just possible, but likely.


> regulate AI (read: let a handful of US government aligned closed-source companies have complete monopoly on AI under the guise of ethics)

Regulatory capture would certainly be a bad outcome. Unfortunately, the collateral damage has been to also suppress regulation aimed at making AI safer, advocated by people who are not interested in AI company profits, but rather in arguing that "move fast and break things" is not a safe strategy for building AGI.

It's been remarkably effective for the original "move fast and break things" company to attempt to sweep all safety under the rug by claiming that it's all a conspiracy by Big AI to do regulatory capture, and in the process, leave themselves entirely unchecked by anyone.


The entire notion of “safety” and “ethics” in AI is simply a Trojan horse for injecting government control and censorship over speech and expression. That’s what the governments get out of it. The big AI players like OpenAI, Microsoft, Amazon, Google, etc. are incentivized to go along with it because it helps them through regulatory capture and barriers to competition. They also make some friends with powerful legislators to avoid pesky things like antitrust scrutiny.

Legislation should not restrict the development or operation of fundamental AI technologies. Instead, laws should target only the specific uses that are deemed illegal, irrespective of whether AI is involved.


Absolutely. Those folks arguing for AI regulation aren't arguing for safety – they're asking the government to build a moat around the market segment propping up their VC-funded scams.

Yep. Monopoly through regulatory capture is the end goal of most of the AI x-risk nerds.

The safe business won't hold up very long if someone can gain a short-term advantage with unsafe AI. Eventually the government has to step in with a legal and enforcement framework to prevent greed from ruining things.

“If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law ...”

With AI this seems like an untenable position. We already have companies pushing the bounds of AI constantly and ever further, while telling us we shouldn't worry about the dystopian implications.

Trying to rein these people and companies in with government regulation at this point doesn't seem feasible.


AI can be dangerous, but that's not what is pushing these laws; it's regulatory capture. OpenAI was supposed to release their models a long time ago; instead they just charge for access. Now that actually open models are catching up, they want to stop it.

If the biggest companies in AI are making the rules, we might as well have no rules at all.


The AFR piece that underlies this article [1] [2] has more detail on Ng's argument:

> [Ng] said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.

> “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”

> “Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”

[1]: https://www.afr.com/technology/google-brain-founder-says-big...

[2]: https://web.archive.org/web/20231030062420/https://www.afr.c...


> Instead of regulating the development of AI models, the focus should be on regulating their applications, particularly those that pose high risks to public safety and security. Regulating the use of AI in high-risk areas such as healthcare, criminal justice, and critical infrastructure, where the potential for harm is greatest, would ensure accountability for harmful use, whilst allowing for the continued advancement of AI technology.

I really like this proposed model. Are there good arguments against this?


I'll just copy/rephrase my comment that got buried in a thread last night:

The fear of large corporations controlling AI is an argument against regulation of AI. Regulation will guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction will flow upward exclusively to them. Whereas if we forego regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as do the corporate variants, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open source model weights as a competitive advantage against their malevolent competitors).

It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.


If we continue on this trajectory, I have a suspicion that the big players will increasingly cry “danger!” and, as Sam Altman has done already, call for government regulation of AI. Having potential upstarts buried in red tape is how monopolies and oligopolies sustain their positions in a lot of industries.

> security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies.

Yeah, I think this is my biggest worry, given that it will enable incumbents to be even more dominant in our lives than big tech already is (unless we get an AI plateau again real soon).


Yep, seems like that’s exactly it.

Bob Hayes, quoting Gary Marcus: "It’s time for government to consider frameworks that allow for AI research under ... ethical standards and safety, while considering a pause on the widespread public dissemination of... AI technologies." https://twitter.com/bobehayes/status/1632146399692292097

Gary Marcus: “Still haven’t heard any serious argument against this proposal.” https://twitter.com/GaryMarcus/status/1632146619645988864

Marc Andreessen: “It worked for COVID! Seriously though. If AI really is existentially dangerous, then it can't be worked on anywhere, as the risk of a lab leak is still catastrophic. The only answer is a global authoritarian dictatorship of cosmic proportions.“ https://twitter.com/pmarca/status/1632231156807786496 and https://twitter.com/pmarca/status/1632235642691411970

Loki: “There are only two answers: either everyone needs to chill-out and admit it's not really that dangerous or the PRC needs to have their computing power involuntarily reduced via any means.” https://twitter.com/LokiJulianus/status/1632237941555691522

Marc: “Yes. A lot of places in the world are going to have to be glassed.” https://twitter.com/pmarca/status/1632238233403936768

So, yeah, as suspected, it's a reductio of the "AI safety requires an authoritarian government response" position. He's not saying to nuke China's server farms; he's warning that authoritarian government solutions for AI safety will end up nuking China's server farms.

So assuming you think glassing China is bad, you’re actually agreeing with Marc, making this a sincere case of The Worst Person You Know Just Made a Great Point: https://clickhole.com/heartbreaking-the-worst-person-you-kno...
