If we end up with a compute governance model of AI control [1], this sort of thing could get your door kicked in by the CEA (Compute Enforcement Agency).
""" In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause. """
Not even a little bit. "Stop" is not regulatory capture. Some large AI companies are attempting to twist "stop" into "be careful, as only we can". The actual way to stop the existential risk is to stop. https://twitter.com/ESYudkowsky/status/1719777049576128542
> the push to close off AI with farcical licensing and reporting requirements
"Stop" is not "licensing and reporting requirements", it's stop.
The list of recommendations at the end starts with a moratorium on open source AI until the following controls are implemented: you’re not allowed to release a model without a government license, you have to release your training data, and you’re liable for misuse of your model.
It also starts with a fawning review of the benefits of closed models only accessible via API.
David is a former Meta employee and is now pushing hard for global AI regulation.
Gatekeeping public access and innovation in AI leaves us with a world subjugated to powerful corporate interests and their lobby groups. We won't even be able to teach AI or run lab experiments without a government license; we'll be forced to release training data and to assume liability, which will have a massive chilling effect.
This article asks for a global overreach based on the premise of “Those with wealth, power and influence know best and will keep you safe.”
> The laws we have already cover malevolent and abusive uses of software applications.
They don't. Or at least not nearly enough. Otherwise you wouldn't have automated racial profiling, mass facial recognition, credit and social scoring, etc.
And it's going to get worse. Because AI is infallible, right? Right?!
That's why, among other things, EU AI regulation has the following:
- Foundation models have to be thoroughly documented, and developers of these models have to disclose what data they were trained on
- AI cannot be used in unacceptable-risk applications (e.g. social scoring)
- When AI is used, its decisions must be explicable in human terms. No "AI said so, so it's true". Also, if you interact with an AI system it must be clearly labeled as such
Well-sourced conference report. Actually saves me a lot of time. To summarize: it's still the Wild West in AI, but there is broad recognition that governance is essential.
I'll just add one more report, as if the dozens already mentioned were not enough. It's from Berkeley's Center for Long-Term Cybersecurity, and it addresses the enormous challenge of securing AI systems against adversarial attack. A glimpse into the vortex of how the "industrialization of AI" creates a self-perpetuating, fractal-like cycle of eternal dependencies, requiring us to build ever stronger AI to protect and serve the AI on which our new engineering platforms will be founded.
When Sam Altman is calling for AI regulation, yes, it is a conspiracy by big AI to do regulatory capture. What is this regulation aimed at making AI safer that you refer to, anyway? Because I certainly haven't heard of it. Furthermore, there doesn't seem to be any agreement on whether or how AI, at anything remotely like its current level, is dangerous, or how to mitigate that danger. How can you even attempt to regulate in good faith without that?
I saw this on Twitter (can't find the original tweet) and I can't get it out of my head. It said that "AI safety IS the business model of AI, as a means to get regulatory capture."
Basically if you convince everyone that AI safety is so critical and only megacorp can do it right, then you can get the government to enforce your monopoly on creating AI models. Competition gone. That scares me. But this tactic is old as time.
So they put the people at the forefront of creating AI in charge of steering AI policy when it comes to safety? There are four civil rights people on the board. The rest are autocrats of the software industry.
Yes, and that's not happening right now, and it's a Big Fucking Problem. I am pretty sure this will someday be a prime case study in AI ethics courses. A Waco type of "how could this happen" moment.
When AI systems have the ability to interact with and influence the real world, there is indeed a potential for both positive and negative outcomes. The actions performed by an AI in the real world can have significant consequences, and if not properly controlled or regulated, it could lead to unintended harm.
To mitigate the risks, it is crucial to have robust safety measures, ethical guidelines, and oversight mechanisms in place. These measures should ensure that the AI system operates within predefined boundaries and follows strict ethical standards. Transparent and accountable governance is necessary to monitor and regulate the system's behavior, preventing malicious use or unintentional harm.
Basically, as I understand it, it divides AI systems (in the broadest machine-learning sense) into risk categories: unacceptable risk (prohibited), high risk, medium/other risk, and low risk.
Applications in the high risk category include medical devices, law enforcement, recruiting/employment and others. AI systems in this category will be subject to the requirements mentioned by most people here (oversight, clean and correct training data, etc).
Medium-risk applications seem to revolve around the risk of tricking people, for example via chatbots, deepfakes, etc. In this case the requirement is to "notify" people that they are interacting with an AI or that the content was generated by AI. How this can be enforced in practice remains to be seen.
And the low risk category is basically everything else, from marketing applications to ChatGPT (as I understand it). Applications in this category would have no mandatory obligations.
That's terrifying. This simple requirement would be trivial to enforce automatically, and yet nobody gives a fuck.
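For what "trivial to enforce automatically" could look like, here is a minimal sketch of a chatbot backend attaching the disclosure at the API layer; the field names and the generate() stand-in are assumptions for illustration, not any real product's API:

    from typing import Dict

    def generate(prompt: str) -> str:
        # Stand-in for whatever model call the service actually makes.
        return f"(model output for: {prompt})"

    def labeled_reply(prompt: str) -> Dict[str, object]:
        # Attach both a machine-readable flag and a human-readable notice
        # to every response, so the "you are talking to an AI" disclosure
        # can't be silently dropped by the front end.
        return {
            "content": generate(prompt),
            "ai_generated": True,
            "disclosure": "This reply was generated by an AI system.",
        }

    if __name__ == "__main__":
        print(labeled_reply("What are the risk tiers?"))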
It's unbelievable how fast-and-loose people are playing the topic of AI safety. If a strong AI is ever actually developed, there is no chance it will be successfully contained.
I'm far more worried about how they will try to regulate the use of AI.
As an example, the regulations around PII make debugging production issues intractable, as prod is basically off-limits lest a hapless engineer view someone's personal address, etc.
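(To make that concrete: the usual workaround is a scrubbing pass like the minimal sketch below before anyone debugs against production records; the field list and regex are illustrative assumptions, not requirements from any particular regulation.)

    import re

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    SENSITIVE_FIELDS = {"name", "address", "phone"}  # assumed schema, illustrative only

    def scrub(record: dict) -> dict:
        # Mask known-sensitive fields and anything that looks like an email
        # before the record ever reaches an engineer's screen.
        out = {}
        for key, value in record.items():
            if key in SENSITIVE_FIELDS:
                out[key] = "[REDACTED]"
            elif isinstance(value, str):
                out[key] = EMAIL_RE.sub("[EMAIL]", value)
            else:
                out[key] = value
        return out

    if __name__ == "__main__":
        print(scrub({"name": "Jane Doe", "note": "contact jane@example.com", "order_id": 42}))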
How do they plan to prevent/limit the use of AI? Invasive monitoring of compute usage? Data auditing of some kind?
so what you're saying is we should form a software engineers' guild that prohibits members from using AI, and then use our collective power to force companies not to use it
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.
To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well connected and well heeled investors and is to be avoided at all costs.
It's also deeply troubling that regulatory capture is such an issue these days, so putting a government entity in front of the use and existence of this technology is a double whammy; it's not simply about innovation.
The current generation of AIs are "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.
But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.
And if you think this isn't an issue, I wrote this post an hour or two before I managed to take it live because Comcast went out at my house, and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.
The Blumenthal-Hawley Framework Outlines AI Safety Legislation that America Desperately Needs. We wholeheartedly support the following elements of the framework and urge lawmakers to codify them:
- issuing licenses for advanced general-purpose AI models like GPT-4
- requiring developers to uphold common-sense safety practices
- gaining assessments of the AI landscape through an oversight body
- ensuring developers assume liability for harms from their systems
- structuring controls against unintended proliferation of advanced AI
> The bill creates a registry for all high-performance AI hardware. If you "buy, sell, gift, receive, trade, or transport" even one covered chip without completing the required form on time, you have committed a CRIME.. The Frontier Artificial Intelligence Systems Administration.. can straight up compel testimony and conduct raids for any investigation or proceeding, including speculative "proactive" investigations. This really is math cops.. There is a criminal liability section. For Doing Math. Or for attempting to do math, or for not telling the gov that you're doing math.. The Administrator can literally seize and destroy all the hardware and software.. conscript troops..
[1] https://podcasts.apple.com/us/podcast/ai-safety-fundamentals...