As I and others have predicted, the executive order was passed, defining a hard limit on the processing/compute power allowed without first 'checking in' with the Letter boys.
Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".
> you must report to the US government about how you created and tested your model.
If you're referring to the recent executive order: only when the model is "dual-use", meaning the following:
---
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
This is some terrible reporting. If you actually read the executive order[0], you'll see that while it pays lip service to several general issues associated with AI, like job displacement and privacy violations, it is actually pretty narrow in what it means by AI safety. It's specifically looking to restrict access to AI models that could aid biological weapons development by modelling complex biochemical interactions.
While I'm unconvinced this is the best way to tackle the issue, and there's always the possibility for overreach, the story as presented, namely that AI companies are about to become burdened with a vast and nebulous regulatory infrastructure prompted by vague and poorly informed fears about AI, is bunk.
Is there any statement in this Executive Order that raises the bar for smaller AI companies? Most of the statements are about funding new research or fostering responsible use of AI, and the only one that would add a burden to AI companies seems to be the first: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government." And only the most powerful AI systems face that requirement.
The comments will inform the drafting of regulations on open-weight models under the Biden executive order on AI, which invokes his powers under the Defense Production Act.
> Government regulations setting limits on the powers attached to AI might.
This only works if all governments agree on the rule. AI is an economic competitive advantage, and it's in every country's best interest that advances in AI are developed within its own borders.
> So I think it is in our best interest to play nice with this technology and not use it for questionable purposes or else the inevitable law that will need to be put in place is "it is illegal to trade or own AI models capable of GENERATING illegal content"
> Creating new safety and security standards for AI, including measures that require AI companies to share safety test results with the federal government
It will be... interesting to see how models get "tested" for safety/security.
It won't. People have never defeated a useful new technology that destroys jobs, and people widely like using these tools; you'd need to ban their use worldwide. If the US bans AI, China and other countries will become dominant in it. Assuming AI continues to improve, there's an extreme advantage for any country that has it.
> AI also poses socio-ethical consequences that don’t exist on the same scale as computer software, necessitating more restrictions like behavioral use restrictions
There's plenty of software that has, or could have, similar restrictions. Consider software that lets you plan vantage points for a shooting, or estimate the impact of detonating explosives at various locations. And the government already regulates all sorts of software and hardware for export because of its military uses: everything from development tools, to high-performance chips that could crunch numbers for a nuclear program, to CAD software that can help you build (or destroy) a bridge. The CPUs and GPUs themselves are regulated above certain performance levels, I think.
Special session and panel discussing the White House Executive Order on AI issued on Monday, and the UK AI Safety Summit taking place this week. Here is the U.S. White House Fact Sheet.
> The agreement is unlikely to slow the efforts to pass legislation and impose regulation on the emerging technology. Lawmakers in Washington are racing to catch up to the fast-moving advances in artificial intelligence. And other governments are doing the same.
Our government is just getting a firm grasp on email -- something tells me there's going to be a hotbed of lobbyists writing the laws around AI... again.
> And if they tried, they would hobble the companies involved.
Well, yeah, that's the point of regulation: to limit corporate behavior. There are plenty of other highly-regulated industries in the US; why shouldn't AI be one of them?
[0] https://www.whitehouse.gov/briefing-room/presidential-action...