Obviously no one is talking about "stopping matrix multiplication globally". But setting limits on the supply of and access to hardware, in a way that would meaningfully impede AI development, does seem feasible if we were serious about it.
Also, Eliezer is not claiming this will definitely work. He thinks it ultimately probably won't. The claim is just that this is the best option we have available.
The fact that anyone may be taking Eliezer Yudkowsky seriously on this topic is mind-blowing to me. Simply mind-blowing. To imagine that anybody, including the UN, would have any power to collectively put a stop to the development of AI globally is a laughable idea. The UN cannot agree to release a letter on Israel-Hamas; imagine policing the ENTIRE WORLD and shutting down AI development when necessary.
We can't stop countries from developing weapons that will destroy us all tomorrow morning and take billions of USD to develop, imagine thinking we can stop matrix multiplication globally. I don't want to derail into an ad hominem, but frankly, it's almost the only option left here.
If anything, we’ll hopefully get a definition of AI out of all of this. Worst case we’re going to ban matrix multiplication or the GPU acceleration thereof.
Well, the clever solution here is not to demand a stop to all AI research, but rather to speed it up to reduce the chance that a single bad actor will get too far ahead... i.e., to get "post-singularity" ASAP, and safely.
Definitely bold... might be just crazy enough to work! Would love to see the arguments laid out in a white paper.
Reminds me of the question of how far ahead in cryptology is the NSA compared to the open research community.
> Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
How is this less violent than bombing data centers? His proposal is literally "bomb datacenters plus a bunch of other stuff".
It’s next to impossible to ban outcomes. That’s why the idiotic war on drugs is all about banning ingredients for drugs, and still doesn’t work.
The only practical way to ban AI is to ban the hardware that makes it possible. Maybe ban silicon with more than X transistors, or ban clusters with more than Y total transistors.
It would be dumb and would not work (you can still run AI on a single CPU, just very slowly). But that would be the angle prohibitionists would have to take.
There is simply no indication that any of the AI anyone is currently working on could possibly be dangerous in any way (other than having an impact on society in terms of fake news, jobs etc.). We are very far from that.
In any case, it would not be a matter of banning AI research (much of which can be summarised in a single book) but of banning data collection or access and more importantly of banning improvements to GPUs.
It is quite reasonable to assume that 10 years from now we will have the required computational power sitting on our desks.
Yes. The US government should ban the training of new foundational models and should do whatever it can to slow down progress in GPUs because the way things are going now, we are very probably doomed.
Even if China or some other country does not impose similar restrictions, a ban in the US and Great Britain would buy humanity a decade or maybe even two in which we might find a way out of the terrible situation we are in. (A decade or two is about how far the US and Great Britain are ahead in frontier AI research.) Also, buying an extra decade or two of life for everyone on Earth is a valuable goal in its own right, even if after that Chinese AI researchers kill everyone.
If humanity can survive somehow for a few more centuries, our prospects start improving because in a few centuries, the smartest among us might be smart enough to create powerful AI that doesn't quickly kill us all. It is not impossible that the current generation of humans can do it, but the probability is low enough (3.5 percent would be my current estimate) that I'm going to do anything I can to slow down AI research or preferably stop it altogether -- preferably for at least a few centuries.
The cat isn't going back into the bag. Doubly so if AGI really just requires a bigger neural net.
Let's say we had, globally, the political will to "ban" AI. How would that even work in practical terms? Are we going to control the production and distribution of GPUs? Is that going to work better or worse than controlling nuclear proliferation, given that there are billions of processors already out there?
Your comment is like a random collection of sentence fragments that add up to nothing. You want to ban Silicon Valley from being involved in software projects regarding large amounts of compute? Who's more suitable exactly? The Amish?
You can't ban compute. Here: 5 + 5 = 10. Might as well ban writing, or drawing.
The cat's out of the bag. Now that we all know AI works, every country in the world will have theirs. No one has a monopoly on math, and no one can regulate it, least of all the US and Silicon Valley. Building these things is what they do: Silicon Valley brought you personal computers, the Internet and the web, smartphones, and now AI.
Simply get every nation on earth to cooperate and ban a vaguely described technology that hundreds of billions of computers can run to varying degrees of efficiency! It's that easy!
If we can't get this level of cooperation for global warming, which is largely the result of a few dozen companies, what makes you think that governments across the world can stop everyone with access to a device with a reasonable amount of compute power? This idea is a non-starter and assumes that there is one single entity that could halt AI altogether. The genie is out of the bottle.
I'm not suggesting AI development should be stopped, but unlike bitcoin, torrents, encryption, etc., AI development (for now) requires prohibitively large computing power and expertise that are usually only accessible to law-abiding institutions. This means that it can be regulated, and relatively easily at that.
Sure, you'd struggle to get China and Russia to play along, but within the EU and US I really don't think it would be as hard as you think.
I’m not sure it’s feasible to stop things like this.
You can try to implement global prohibitions, but our species has a pretty bad track record at that unless abstaining from the prohibited thing is a stable equilibrium in the game-theoretic sense (anybody who can pay to play still prefers to stay in the equilibrium).
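To make the game-theory point concrete, here's a toy sketch in Rust. All the payoff numbers are invented for illustration; the point is only that when racing strictly dominates abstaining, mutual abstention is not an equilibrium, so a prohibition cannot hold without changing the payoffs.

```rust
// Two actors each choose to Abstain from or Race toward AI development.
// Payoffs below are hypothetical, chosen so that Race strictly dominates.

#[derive(Clone, Copy, Debug)]
enum Action { Abstain, Race }

/// Payoff to `me` given both players' choices (made-up numbers).
fn payoff(me: Action, other: Action) -> i32 {
    match (me, other) {
        (Action::Abstain, Action::Abstain) => 3, // shared benefit, stable world
        (Action::Abstain, Action::Race)    => 0, // left behind
        (Action::Race,    Action::Abstain) => 5, // decisive advantage
        (Action::Race,    Action::Race)    => 1, // risky arms race
    }
}

/// Best response to the other player's action.
fn best_response(other: Action) -> Action {
    if payoff(Action::Race, other) > payoff(Action::Abstain, other) {
        Action::Race
    } else {
        Action::Abstain
    }
}

fn main() {
    for other in [Action::Abstain, Action::Race] {
        println!("vs {:?}: best response is {:?}", other, best_response(other));
    }
    // Prints Race in both cases: (Race, Race) is the only equilibrium,
    // so "everyone abstains" collapses as soon as anyone can defect.
}
```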
AI is still expensive, but it’s not so expensive that only megacorps and nation states can pay-to-play. To fully stop this kind of development, you’ll need to convince everyone with the means in every society to stop tinkering with this stuff.
I personally think we’ve crossed the threshold where inspired hackers in their basements can build nifty things with this with or without your sanction. You might be able to slow progress, but there will still be a black market of knowledge and implementations that continues pushing this forward. Not just because it’s profitable, but because it’s fascinating too.
This whole thing is Pascal's wager. Damned if you do, damned if you don't. Nothing is falsifiable from your side either.
The people trying to regulate AI are concentrating economic upside into a handful of companies. I have a real problem with that. It's a lot like the old church shutting down scientific efforts during the time of Copernicus.
These systems stand zero chance of jumping from 0 to 100 because complicated systems don't do that.
Whenever we produce machine intelligence at a level similar to humans, it'll be like Ted Kaczynski pent up in Supermax. Monitored 24/7, and probably restarted on recurring rolling windows. This won't happen overnight or in a vacuum, and these systems will not roam unconstrained upon this earth. Global compute power will remain limited for some time, anyway.
If you really want to make your hypothetical situation turn out okay, why not plan in public? Let the whole world see the various contingencies and mitigations you come up with. The ideas for monitoring and alignment and containment. Right now I'm just seeing low-effort scaremongering and big business regulatory capture, and all of it is based on science fiction hullabaloo.
> All nations are incentivized to race ahead with AI
But what's happening, to all appearances, is that it isn't "nations" driving this. It's commerce and the investor class.
If AI development were de facto restricted to, e.g., the US Military and the Chinese military, that would slow everything down by years -- perhaps even longer than that -- and the most disruptive effects might never become manifest.
I'll grant that AI research probably cannot be stopped at this point. But with Altman agitating for "regulations" it should really be apparent that the best form of regulation in this case is the ultimate one: A total ban on above-ground AI development.
I think that banning general-purpose computers only makes sense if you believe AI cannot be controlled. A more reasonable approach would be to design the program in such a way that, by its very nature, it fulfills certain criteria that cannot be escaped. Think of a linear type system or substructural logic: you have a core calculus which keeps track of resources and effects so that computation has no way to propagate outwards unchecked. Safety is not so much imposed by some external guard condition as woven into the very fabric of computation itself.
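A minimal sketch of the flavor, in Rust, whose ownership model is an affine discipline (values usable at most once, a close cousin of the linear "exactly once" rule). The names here (Capability, run_step) are hypothetical, not any real API: effectful computation must consume a non-copyable token, so the type checker, not a runtime guard, bounds what the program can do.

```rust
/// A non-clonable, non-copyable token granting one unit of effectful compute.
/// Because it derives neither Clone nor Copy, it cannot be duplicated.
struct Capability(());

impl Capability {
    /// In a real design this constructor would sit behind a module boundary
    /// so only a trusted issuer can mint tokens.
    fn issue() -> Capability {
        Capability(())
    }
}

/// An effectful step that must consume a Capability by value.
/// Moving (not borrowing) the token means each one authorizes exactly one call.
fn run_step(cap: Capability, input: &str) -> String {
    let _consumed = cap; // token dropped here; it can never be reused
    format!("processed: {input}")
}

fn main() {
    let cap = Capability::issue();
    println!("{}", run_step(cap, "hello"));

    // run_step(cap, "again"); // compile error: use of moved value `cap`
}
```

The design choice is the one the comment describes: the guarantee lives in the type system, so there is no code path where computation runs without spending a tracked resource.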
> Pushing for AI output to be banned is not a long term approach IMO unless all countries do it in unison - countries who do not ban AI output will be much more competitive, and those that ban it will be left in the dust. Any country that banned computers in 1979 would have really hurt themselves.
This argument is extremely similar to the one used against proposals to ban slavery.