Are you really suggesting drone-striking data centers?
What about running AI models on personal hardware? Are we going to require that everyone must have a chip installed in their computer that will report to the government illegal AI software running on the machine?
everyone already has computers that can "run AI". would be pretty ridiculous if some law was passed and everyone's computers were confiscated. that would be like a lobotomization of society. a very stupid and short-sighted move i think.
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
How is this less violent than bombing data centers? His proposal is literally "bomb datacenters plus a bunch of other stuff".
I think it's safe to say that, if a large entity with a monopoly on force wants to stop these models from being used, they probably could. We already have a global surveillance machine watching us all, and normalized content takedowns for lesser reasons like "copyright infringement" and "obscene/exploitative material". Actual manufacturing and distribution of compute power is fairly consolidated, and hacker culture seems to have lost its activist edge in demanding legal rights around general-purpose computing. The future seems bleak if your threat model includes state actors enforcing AI regulation due to state-level safety concerns (terrorism, WMD production, large-scale foreign propaganda, etc.).
When AI gets smart enough to do bad things there will definitely be a lockdown. What happens when you can program your robot to commit crimes for you? You won't be able to load certain types of programs by law. They are already doing this with geofencing for drones.
AI is too powerful a technology to let it out there to the masses. People might use it for killer drones after all. All users of AI must be tightly controlled and registered with the authorities!
This is the problem with certain kinds of technology that are bumping up against the edge of innovation. They're too powerful, and if these technologies get into the hands of the DIY set, governments will lose control, so they have to DRM and regulate everything. Heck, it's a problem with old technology too. Many weapons aren't that complicated technologically, but their production and use are tightly regulated.
Edit: I'm not saying this is a good thing, I'm just deconstructing their thought process for tight control over AI tech going forward.
Oh boy. Legislating government control of all computational resources is not a path we want to go down. Read Vernor Vinge's "Rainbows End" for some fun ideas on how this screws everyone over. Watch Cory Doctorow's talk about the war on general-purpose computing for some more immediate concerns.
I wonder if nebulous fears about AI will soon be added to the ranks of famous justifications for horrendously overbearing laws, like stopping terrorism, the war on drugs, or thinking of the children.
That won't happen because the industry controls our governments too much. There is a lot of value to be produced before the algorithms become really dangerous.
Moreover, controlling AI research is even harder than controlling nuclear research. Creating a technological disadvantage compared to rogue states without such regulations does not seem like a good idea.
I fear government intervention would not stop the progress of AI, it would only replace some negatives with others. The cat's out of the bag, people will experiment and soon we may have AIs like this running on home computers -- I know that's what I will be experimenting with, unless we get some really draconian laws about what I can and cannot do with my own computer. And if the US government outlaws AIs or something, are we just going to watch as other nations capitalize on the advantages while we stagnate?
"Killer drones are a big concern....dangers of abuse, especially by authoritarian governments...AI can amplify discrimination and biases".
Bengio is specifically talking about malicious actors using AI. So saying "just unplug it" isn't really applicable, especially when it comes to governments.
What about attacks on people using said AI systems? You know, attacks that manipulate our behaviour patterns, violate our privacy, and steal our digital property.
I think we're probably heading toward another AI winter anyway, so this would be like the government covering it up until things thaw again.
But anyway, there's no way to enforce not running certain programs without some kind of Orwellian surveillance baked into hardware, which I would abhor more than Roko's basilisk, and it would make the people at the forefront of the technology criminals and geopolitical adversaries. It would be an ultra-bonehead move. As always, technology and law don't mix well.
This feels like Icarus' wings being melted. Tech gave us so much, but maybe this shows that it can go too far. Perhaps the Great Filter theory, which holds that at a certain point a civilization's technology becomes its own demise, has some validity.
Banning certain AI work is going to be a great joke. Kind of like Prohibition was in the '20s. It's not like a nuke that requires a lot of infrastructure and manpower. These AI attacks can be accomplished with a computer from Best Buy. Hell, building the box yourself is far cheaper and more efficient. Unless we all agree that all computers should be tracked like guns, plus monitored, I don't think there's a good solution out there.
This deepfake problem hasn't even scratched the surface of what's to come.
I think authoritarian regimes hooking AI up to process data is already bad enough. Having the AI "run" those systems isn't even being discussed in the article; the opposite, really. Instead of sci-fi scenarios of an AI a la HAL 9000 making decisions, it's really just allowing a more powerful black box to "filter out the undesirables du jour".
And I think we could extend the definition of authoritarian regime to include any and all private interests/companies that can leverage this.
This is why AI software must not be restricted, so that ordinary people and the civic minded can develop personal and public AI systems to counter corporate AI. The future of AI is adversarial.
Now, freedom to develop AI software doesn't mean freedom to use it however you please; its use should be regulated, in particular to protect individuals from things like this. But of course people cannot be trusted, so you need to be able to deploy your own countermeasures.
Just require a license for matmul in parallel, obviously. Only responsible, safe companies will be able to get the licenses. Then we don't have to define those pesky terms like "intelligence" or "autonomy", or explain how the rogue AI will manipulate everyone with mind viruses or replicate itself onto the botnet. Roguenet?
It's the only way to save democracy.
You know, come to think of it, it's funny how the Russian disinformation is now AI disinformation, instead of human disinformation. As we know, Photoshopping single photos of smoke rising into the sky is notoriously labor intensive, requiring teams, if not the entirety of state-level sanctioned organizational output, of human beings. The game has changed, and democracy is at stake.
I think the oppressiveness will come from software. Who writes it, and what values does the software reflect? Does it reflect yours?
The governance structure wants to code its rules into a machine that will execute those rules without being able to apply human discernment. Humans are meant to be out of the loop - their rules and diktats are meant to be all we need. AI + robotics has brought this possibility onto the horizon.
The issue, really, is people's acceptance that someone can write meaningful rules for them to follow. Hence we see jobs such as AI ethicist. As if they could do anything other than justify the immoral decisions corporations/governments make.
The collective that still believes government has its best interests at heart, and that is unable to refuse to follow the worst of us, will drive us into technological tyranny.
You are proposing an authoritarian nightmare.