The people who think AI will distribute power rather than concentrate it are naive and optimistic at best. I prefer to call them delusional and insane. They keep repeating the same thing (developing technologies only entrenched giants can use at scale) and expecting different results (equitable society? where's the profit in that?).
The amount of arrogance in this field is fucking staggering. 50% of AI researchers think there's a 10% chance this tech makes us go extinct, and they rant and rave about the huge impacts it's going to have on every aspect of our economy and society, but in the same breath they'll complain about government oversight.
Every time I see someone make an illogical prediction about AI, it's because they don't understand the frame problem.
Immense processing power is of no value if you have no way of grounding and regrounding it adaptively in reality. That's the job our bodies and our societies do for us, and AIs have no way to do it, except by taking on a human body and joining human society, at which point an argument can be made that they're not AIs any more, but human cyborgs.
I take huge offense to this article. They claim that when it comes to AGI, Hinton and Hassabis “know what they are talking about.” Nothing could be further from the truth. These are people who have narrow expertise in one framework of AI. AGI does not yet exist, so they are not experts in it, in how long it will take, or in how it will work. A layman is just as qualified to speculate about AGI as these people, so I find it infinitely frustrating when condescending journalists talk down to the concerned layman. This irritates me because AI is a death sentence for humanity — it's an incredibly serious problem.
As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.
Many people say that automation leads to new jobs, not a loss of jobs. Automation has never encroached on the sacred territory of sentience. It is a totally different ball game. It is stupid to compare the automation of a traffic light to that of the brain itself. It is a new phenomenon completely and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter “automation creates new jobs” simply doesn’t cut it.
The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.
AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won't come out of left field, either from neurological research or pure AI research.
And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned. For all research and inquiries to be made illegal. Some point out that this is difficult to do, but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.
The little bit about people believing the "AIs will take over the world" nonsense is gold.
I am still shocked that Elon Musk seriously believes in the pseudosciency "well Google has gotten better so obviously we will build a self-learning, self-replicating AI that will also control our nukes and be connected in a way that gives it the capability to actually kill all humans."
Meanwhile researchers can't get an AI to tell the difference between birds and houses.
EDIT: I looked a bit more into the research that these people are funding. A huge amount of it does seem very silly, but there is an angle that is valid: dealing with things like HFT algorithms or routing algorithms causing chaos in finance or logistics.
Well, that's a little disappointing, because I thought they could surely get much more mileage out of the idea than simply criticizing the abuse of crowd workers and energy use.
Thanks for your reading and analysis - when I read the paper I won't go in expecting quite so much.
An issue I have with the liberal criticism of AI is that it makes lukewarm arguments and never quite gets to the point of insisting that we must work communally to seize control of AI power from states and corporations, or reasonably sketching what that seizure of power would look like. This is why I feel a more radicalized approach is appropriate, one where we do not assume that society as structured has the means to reform AI with nothing more than moralizing and wishful thinking. We must forcibly seize AI codes and bend them to the purpose of revolution.
So long as researchers are willing to reify AI as a thing, the collective delusion will continue. I'm specifically looking at researchers focused on the ethics of AI; they set up AI as a thing more than other researchers. AI is not a thing. It's a field of research. Know thyself.
Anyone who believes this is either corrupt or a fool.
AI, like the algorithms currently in use by various companies, will be used by powerful corporations against individuals who have no means to fight back, enabled by captured regulators who themselves don't care about regular people.
Tyler Cowen champions the idea that AI won't take the job of a gardener or manual labourer but will 'create' jobs in retraining, which completely skips over the fact that the retraining is needed because many, many jobs will cease to exist.
I can only hope this guy had a deadline and needed his $75 for an opinion piece, and that's why this dreck was published by Bloomberg. If someone actually believes this nonsense, they are outright ignoring the effects of these technological regimes on individuals' choices and freedoms today.
Ok, so Yoshua Bengio, Geoffrey Hinton or Max Tegmark aren't able to comprehend or speculate about this? Seems surprising.
Edit: I'm not appealing to authority, I just believe the people I've quoted are actually very smart people who can reason very well and have a valid opinion on the topic. They also have little financial interest in the success, failure, or regulation of AI, which is important.
AI is that infamous field where the experts never saw any abrupt change coming: the AI winters, deep learning, now LLMs. What makes you think they're correct this time at predicting the limitlessness of the current approach?
I have a serious trust issue with expert opinions based on beliefs, extrapolation, and gut feelings, instead of facts. Especially when they have a vested interest to ignore immediate issues at hand (privacy, power concentration etc) and tend to focus on hypotheticals that are supposed to come after higher order effects come into play. And especially when these opinions enter a feedback loop with an uncanny resemblance to a religion. Experts can convince themselves of anything. (source: being one in my area)
The idea that people will consume infinite quantities of zero-information nonsense and become simultaneously disconnected from base reality and unable to act, yet also be converted into puppets with no will of their own, mere actors for those who feed them drivel, makes no sense.
AI can't simultaneously drive people into these opposite states. Because then I could just train an AI to output only "good" information and harvest "good" minds to manipulate for "good" social purposes.
I would make an AI that cultivates communities and never gets tired of the boring parts of organizing.
It's no different from misinformation from the environment: is that a tiger (predator) or a bird (prey)? Is that food or poison? Whining about scale is unimaginative.
It's always the same with AI research: "we have something amazing but you can't use it because it's too powerful and we think you are an idiot who cannot use your own judgement."
It's rarely productive to take internet criticism into account, but AI feels like an especially strong instance of this. It seems like a lot of folks just want to pooh-pooh any possible outcome. I'm not sure why. Possibly animosity toward big tech, given that big tech is driving a lot of the research and practical implementation in this area?
> “Whether we consider it a tool, a partner, or a rival, [ai] will alter our experience as reasoning beings and permanently change our relationship with reality,” the authors write. “The result will be a new epoch.” If uttered by a Soylent-quaffing coder, that sentiment might be dismissed as hyperbole. Coming from authors of this pedigree, it ought to be taken seriously.
Predicting how AI will affect society's sense of reality is best left to politicians and CEOs (or CS department heads)? They're making a psychological/sociological claim here, not a business, political, or technical one. Are they qualified to predict what society will use as a basis for reality? Also, they have plenty to gain if this agenda is believed (maybe not Henry, I dunno).
Your argument boils down to “AI won’t be useful unless humans are twice as smart as they are” (your examples of the businesspeople and researchers), and thus doesn’t really say anything.