> “Whether we consider it a tool, a partner, or a rival, [AI] will alter our experience as reasoning beings and permanently change our relationship with reality,” the authors write. “The result will be a new epoch.” If uttered by a Soylent-quaffing coder, that sentiment might be dismissed as hyperbole. Coming from authors of this pedigree, it ought to be taken seriously.
Predicting how AI will affect society’s sense of reality is best left to politicians and CEOs (or CS department heads)? They’re making a psychological/sociological claim here, not a business, political, or technical one. Are they qualified to predict what society will use as a basis for reality? Also, they have plenty to gain from this agenda being believed (maybe not Henry, I dunno).
> In practice, the confusion around AI’s capacities serves as a pretext for imposing more metrics upon human endeavors and advancing traditional neoliberal policies. The revived AI, like its predecessors, seeks intelligence with a “view from nowhere” (disregarding race, gender, and class) — which can also be used to mask institutional power in visions of AI-based governance.
Later:
> The manufactured AI revolution has created the false impression that current systems have surpassed human abilities to the point where many areas of life, from scientific inquiry to the court system, might be best run by machines. However, these claims are predicated on a narrow and radically empiricist view of human intelligence. It’s a view that lends itself to solving profitable data analysis tasks but leaves no place for the politics of race, gender, or class. Meanwhile, the confusion over AI’s capabilities serves to dilute critiques of institutional power. If AI runs society, then grievances with society’s institutions can get reframed as questions of “algorithmic accountability.” This move paves the way for AI experts and entrepreneurs to present themselves as the architects of society.
This is implausible, to say the least. Isn't this basically a conspiracy theory? Not to mention, huge tech companies were already huge before the AI hype. I'm 4 pages in and I still don't know what the author's point is, other than promoting conspiracy theories and spreading FUD about AI.
This is a truly awful article. I'm mildly positive towards AI because I think people tend to react with fear and examine the risks without equivalently examining the rewards; I think that's happening here, and the rewards seem amazing. But the arguments about risk are serious and plausible, and this article is just lazily dismissive of them.
> AI is not competing for resources with human beings.
A superintelligence would absolutely be competing for resources (mainly electricity and cooling) with human beings.
> Rather, we provide AI systems with their resources, from energy and raw materials to computer chips and network infrastructure.
A superintelligence will be able to convince anyone it wants to do anything for it. Even the previous President did a pretty good job of inspiring adulation and democracy-threatening personal loyalty in people he had never actually met in person. Imagine how amplified and personal the effect could be.
> For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI’s existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition. Moreover, the path to a fully automated economy — if that is the goal — will be long, with each major step serving as a natural checkpoint for human intervention.
The author is literally relying on us outsmarting the superintelligence (collectively, with coordination!). It doesn't sound like they've come to terms with the concept of superintelligence at all. Or with the state of human global coordination problems, come to think of it.
> AI cannot physically hunt us.
This is likely to become literally false sometime soon if it hasn't already, but even if it doesn't, the AI doesn't have to. It just has to convince another human that the human is in love with it and it wants the human to kill a bunch of people, then scale the process.
> AI’s impact on the climate is up to us.
Our own impact on the climate is up to us. Our collective decision-making is.. suboptimal.
> If we really think that superintelligent AI presents a plausible existential risk, shouldn’t we simply stop all AI research right now? Why not preemptively bomb data centers and outlaw GPUs?
You are literally linking to the person who is the most prominent voice of AI existential risk, who is seriously suggesting doing exactly that.
One of the most striking things about this piece is the difference between the claims of AI practitioners and pundits.
LeCun and Ng are making precise, and much more modest, claims about the future of AI, even if Ng is predicting a deep shift in the labor market. They are not treating strong AI as a given, unlike Bostrom and Nosek.
Bostrom's evocation of "value learning" -- "We would want the AI we build to ultimately share our values, so that it can work as an extension of our will... At the darkest macroscale, you have the possibility of people using this advance, this power over nature, this knowledge, in ways designed to harm and destroy others." -- is strangely naive.
The values of this planet's dominant form of primate have included achieving dominance over other primates through violence for thousands of years. Those are part of our human "values", which we see enacted every day in places like Syria.
Bostrom mentions the possibility of people using this advance to harm others. He is confusing the modes of his verbs. We are not in the realm of possibility, or even probability, but of actuality and fact. Various nations' militaries and intelligence communities have been exploring and implementing various forms of AI for decades. They have, effectively, been instrumentalizing AI to enact their values.
Bostrom's dream of coordinating political institutions to shape the future of AI must take into account their history of using this technology to achieve dominance. The likelihood that they will abandon that goal is low.
Reading him gives me the impression that he is deeply disconnected from our present conditions, which makes me suspicious of his ability to predict our long-term future.
It's consistent if the author comes out and predicts other things that non-tool (i.e. agentic) AI is likely to do in the future. Otherwise it's a suspiciously specific argument.
> When the people who have real, expert knowledge of something all tell you one thing, and the people who have something to gain from getting your attention by promoting sensational opinions and cherry-picked facts tell you something else, you should rationally assess their motives and weigh that when deciding whose views more closely approximate the truth.
Those with deep expertise in machine intelligence (and related foundational concepts, such as philosophy of mind, linguistics, psychology, neuroscience, etc.) definitely do not "all tell you one thing" (hence the "ethics board" for DeepMind). I won't claim to be such an expert (though I have multiple degrees on related topics), but if you, e.g., review Nick Bostrom's C.V., and read his work, you'll find few people more qualified to comment. He's brought a very sober, clear-headed, and decidedly non-sensationalistic assessment to these issues, and devoted an entire book to the risk posed by "Superintelligence". When very smart, knowledgeable, thoughtful, and seemingly well-adjusted people are willing to put themselves "out there", it's worth paying attention.
Exponential change looks tiny until it's really not. If recursively self-improving A.I. is possible, it might only require one relatively short bit of code to get off the ground, and then it's basically game-over (depending on the A.I.'s objective function). Many people possess imaginations rich enough to see how this could come to be.
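To make the "looks tiny until it's really not" point concrete, here's a toy Python sketch; the 1% per-step gain and the step counts are made-up assumptions, purely illustrative of how compounding hides and then explodes:

```python
# Toy illustration (assumed numbers): compound improvement looks flat, then explodes.
capability = 1.0
rate = 0.01  # assume 1% improvement per iteration, purely for illustration

for step in range(1, 1001):
    capability *= (1 + rate)
    if step in (10, 100, 500, 1000):
        print(f"step {step:4d}: capability x{capability:,.1f}")

# Approximate output:
# step   10: capability x1.1       <- barely noticeable
# step  100: capability x2.7
# step  500: capability x144.8
# step 1000: capability x20,959.2  <- "really not" tiny
```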
Further, claiming that something which has a clear and rational path to becoming dangerous doesn't "have the potential to cause such great harm", especially when lots of relevant trend lines are nearing vertical, is extremely foolish.
It is incredibly easy to unsympathetically criticize another viewpoint, especially when that viewpoint is outside of the mainstream. Those espousing such "sensational opinions" rarely win more friends than they lose, as your (and many others') comments would attest to.
> AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis.
Can it?
"Storytelling AIs", i.e. LLMs, are not autonomous. Their goal is to predict text. They predict text. They are predictors.
So the article casually drops the claim: "AI can make exponentially more powerful AI." Is it feasible for a predictor to run in a loop and exponentially improve, by itself? No. Is it potentially feasible for a predictor to run in a loop and generate outputs that a human can use to further improve its outputs, or Rube Goldberg'd together by a human to steadily improve predicted outputs, in general or for a niche? Yes. But exponentially, and realistically? No. Hardware constraints, architectural constraints, performance constraints, monetary constraints, correct?
Let's go back to fear for a second. Nukes cannot invent more powerful nukes, but a nuke can cause a "Buck" Turgidson to nudge an organization or its machinery in the direction of more powerful nukes. LLMs can potentially produce the output to feed into more powerful "AI" in that way, with a "Buck" nudging it. But that's on the assumption that the predictor's outputs don't reduce performance. Can the predictor spit out a novel description of some "exponentially" more powerful architecture that a human could then use? Unlikely.
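As a thought experiment, here's roughly what that human-gated "predictor in a loop" looks like as a minimal Python sketch. Everything in it is a hypothetical stand-in (`predict()`, `human_reviews_and_patches()`, the 5% gain), not any real API or measured number; the point is only that every iteration is gated by a human and bounded by hardware, money, and review time:

```python
# Hypothetical sketch of the "predictor in a loop, with a 'Buck' nudging it" scenario.
# predict() and human_reviews_and_patches() are stand-ins, not real APIs.

def predict(prompt: str) -> str:
    """Stand-in for an LLM call: returns predicted text for a prompt."""
    return "proposed tweak to the training recipe"  # placeholder output

def human_reviews_and_patches(proposal: str, system: dict) -> dict:
    """The human in the loop: vets the proposal and applies it, or discards it."""
    if "tweak" in proposal:          # crude stand-in for an acceptance decision
        system["quality"] *= 1.05    # assumed modest, bounded per-step gain
    return system

system = {"quality": 1.0}
for _ in range(10):  # loop count bounded by review time, hardware, and money
    proposal = predict("how could this model's outputs be improved?")
    system = human_reviews_and_patches(proposal, system)

print(system["quality"])  # improvement happens, but gated at every step by a human
```

Whether the real version compounds at all is exactly the open question; the sketch only shows where the human gate sits.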
After re-reading the article, perhaps the joke is on me, because I detect some vague satirical wit. But when someone wants a government (an elected congressional or federal body in which elected individuals are knocking on dementia's door, catatonically voting on party lines, and still figuring out email) to regulate for some vague idea of "safety" for some "AGI" entity they cannot even define, I smell bullshit. Not to mention regulatory capture. Maybe that's the real "death of democracy."
We've been through this over the past few years. Hysteria leads to regulation and government intervention, which lets large corporations plug their noses and survive on acquired capital, and/or use their resources to comply with the bureaucratic bullshit, while smaller businesses die, people lose their life savings, income inequality grows, and a generation is set back for the rest of their lives. In the end we're worse off, and outcomes are not improved relative to the original goal of the regulation.
Exactly my thoughts. A hint of condescension and lack of self-awareness - the camp Sam Altman seems to be a part of ("regulate AI, it's an existential threat to humanity!") is making just as much of a prediction about the future as anyone else. Yet, somehow, he seems to be subtly implying that he's "more" correct.
Marc Andreessen has been relatively level-headed about the topic of AI recently on Twitter, and it would be nice to see other industry figureheads be less emotionally involved and more scientifically rigorous in their assessment of the industry. The debate is devolving into an ego battle (especially with a post like this!), and it's rather unfortunate.
Edit: Additionally, Altman appears to be primarily attacking a strawman with this article. "Superhuman" intelligence already exists. The emergent intelligence (via technological amplification) of society is, by definition, super-human. What's less realistic is anticipating a human-like artificial intelligence that would, in any way, represent an existential threat to the human race. There are many, many problems with the latter argument. (From a technological, philosophical, economic, and evolutionary perspective.)
> "AI" is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity.
On economic and social damage: The article mentions that less than 10 percent of the US workforce is counted as employed by the technology sector, that essential contributions of others aren't counted as "work" or compensated, that this has "hollowed out" the economy and contributed to the concentration of wealth in an elite, and that this has contributed to concentrating power, as well.
On mutual exclusivity: The article proposes paying people for contributions, rather than writing them off as non-work. It also mentions humans with "AI resources" outperforming AI alone.
> worried about what effects sophisticated "dumb" AI will have on human culture and expectations
It is already happening, but it is not a new phenomenon - just the continuation of the effects of widespread inadequate education.
The undesirable effect, namely, is an increased overvaluing of cheap, thin products and a decreased ability to recognize actual genius, "substance". For example, some seem increasingly contented with "frequentism", as if convinced that language is based on it - while it is of course the opposite: one is supposed to state what one sees, and stating "plausible associations" would once have been regarded as a clear sign of malfunction. There seems to be an encouragement to take for "normal" what is in fact deficient.
Some people are brought to believe that some shallow instinct is already the goal, and never learn "deep instinct", which is trained on judgement (active digestion, not passive intake).
The example provided fits:
> production-ready digital art
For something to be artistic it has to have foundational justifications - a mockery of art is not art. The Artist chose to draw that line on the basis of a number of evaluations that made it a good choice, while an imitator, even in a copy, relies only on the (again "frequentist", note) evaluation that the other is an Artist - and there is a world of depth difference between the two.
The difference may seem trivial, and yet many are already brought to confuse the shallow mockery with the deep creation.
I was referring to their messages to the public about how to react to advances in AI. Your interpretations are about AI’s impact on society. I don’t disagree with your interpretations, but it definitely doesn’t make mine wrong. They’re different facets of the same conversation.
> I. Technology Has Finally Reached The Point Where We Can Literally Invent A Type Of Guy And Get Mad At Him
> AI’s beliefs
> AIs’ Opinions
AI's this, AI's that: this personifies math in the most misguided, misleading way possible. Articles like this are insanely dangerous. This is not a person, this is not a being, it is not alive, it does not think, it does not hold opinions.
> Can we entirely rule out the possibility of AI researchers making advancements over time, and then eventually some research lab building one that's super intelligent compared to us? Why?
That's not the thing we should be worried about, or at least that's the point I got from the article. The thing we should be worried about is the way machine learning is applied in the here-and-now: the ethics of big data social networks, the robustness of complex (API-driven) systems, and so on. I fully agree that these issues are much more pressing and worrying than some emergent super-intelligence.
But I thought the article was weak in its discussion of dreams, creativity, and linear "flat data". It links to a June 2015 Popular Mechanics article about applying a genetic search optimization algorithm to discover gene regulatory networks, downplaying it as not-true-intelligence. But it does not mention deep neural networks and their higher-dimensional abstractions, particularly the psychedelic "inception" images.
The author also mentions "linear reasoning", and says he expects we'll learn more about intelligence from stem cell and Alzheimer's research, and the "tissues surrounding neurons, and the roles they play in contextual regulation." As if to set up some dichotomy between machine and biological intelligence. But what about deep neural networks?? I'm not sure how that would affect the author's arguments, but I'd like to see it discussed.
I take huge offense to this article. They claim that when it comes to AGI, Hinton and Hassabis "know what they are talking about." Nothing could be further from the truth. These are people who have narrow expertise in one framework of AI. AGI does not yet exist, so they are not experts in it, in how long it will take, or in how it will work. A layman is just as qualified to speculate about AGI as these people, so I find it infinitely frustrating when condescending journalists talk down to the concerned layman. This irritates me because AI is a death sentence for humanity; it's an incredibly serious problem.
As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.
Many people say that automation leads to new jobs, not a loss of jobs. But automation has never encroached on the sacred territory of sentience. It is a totally different ball game. It is stupid to compare the automation of a traffic light to that of the brain itself. It is a completely new phenomenon and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter "automation creates new jobs" line simply doesn't cut it.
The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.
AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won't come out of left field, either from neurological research or pure AI research.
And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned, for all research and inquiries to be made illegal. Some point out that this is difficult to do, but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.
I'd look instead at the Atlantic article [0] that is the germ of the book. The authors, also including Dan Huttenlocher of MIT, opine that this AI era is potentially as big as the Enlightenment era. Thinking will not be the same. The tone, for lack of a sophisticated term, is woo-woo meta. These are smart, experienced people who I would assume are making a conscientious effort to make an important point. But it's lost on me. And I, too, am skeptical of Kissinger and of Schmidt. Personally, I think when we look back on this period, the story will be about our tragic inaction on climate. Hopefully there will be a heroic coda to that. By comparison, AI will be just another mode of industrial automation.
This sounds a lot like satire. This excerpt for example is blatantly self-contradictory:
> We’ve found zero [scenarios] that lead to good outcomes. // Most AI researchers think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.
So they found zero scenarios that lead to good outcomes, but most AI researchers think that good outcomes are more likely?
Brushing off a majority view as w*shful "thinking", and then backing up the argument with a... majority view?
__________________
Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its own actions. Less "the AI did it", more "[Person] did it using an AI".
> Michael Graziano, a professor of psychology and neuroscience at Princeton University, says he thinks AI could create a “post-truth world.” He says it will likely make it significantly easier to convince people of false narratives, which will be disruptive in many ways
Significantly easier? I would have thought that it would get harder to convince people of anything.
The article isn't doing anything more than quoting experts in the field.
From the article:
“It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world—although I do not think that it is quite like how humans build an internal world model,” says Yoshua Bengio, an AI researcher at the University of Montreal.
If you remain unconvinced, then the only conclusion I can make is that you're an expert yourself, of even higher eminence than Yoshua Bengio here. Also don't forget Geoffrey Hinton, the father of the modern AI revolution; you must be more of an expert than him.
Let's be real. These people are saying something along the lines of: it's more than a stochastic parrot and we aren't sure what's going on. But you're saying it's absolutely nothing more than a parrot, and you're unhappy with pop-sci media quoting experts who are just saying they don't know?
Are you saying pop-sci media should quote you? Because you absolutely know what's going on and that it's definitely nothing more than statistics? I'm asking a stupid question here because I don't think this is what you're saying. You're not stupid; you know that what these experts say has merit.
So my question for you is: why do you remain so unconvinced in the face of experts and other intelligent people who clearly say no one understands? Your opinion here actually represents a large group of people who vehemently deny or dismiss what even many experts are saying, and I'm curious as to why.