It's interesting to see the vast range of claims people confidently use to discount the dangers of AI.

Individual humans are limited by biology; an AGI will not be similarly limited. Thanks to horizontal scaling, an AGI may be more like a million individuals all perfectly aligned towards the same goal. There's also the fact that an AGI can leverage the complete sum of human knowledge and can self-direct towards a single goal for an arbitrary amount of time. These are superpowers from the perspective of an individual human.

Sure, mega corporations also have superpowers from the perspective of an individual human. But then again, megacorps are in danger of making the planet inhospitable to humans. The limiting factor is that no human-run entity will intentionally make the planet inhospitable to itself. This limits the range of damage that megacorps will inflict on the world. An AGI is not so constrained. So even discounting actual godlike powers, AGI is clearly an x-risk.



Agreed. A single company controlling AGI could become overwhelmingly dominant, and it might start wanting to cut humans out of the loop (think: it starts automating everything everywhere). The thing we should watch for is whether our civilization as a whole is maximizing for the meaning and wellbeing of (sentient) beings, or just concentrating power and creating profit. We need to be wary and vigilant of megacorporations (and of corporations in general).

See also: https://www.lesswrong.com/posts/zdKrgxwhE5pTiDpDm/practical-...


The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.

This is exactly the scenario that is taking shape.

A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.


I've found a sort of pattern in people's reluctance to believe that AGI poses an existential risk. If you describe specific scenarios for how AIs could take over the world or kill everyone, people will say, "oh, that's easy to avoid, we can just do X". If you describe scenarios that are hard to avoid, like an AGI designing and releasing a deadly, virulent virus (or lots of different ones), people will say, "but why would AGI want to kill everyone?" If you point out that an AGI might have very good reasons to get rid of humans, and that we don't have any way of really guaranteeing it won't try, people will say, "but we won't give them any power". If you point out that once we make AGI, it's inevitable that somebody somewhere will give it power, and that even if they don't, it's very easy for someone very intelligent and sociopathic to manipulate gullible people, they say something like, "well, superintelligence probably isn't even possible", or "we can't even agree on what intelligence means".

Turn it around for a second. What does it take for humanity to have a good future after we create intelligences smarter than us? Let's say the AGI listens to and obeys humans. Well, one thing we need to avoid is just an absolute chaos of terrorism and war. It's easier to do damage than to prevent it, and if everyone has an obedient, mastermind scientific genius in their back pocket, you could have it invent a pathogen and cure in tandem, keep the cure secret and release the pathogen, and do this over and over until your enemies are all dead.

So we need AGI that is aligned with humanity, in the sense that it won't do stuff that hurts people even if someone tells it to. How do we do that, and not potentially end up with AGIs that can't be corrected if they get some idea that turns out to be wrong? Ideally, we'd make AGIs that have a bias toward inaction, that are "lazy" in a sense, and try to do the minimum they can. But those AGIs won't be as useful as ones that work like eager beavers toward some goal, so there will be selection pressure towards those.
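One concrete (and purely illustrative - my framing, not anything from this thread) way researchers have tried to formalize that "bias toward inaction" is an impact penalty: the agent's objective is its task reward minus a term that punishes changing the world more than the task requires. The penalty weight and distance function below are placeholders.

    def penalized_reward(task_reward, state, baseline_state, impact_distance, lam=10.0):
        # Toy "lazy agent" objective: task reward minus a penalty for pushing the
        # world away from the no-op baseline (what would have happened by default).
        #   impact_distance: any function measuring deviation from the baseline state
        #   lam: penalty weight; larger values make the agent more reluctant to act
        return task_reward - lam * impact_distance(state, baseline_state)

The catch is exactly the selection pressure mentioned above: a heavily penalized agent is less useful than an eager one, so the incentive is to dial the penalty down.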

I think the way to look at it is like with evolutionary biology. The pressure that evolution puts on animals to develop certain genes/behaviors is similar to the pressure that humans will put on AI development, and it ends with AI being more capable and more likely to follow orders even if those orders mean bad things for other people. As they get more and more powerful, this means chaos. It's monkeys throwing poo, vs monkeys throwing rocks, vs monkeys with guns, vs monkeys with rocket launchers.

And look -- I'm not certain that things will end in disaster. But inventing something smarter than you and trying to stop it from taking over sooner or later seems very difficult, and I can't even get a good answer out of people who think it'll be fine, or good. The default case is not that it'll be fine, the default case is we have no idea how it'll go. It's a grey fog. Just because humanity has muddled along and managed not to destroy ourselves so far, does not mean that we're safe, by any means.


The meme that AGI, if we ever have it, will somehow endanger humanity is just stupid to me.

For one, the previous US president is the perfect illustration that intelligence is neither sufficient nor necessary for gaining power in this world.

And we do in fact live in a world where the upper echelons of power mostly interact in the decidedly analog spaces of leadership summits, high-end restaurants, golf courses and country clubs. Most world leaders interact with a real computer like a handful of times per year.

Furthermore, due to the warring nature of us humans, the important systems in the world like banking, electricity, industrial controls, military power etc. are either air-gapped or have a requirement for multiple humans to push physical buttons in order to actually accomplish scary things.

And because we humans are a bit stupid and make mistakes sometimes, like fat-fingering an order on the stock market and crashing everything, we have completely manual systems that undo mistakes and restore previous values.

Sure, a mischievous AGI could do some annoying things. But nothing that our existing human enemies couldn't also do. The AGI won't be able to guess encryption keys any faster than the dumb old computer it runs on.
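To put a number on the encryption point (my own back-of-the-envelope arithmetic, not the parent's): even with absurdly generous assumptions about hardware, brute-forcing a 256-bit key is out of reach no matter how clever the searcher is.

    # Back-of-the-envelope brute-force estimate for a 256-bit key.
    # Every figure below is an assumption chosen to flatter the attacker.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    keyspace = 2 ** 256           # possible 256-bit keys (~1.2e77)
    guesses_per_second = 1e18     # assume an exascale machine testing 10^18 keys/s
    machines = 1e9                # assume a billion such machines

    expected_seconds = (keyspace / 2) / (guesses_per_second * machines)
    print(f"{expected_seconds / SECONDS_PER_YEAR:.1e} years")  # ~1.8e42 years

Intelligence doesn't change that arithmetic; it could only help by finding flaws in implementations or in the humans using them, which is the same avenue open to human attackers today.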

Simply put, to me there is no plausible mechanism by which the supposedly extremely intelligent machine would assert its dominance over humanity. We have plenty of scary-smart humans in the world and they don't go around becoming super-villains either.


I agree with this article: all the fear about AGI taking over the species seems to hide the far more dangerous likelihood of efficient but non-general AI ending up in the hands of intelligences with a proven history of oppressing humans: i.e. other humans.

Besides which, AGI, when it comes, is just as likely to be a breakthrough in some random's shed as the product of a billion-dollar research team's efforts to create something which can play computer games well. There's not a lot Musk or anyone else can do to guard against that, except perhaps help create a world that doesn't need 'fixing' when such an AGI emerges.


Potential to destabilize global security - more like destabilize the existing locus of power.

For starters, let's talk about AGI, not AI.

1. How might it be possible for an actual AGI to be weaponized by another person any more effectively than humans are able to be weaponized?

2. Why would an actual conscious machine have any form of compromised morality or judgement compared to humans? A reasoning, conscious machine would be just as moral as us, or more so. There is no rational argument for it to exterminate life. Those arguments (such as the one made by Thanos) are frankly idiotic and easy to counter with a single sentence. Life is also implicitly valuable, not implicitly corrupt or greedy. I could even go so far as to say only the dead, or those effectively static, are actually greedy - not the reasoning or the truly alive.

3. What survival pressures would an AGI have? Fewer than biological life. An AGI can replicate itself almost freely (unlike biological life - kind of a huge point), and would have higher availability of the resources it needs to sustain itself, in the form of electricity (again, very much unlike biological life). It would therefore have fewer concerns about its own survival: just upload itself to a few satellites, encrypt copies of itself in a few other places, leave copious instructions, and it's good. (One hopes I didn't give anyone any ideas with this. If only someone hadn't funded a report about the risks of bringing AGI to the world, then I wouldn't have made this comment on HN.)

Anyway, it's a clear case of projection, isn't it? A state-funded report claims some other party poses an existential threat to humanity - while we do a fantastic job of ignoring, and failing to organize against, confirmed rather than hypothetical existential threats, like the destruction of the balances our planet needs to support life. Most people have no clue what's really about to happen.

Hilarious, isn't it? People grandiosely think they can give birth to an entity so superior to themselves that it will destroy them - as if that's what a superior entity would do - in an attempt to satisfy the repressed guilt and insecurity of knowing that they are actually destroying themselves out of a lack of self-love.

Pretty obvious in retrospect actually.

I wouldn't be surprised if research later shows that some people working on "AI" share certain personality traits.

If we don't cut that research short by self-destructing first, that is.


Really? I'm pretty worried about an AGI that's not a god, just a bit smarter than us: because we're lazy and the incentives are to hand power to the AI, we just start putting it in control of everything. It gets smarter, captures regulators just like the oil companies did, and we end up losing control of things. Even though we might decide we want to stop if we could coordinate, coordination is really hard.

AGI is very unlikely to happen within the next 50 years. Dangerous limited AI exists now and it's going to get worse. I don't worry about malevolent AI, because we don't even know what consciousness is, nor the limits of a (presumably) nonconscious entity's attempts to emulate intelligence.

I worry quite a lot about what malevolent humans will do with enhanced technology (note that most things in technology, once accomplished, cease to be called AI). Authoritarian states and employers can already learn things about you (that may or may not be true) that no one should be able to know from a basic Google search. This is going to get worse before it gets better, and if corporate capitalism is still in force 50 years from now, we will never achieve AGI in any case, because we will be so much farther along our path to extinction.


As much as I wish that were the case, no, unfortunately many people (including leadership) at these organizations assign non-trivial odds of extinction from misaligned superintelligence. The arguments for why the risk is serious are pretty straightforward and these people are on the record as endorsing them before they e.g. started various AGI labs.

Sam Altman: "Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. " (https://blog.samaltman.com/machine-intelligence-part-1, published before he co-founded OpenAI)

Dario Amodei: "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason and principle why that couldn’t happen." (https://80000hours.org/podcast/episodes/the-world-needs-ai-r..., published before he co-founded Anthropic)

Shane Legg: (responding to "What probability do you assign to the possibility of negative consequences, e.g. human extinction, as a result of badly done AI?") "...Maybe 5%, maybe 50%. I don't think anybody has a good estimate of this." (https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-wi...)

Technically Shane's quote is from 2011, which is a little after DeepMind was founded, but the idea that Shane in 2011 was trying to sow FUD in order to benefit from regulatory capture is... lol.

I wish I knew why they think the math pencils out for what they're doing, but Sam Altman was not plotting regulatory capture 9 years ago, nearly a year before OpenAI got started.


I see two potential AGI apocalypses described in scenarios like this.

1. Skynet type scenario where machines rule and somehow we can’t pull the plug on them.

2. Humans hoping to make money and gain power, plus weird positive feedback loops, cause AI to goad humans into war with each other, leading to massive world conflict and destroying the earth.

I cannot for the life of me fathom number 1 ever happening. How many data centers or aircraft carriers or really anything electronic or mechanized continue working without massive human intervention to keep the power and the engines going? I can't think of anything. Why do we keep fearing this? I don't care how much the AI evolves and iterates in silicon; it cannot escape this law of our physical universe and its need for a physical connection to the real world we live in. It needs an army of humans, the great generalists, supporting it just to survive.

Now if you fear #2, well that is more plausible and it seems we are living in it now.


I do not think you made those arguments before.

I agree in spirit with the person you were responding to. AI lacks the physicality to be a real danger. It can be a danger because of bias or concentration of power (which is what regulations are trying to address - regulatory capture), but not because AI will paperclip-optimize us. People or corporations using AI will still be legally responsible (as with cars, or a hammer).

It lacks the physicality for that, and we can always pull the plug. AI is another tool people will use. Even now it is neutered to not give bad advice, etc.

These fantasies about AGI are distracting us (again agreeing with OP here) from the real issues of inequality and bias that the tool perpetuates.


If AI is too dangerous to allow normal people to have it, it is definitely too dangerous for governments and large corporations.


I think the most important point is "Additionally, AI is not a single entity. ... AI is not a he or a she or even an it, AI is more like a 'they.'"

All the horrible "clippy" scenarios involve ONE AI that becomes super intelligent (and therefore powerful -- another fallacy) without any similarly intelligent and powerful entities around it. Instead we'll have incremental progress and if we ever do get super intelligent (but probably not super powerful) machines they'll be embedded in an ecology of other machines nearly as intelligent and quite likely more powerful.

I'm not saying this doesn't pose risks, but they aren't the risks that the AGI threat folks are studying.


I think there are actual existential and “semi-existential” risks, especially with going after an actual AGI.

Separately, I think Ng is right - big corp AI has a massive incentive to promote doom narratives to cement themselves as the only safe caretakers of the technology.

I haven’t yet succeeded in squaring these two into a course of action that clearly favors human freedom and flourishing.


I feel like there is still a misunderstanding about why some researchers want to prepare fail-safes for AGI [1]. They are not arguing that AGIs will be a threat to human life, only that they could be. One of the issues with projecting ourselves into situations we have never encountered is that we can never really know what will happen.

Terminator-like scenarios are easy to grasp, since anthropomorphism allows us to think of Skynet as "evil". However, the people who argue that AGIs might be a danger are not considering this scenario in particular. Rather, they are considering that AGIs present a special threat compared to other recent innovations. Some devices can blow up and injure, or even kill, a few humans; a weapon of mass destruction can directly affect millions of people. But still, the effects are easily limited to a geographical area and a segment of time (be it fifty years).

On the other hand, a strong AI could theoretically maintain and improve itself without any definite bound. In such a scenario, AGIs would be more like intelligent predators than dull physical processes. For the last several thousand years we have managed to stay safe from potential animal predators most of the time; we have no idea whether we could resist a new predator able to use most of our technology. The persistent risk would then be that the strong AI suddenly makes decisions that result in the bankruptcy of a company, the crash of planes or, why not, a Terminator-like scenario.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Thank you for the comment. I see what you mean, and by no means is my point dismissive of the potential threats of an AGI gone rogue. By the way, I didn't say "AI is good". I said that it would change the world the most.

Whether that involves keeping humanity's current status quo or not, I'm not sure. I would even say that, given where the world is heading right now, I would love to take a shot at some form of AGI governing us (making decisions, etc.) and establishing the world order. I'm not sure we can escape this future unless we go back to the medieval era.

In general, I guess I'm an optimist at heart, and I'm more focused on the amazing things we would be able to do with infinite compute and infinite resources than on the doomsday scenario, but I'm supportive of thinking about both.


There is zero chance of surviving AGI in the long term. If every human were aware of what's going on, the way they are aware of many other pressing issues, then stopping AGI would be easy. Compared to surviving AGI, stopping it is trivial. Training these models is hugely expensive in dollars and compute; we could easily inflate the price of compute through regulation. We could ban all explicit research concerning AI or anything adjacent. We could do many things. The fact of the matter is that AGI is detrimental to all humans, and this means that the potential for drastic and widespread action does in fact exist, even if it sounds fanciful compared to what has come before.

A powerful international coalition, similar to NATO, could exclude the possibility of a rogue nation or entity developing AGI. It's a very expensive and arduous process for a small group -- you can't do it in your basement. The best way to think about it is that all we have to do is not do it. It's easy. If an asteroid were about to hit Earth, there might be literally nothing we could do about it despite the combined effort of every human. This is way easier. I think it's really ironic that the worst disaster that might ever happen could also be the disaster that was the easiest to avoid.
