
Yes, salient risks seem more threatening than they actually are, but for a similar reason, invisible risks are more threatening than they seem. Which is why the future does not look as bright as this article is claiming.

In the last year or so I have become interested in the study of existential risks -- low-probability events which could extinguish humanity as we know it. Things like catastrophic nuclear war, biotechnology or nanotechnology overrun, artificial intelligence overrun, supervolcano explosions and asteroid impacts. There are few people researching these things, despite the huge potential downside to not researching them, because the risks aren't things that our amygdala responds to.

If you're interested in this stuff, there's much work to be done - check out the Lifeboat Foundation, Singularity Institute and Future of Humanity Institute.




I feel there's a much higher chance that humanity will drastically change its own fate (for better or worse) in the next hundred years, and any forecasts beyond that have very wide error bars. Artificial intelligence is the big one (it ends the current era of "business as usual" no matter whether it's friendly or not), but there's also nanotech, superviruses coming from desktop bio-hackery, mind uploading, good old nuclear terrorism, etc. For "business as usual" to continue and things like climate change to stay relevant, we need to dodge all of the above, which is difficult.

For a more thoughtful take on the future of humanity, Google for the keywords "existential risk". Bostrom's writeup is a good start: http://www.nickbostrom.com/existential/risks.html


What? Even in popular culture it has been uncontroversial since at least the nuclear age to posit that humanity faces serious existential risks in the near and mid future. You're questioning this? You think there is nothing at all risky about any of the many powerful new technologies we've developed over the last couple centuries? I'm a little perplexed. What do you know that we all don't?

Or are you merely referring to the OP's tone? I am looking mainly at their substantive point, not their tone.

> The third paragraph makes you sound like you ARE a religious fundamentalist, albeit from a different tradition. "Fundamentally change as a species?" "Great Filters?" "The road is fraught with many perils?" "Unimaginable suffering?" I don't mean to be rude, but do you realize how you sound?


Existential risk = extinction of humanity.

I'm not sure how there can be degrees of badness when it comes to the extinction of humanity.


I think existential risks to our species of much higher magnitude exist today: climate change and pollution, for instance. I think a "possible existential threat in the future" is a weaker case for philanthropy than existing ones.

I agree; if current levels of existential risk aren't enough to unite humanity, I don't think anything survivable will do so.

We are talking about existential threats to humanity. It might be hard to believe, but humanity's ability to bring about bad outcomes is growing at a very fast pace.

Key players like Facebook throwing in the towel because “we don’t see an easy way to stop contributing to problems that are growing in size and can potentially destabilize democracies, world ecosystems, or, a few decades from now, human existence” is not an option.

The problems we have today are growing exponentially in seriousness. Human beings need to learn to get along in ways they never needed to before, both due to resource stress and due to technological powers we never had before.

Facebook’s amplification of many human weaknesses is only one of many risk factors. But I don’t think many young people realize how easily humans have fallen into disasters in the past, which, amplified by progress, could easily become existential today.


Humans being able to invent new existential threats is a very recent development. The way that we have been addressing them so far does not exactly fill me to the brim with confidence.

For other types of existential risks, we are either just along for the ride (wide availability of nuclear weapons), lack data or concrete means of tackling the problem (AI, solar flares and whatnot), or it's something already linked to climate change problem management (decreased resources, overpopulation). The greying of the population is probably the second biggest story other than the climate.

For human rights, as tragic as abuses of them are, the scale is just tiny in comparison. And the biggest boon for those rights is a functioning and wealthy society. Remove that with climate change and demographic troubles and they will be gone in a puff of smoke.


Toby Ord talks about this very problem in his book "The Precipice: Existential Risk and the Future of Humanity" [1] -- I totally recommend it if you are into this subject, although I have to admit I am pretty pessimistic after reading it, as it seems that the only way to avoid existential risk is by preparing through international collaboration at a global level. Precisely what I do not think will happen any time soon in the current climate.

[1] - https://www.amazon.com/Precipice-Existential-Risk-Future-Hum...


A significant element of the present era is that several distinct existential risks humans face are either self-imposed or self-inflicted.

Climate change, overpopulation, mass extinction, resource exhaustion, the threats of nuclear war and winter, self-imposed mass epidemics (particularly biowarfare), long-term chronic pollution (especially lead, mercury, and dioxins), ozone layer depletion, mutagens, genetic drift.

Others aren't directly imposed but are self-inflicted: systemic global supply-chain collapse risk, global financial system collapse risk, creating potential breeding grounds and vectors for global pandemics, etc.

The risks of exogenous events -- supervolcanoes, asteroid impacts, solar storms, nearby supernovae -- aren't affected by human activities. But in the case of endogenous events, we're directly affecting the probabilities.


I think you're right to be sad about it, and I would add "deeply concerned" as well. Our society is at a point where the most dangerous risks are ones that the human mind is pretty bad at reasoning about. That's a big problem, to put it mildly.

Climate change, nuclear weapons, bio threats, and runaway AI all pose immense risks, at scales that are hard to reason about intuitively. Hopefully we develop ways to better manage current and future risks before a big one blows up.


I think the existential threat to humanity is the concentration of power and resources that has happened in the last 40-60 years through technological innovation, and the lack of any system, technological or otherwise, to reverse the trend.

Very related: The Precipice - a book by Toby Ord

https://theprecipice.com/

"[Existential risks] have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not act fast to reach a place of safety, it may soon be too late. The Precipice explores the science behind the risks we face."


There are a lot of things I'm more worried about than an asteroid hitting Earth, but I'd rather we develop technology that could prevent it from happening, because if it does happen it could drive us extinct.

You seem unable to grasp that there are existential threats and that you have to give them some focus, but not too much.


None of that is really an existential threat, i.e. capable of wiping out the human race. Those prominent figures are focusing on the risk of losing our status as the dominant species.

Even if that takes 10000 years, it's significantly more serious than being put in jail or failing to get a bank loan.


So the reason you’re saying it poses the greatest threat is we don’t know its motivations, but then you proceed to explain its motivations as evidence of why it is the greatest threat.

Look, I’ve seen sci-fi too and there is a lot of literary pathos to mine there, but the idea that this is all going to happen so fast we won’t know what hit us seems to lack any evidence.

Of course anything with a high enough risk needs adequate precautions; it’s just that I haven’t seen any evidence that, given the energy and bandwidth requirements, there’s some sort of singularity we’re blindly stumbling toward.

Just don’t get to that point. One that’s decades if not centuries away.

Meanwhile, effective altruism® could return some donations and clean up some nuclear weapons or carbon instead of just doing whatever sounds best to people who read Malcolm Gladwell books and listen to podcasts.


According to the article, in longtermism 'existential risk' is something more specific: everything that threatens our long term 'potential'.

So if becoming 'a multi planetary species' as Musk puts it is an essential part of our potential, destroying in whatever way our capability of achieving that is putting us at an existential risk. Not because we might all die on this planet, but just for the very reasons that we stay stuck here at the limits of earth.


How shortsighted. Humanity faces many existential threats and it is completely and utterly irresponsible to not even try to mitigate them.

I am happy there are many people (and a whole community of Effective Altruists and academics) who are concerned about Existential Risk (X Risk / Global Catastrophic Risk) and are working on ways to reduce it.

https://futureoflife.org/background/existential-risk/

https://en.wikipedia.org/wiki/Global_catastrophic_risk

