
Toby Ord talks about this very problem in his book "The Precipice: Existential Risk and the Future of Humanity" [1] -- I totally recommend it if you are into this subject, although I have to admit I am pretty pessimistic after reading it: it seems the only way to avoid existential risk is to prepare through international collaboration at a global level, which is precisely what I do not think will happen any time soon in the current climate.

[1] - https://www.amazon.com/Precipice-Existential-Risk-Future-Hum...




Very related: The Precipice - a book by Toby Ord

https://theprecipice.com/

"[Existential risks] have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not act fast to reach a place of safety, it may soon be too late. The Precipice explores the science behind the risks we face."


The irony is that many people who really believe in the potential of humanity to create huge futures full of happiness and bliss are the ones who are most concerned about our survival. There are obviously different opinions on our survival odds, but reasonable people have put the risk of catastrophe this century as high as 1 in 6. That means a f*cking dice roll could put a stop to all the progress you have been lauding. Don't get me wrong... I am an optimist, I want to create a great future and lift the potential that is there. I would wager I probably see more potential than most. BUT we will have to radically transform how our societies operate, how we care about each other (including non-human animals), how we care about our planet, and what role we see ourselves having in the future.

Have a look at the book "The Precipice" by Toby Ord for more details: https://en.wikipedia.org/wiki/The_Precipice:_Existential_Ris...


We may always face existential risks. We now know of some concerning the next 50-100 years, but in 100 years we may know of different ones. Planetary redundancy conceivably aids our chances with those risks as well, so is a project best begun now, whatever the lead time. I don't want us always just solving the present crisis. We should think about the distant future too.
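
A toy back-of-the-envelope on the redundancy point (my own sketch, not from the comment above; the numbers and the independence assumption are purely illustrative): if settlements fail independently with per-century probability p, humanity only ends when all of them fail, so the joint risk drops to p^n.

    # Toy model of planetary redundancy -- illustrative assumptions only:
    # each settlement fails independently with per-century probability p,
    # and humanity ends only if every settlement is lost.
    def extinction_risk(p: float, settlements: int) -> float:
        return p ** settlements

    p = 1 / 6  # hypothetical per-century risk for a single settlement
    for n in (1, 2, 3):
        print(f"{n} settlement(s): {extinction_risk(p, n):.3%} risk per century")

Independence is the load-bearing assumption here: risks like unaligned AI or an engineered pandemic could plausibly reach every settlement at once, in which case redundancy buys far less.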

I agree; if current levels of existential risk aren't enough to unite humanity, I don't think anything survivable will do so.

I feel there's a much higher chance that humanity will drastically change its own fate (for better or worse) in the next hundred years, and any forecasts beyond that have very wide error bars. Artificial intelligence is the big one (it ends the current era of "business as usual" no matter whether it's friendly or not), but there's also nanotech, superviruses coming from desktop bio-hackery, mind uploading, good old nuclear terrorism, etc. For "business as usual" to continue and things like climate change to stay relevant, we need to dodge all of the above, which is difficult.

For a more thoughtful take on the future of humanity, Google for the keywords "existential risk". Bostrom's writeup is a good start: http://www.nickbostrom.com/existential/risks.html


There’s a good book called “The Precipice” (2020) by Toby Ord that suggests things people can do: https://en.wikipedia.org/wiki/The_Precipice:_Existential_Ris...

For other types of existential risk, we are either just along for the ride (wide availability of nuclear weapons), lack the data or concrete means to tackle the problem (AI, solar flares and whatnot), or the issue is already linked to climate-change management (decreased resources, overpopulation). The greying of the population is probably the second biggest story after the climate.

As for human rights, as tragic as abuses are, the scale is just tiny in comparison. And the biggest boon for those rights is a functioning and wealthy society. Remove that through climate change and demographic troubles and they will be gone in a puff of smoke.


The existential crisis you are talking about is happening right here, right now - and nothing meaningful is being done about it. The correct course of action is to reduce further damage, and not count on our ability to tech our way out of this in a further degraded environment decades from now.

I keep saying "you don't grasp" the scale here because really you don't. Others and I have explained how enormous this undertaking would be, but you haven't offered any kind of solution to the manifold problems we outlined.


Existential risk = extinction of humanity.

I'm not sure how there can be degrees of badness when the outcome is the extinction of humanity.


I think existential risks to our species of much higher magnitude exist today. Climate change and pollution for instance. I think a "possible existential threat in the future" is a weaker case for philanthropy than existing ones.

We are currently living in a very important time in humanity's history, one the philosopher Toby Ord calls the Precipice. He estimates roughly a 1-in-6 chance that humanity ends over the next century (all human life destroyed), so this century we absolutely must decrease existential threats to humanity or perish (a rough sketch of how that risk compounds follows below). This should be humanity's #1 priority, yet at the moment we spend less on it than we do on ice cream.

https://theprecipice.com/

Please read this book, which is in my opinion the most important book written this century.
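
To make the 1-in-6 figure concrete, here is a minimal sketch -- my own arithmetic, not a claim from the book. Ord's estimate is for this century only; holding it constant across centuries is purely an illustrative assumption:

    # Compounding a constant 1-in-6 per-century existential risk.
    # The constant-rate assumption is illustrative, not Ord's claim.
    per_century_risk = 1 / 6
    for centuries in (1, 2, 5, 10):
        survival = (1 - per_century_risk) ** centuries
        print(f"{centuries:>2} centuries: {survival:.1%} chance of survival")

Under that assumption, surviving ten centuries comes out around 16%, itself roughly a dice roll: the risk compounds unless we actively drive it down.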


Yes, salient risks seem more threatening than they actually are, but for a similar reason, invisible risks are more threatening than they seem. Which is why the future does not look as bright as this article is claiming.

In the last year or so I have become interested in the study of existential risks -- low-probability events which could extinguish humanity as we know it. Things like catastrophic nuclear war, biotechnology or nanotechnology overrun, artificial intelligence overrun, supervolcano explosions and asteroid impacts. There are few people researching these things, despite the huge potential downside to not researching them, because the risks aren't things that our amygdala responds to.

If you're interested in this stuff, there's much work to be done - check out the Lifeboat Foundation, Singularity Institute and Future of Humanity Institute.


So what is your suggestion? How do we avoid this fast approaching existential threat to humanity?

According to the article, in longtermism 'existential risk' means something more specific: anything that threatens our long-term 'potential'.

So if becoming 'a multi planetary species', as Musk puts it, is an essential part of our potential, then destroying our capability of achieving that, in whatever way, puts us at existential risk. Not because we might all die on this planet, but for the very reason that we stay stuck here within the limits of Earth.


I am happy there are many people (and a whole community of Effective Altruists and academics) who are concerned about Existential Risk (X Risk / Global Catastrophic Risk) and are working on ways to reduce it.

https://futureoflife.org/background/existential-risk/

https://en.wikipedia.org/wiki/Global_catastrophic_risk


It is certainly a calamity. The extent to which the threat is existential is very much open to debate, however. While doing nothing and letting it happen is a stupid and likely more costly choice in the long run, the idea of it being "existential" sounds like hyperbole to me. I suspect we'll adapt with moderate difficulty.

Existential threat means things can’t get worse (you just stop existing).

Ignoring the potential of geoengineering implies you are sure that we have a better solution which will work. I hope we don’t have to use geoengineering but I am certain the status quo of hoping renewables and batteries in the 1st world will save us is not going to work.


Whether or not the existential risk around [climate change or nuclear weapons] is real, governments, corporations, NGO's and others will twist this to their own purpose to build censorship, entry barriers, and other unpleasantries. We should be mindful of that.

Well, the worst case is that it "succeeds", in which case we dump a lot of money that could be better spent on seriously addressing climate change. Instead, we construct new security hazards that will rear their heads during whatever societal disintegration we face as climate change, the rising tide of fascism, economic mismanagement, and the lawless global system of competing blood-thirsty imperial blocs converge.

I worry that if humanity survives all this, the new civilisations forming out of the ashes a thousand years from now will still be left with these problems.

