> I have yet to meet a serious AI researcher who worries about AI ending the human race.

As an industry practitioner of machine learning / data science, I believe AGI poses a genuine risk to humanity.

Having said that, what I do for a living is of little relevance. People are pretty terrible at predicting the future (see late 19th / early 20th century predictions of the year 2000, with food replaced by pills, etc.). Unless someone has put enough thought and research into it, their predictions about the future of civilization are likely to be worthless, regardless of their academic credentials.



> unless you believe that AI can't possibly pose an existential risk within the next couple decades or so.

I believe that AI can't possibly pose an existential risk in the next decade or two. I believe AI poses a great risk, but an economic one, and not an existential one.

> Actually I'd love to know what your estimate actually is for AI becoming an existential threat

My estimate is: never. At least not in the form of some superintelligent AGI.


> I have yet to meet a serious AI researcher who worries about AI ending the human race. At every AI conference I've been to lately, [the answer was always] "dude, stop it. That's such a distraction".

Sure, it's a distraction at AI conferences. It's a distraction from the daily, monthly, even yearly work of AI.

AI researchers have a million problems to solve in the near term, which are hard and interesting. Speculating about the more distant future is perhaps fun, but it doesn't get papers published and doesn't help with any of the practical problems AI researchers face today.

That doesn't mean it isn't worth thinking about the long-term dangers of AI. Someone should: while the danger may be unlikely to materialize soon, if it ever does, it could end us. The same is true of a human-engineered pandemic virus - hopefully unlikely, but someone should be preparing us. We have to plan for some worst cases.

AI researchers are not necessarily the people most interested in thinking about the long-term dangers of AI, because they focus on the field as it stands today, not where it could be in a generation. Of course, their input is crucial to making specific guesses about where the field is going. But weighing the dangers of AI is not just a matter for AI researchers, just as the benefits of medicine are not a purely medical issue (the cost and availability of medicine play a huge part in how effective it is at a societal level, and neither is under the control of doctors).


>As it stands now, the future without AI seems pretty doomed too.

As pessimistic as the current zeitgeist is, very few imagined futures reasonably result in actual human extinction.


> AGI is the single biggest existential threat on the horizon

> There is zero possibility of surviving AGI proliferation

You're spreading unsubstantiated FUD. In another post from your account you suggest it's unethical to have children because AGI will be so bad.

The vast majority of experts do not support any of these beliefs. Most experts believe we are nowhere near AGI, and/or that we are missing fundamental components required to create it. And even if/when we do create it, AI safety and policy are already recognized as important areas that most organizations actively work on.

If you want to be concerned about AI, be concerned about military weapons technology, unethical profiling and tracking, or methods for invading privacy. These are concerns that actually have a basis in real technology.


> I would argue this person has no business working with AI with this sort of myopic thinking.

What kind of thinking would you suggest would permit someone to 'work with AI'? Should they be quaking in their boots before being allowed to work at the altar?

You could argue (and with substantially more basis in fact) that Twitter and Facebook have elevated each individual's utterances to broadcast status, and that even without decentralized AI networks all the effects you are listing are already present in the modern world.

It just takes a little bit more work, but there are useful idiots aplenty.

Your Skynet-like future need not happen at all; what we can imagine has no bearing on what is possible today, and the latter seems to me a much more relevant discussion.

> AI is going to lead to the extinction of all human life on this planet and this is coming from someone who generally disregards conspiracy theories as paranoid fear mongering.

Well, you don't seem to be able to resist this particular one. So your 'generally' may be less general than you think it is.

> We have every reason to be afraid.

No, we don't. I haven't seen a computer that I couldn't power off yet, SF movies to the contrary.


> General AI for now is science fiction. Perhaps this is unfortunate. I wouldn't mind an AI that can replace humans, even if I too am made obsolete with it.

Maybe I’m optimistic, but I feel like we need AGI to reach the next level of development as a civilization. If software engineering jobs are the price to pay, so be it. World hunger, medical science, energy, space travel: if we can get all of these to take a ride on something resembling Moore’s Law, we are in for one hell of a fantastic future in our lifetimes.


> AI has risks, but in my honest-to-god opinion I cannot take anyone seriously who says, without any irony, that AI poses a legitimate risk to human life such that we would go extinct in the near future.

You are probably thinking of AI as some kind of complete autonomous being as you say that, but what about when it is 'simply' used as a tool by humans?


> The whole threat of AGI is so overblown

Do you think AGI is impossible? It seems pretty possible to me. That it is far away? We have no idea. We don’t need many more breakthroughs on the level of stacking transformers to make an LLM smarter than humans at most tasks.

Or do you think an AI model that’s smarter than humans poses no risk to society / humanity? It’s essentially unlimited capacity for thought. Free, smart employees. Doing whatever you want. Or whatever they want, if they’re smarter than you and have an agenda. “ChatGPT, program a search engine that’s better than Google.” “ChatGPT, I am a dictator. Help me figure out how to stay in power forever.” “ChatGPT, figure out how to bootstrap a manufacturing process that can build arbitrary machines. Then design and build a factory for making nuclear weapons / CO2 capture / houses / rockets to space.”

Things have the potential to be very interesting in the next few decades.


> There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

This AI doomer stuff is such nonsense and I can't believe anybody takes it seriously. As if it's OpenAI's responsibility to save humanity from the pitfalls of AI.

Imagine if we decided to improve our education system and doomers were talking about "hitting the panic button" because students were getting too smart from all the quality education.


>The idea that artificial intelligence could lead to the extinction of humanity is a lie

But it's not. AI will probably happen and get smarter than us. And then all it takes is one going Hitler/Stalin-like: taking over and deciding to do away with us. I fail to see how any of that is impossible.

However, it's not happening for a while, so regulations are probably not needed at the moment. Maybe wait until we have AGI?


> But there are plenty of people who do believe that AI either will or might kill all of humanity, and they take this idea very seriously. They don’t just think “AI could take our jobs” or “AI could accidentally cause a big disaster” or “AI will be bad for the environment/capitalism/copyright/etc”. They think that AI is advancing so fast that pretty soon we’re going to create a godlike artificial intelligence which will really, truly kill every single human on the planet in service of some inscrutable AI goal. These folks exist.

Yes, these people exist, but they are so few that this argument is essentially a strawman. "AI could take our jobs" is a valid concern, and it is something that our political structures are wholly incapable of dealing with.

Most "AI Doomers" aren't worried about Skynet, they're worried about 90% unemployment.


>So I can't see the idea that AI will be safe by default as anything other than wishful thinking.

I... don't think anyone is arguing that AI is safe in any way? I mean, the pattern matching tools we call AI now are already deployed in very dangerous weapons.

My point is not that AI is going to be safe and warm and fuzzy, or even that it won't be an existential threat. My point is that we're already facing several existential threats that don't require additional technological development to destroy society, and because the existing threats don't require postulating entirely new classes of compute machinery, we should probably focus on them first.

There are still enough nuclear weapons in the world for a global war to end society and most of us. A conflict like those of the first half of the 20th century, fought with modern weapons, would kill nearly all of us, and we are seeing a global resurgence of early-20th-century political ideas. I think this is the biggest danger to the continuation of humanity right now.

We haven't killed ourselves yet... but the danger is still there. We still have apocalyptic weapons aimed and ready to go at a moment's notice. We still face the danger of those weapons becoming cheaper and easier to produce as we advance industrially.


> AGI is a potential successor species

Only if we let it be.

Oh, who am I kidding. Someone - out of blind idealism, pure hubris, unfettered greed, or some combination of the aforementioned - will probably go as far as technology allows and create machines with a survival instinct that compete with humans for resources, effectively creating an enemy where there was once none.

Useful idiots will anthropomorphize machines, demand rights for them, etc., and make attempts to eliminate them - so we can get back on top of the food chain - that much harder.

But maybe I’m just off my rocker. I’m tired.

Edit: The only people working at the cutting edge of AI now are for-profit companies who would screw over society if it made them a couple more bucks. It’s hard not to suspect that any advancement will be used to screw the common man over.


>If AI becomes an incredible tool, it will be an incredibly bad one, used mainly for replacing humans in most jobs and removing our reliance on each other, promoting narcissism and selfish behaviour.

Ridiculous; you have absolutely no way of knowing how this will all play out. I'm certain AI could create a lot of instability in the job market, but there's no reason to think our economy won't evolve to match, as it always has in the past.


> What are you basing this on?

Many conversations with AI doomers. They gloss over and make assumptions about intelligence that aren't really backed by priors, and when this is pointed out they hand-wave and say "but computer".

> Read Bostrom's Superintelligence for example, or Yudkowsky's Intelligence Explosion Microeconomics.

I don't really have any interest in doing so, and if I'm honest have a particularly unfavorable read of Yudkowsky as a person based on his cultish following.


> You seem to agree that if at some point in the next few decades AI will be something we need to worry about

If you think this then you have misunderstood me. Now go elsewhere and bother other people.


> It boggles my mind how anyone can think otherwise.

Some AI dangers are certainly legitimate - it's easy to foresee how an image recognition system might think all snowboarders are male; or a system trained on unfair sentences handed out to criminals would replicate that unfairness, adding a wrongful veneer of science and objectivity; or a self-driving car trained on data from a country with few mopeds and most pedestrians wearing denim might underperform in a country with many mopeds and few pedestrians wearing denim.
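
For what it's worth, that first kind of danger is easy to demonstrate: a model trained on skewed data inherits the skew as a prior. Here's a minimal toy sketch (synthetic data and scikit-learn, purely hypothetical, nothing to do with any real snowboarder dataset) where a classifier trained on 90%-"male" labels predicts "male" for essentially everyone, even though its features carry almost no signal:

    # Toy sketch: training-set skew becomes prediction skew.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # 1000 training examples with two nearly uninformative features.
    X_train = rng.normal(size=(1000, 2))
    # Skewed labels: 90% "male" (1), 10% "female" (0), independent of X.
    y_train = (rng.random(1000) < 0.9).astype(int)

    clf = LogisticRegression().fit(X_train, y_train)

    # On fresh inputs the model still says "male" almost every time: the
    # learned intercept encodes the 90/10 base rate and the weak features
    # can't overcome it.
    X_test = rng.normal(size=(200, 2))
    print(clf.predict(X_test).mean())  # close to 1.0

The same mechanism, no malice required, is what produces the sentencing and moped examples above.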

But other AI dangers sound more like the work of philosophers and science fiction authors. The moment people start predicting the end of humans needing to work, or talking about a future evil AI that punishes people who didn't help bring it into existence? That's pretty far down in my list of worries.


> I think that an AI, a "generalized" intelligence with a generalized understanding [...] is certainly far away.

I think this is where we disagree most; GPT-3 and ChatGPT have convinced me that the main difference between human and artificial cognitive capabilities is now quantitative, and unlikely to ever change in our favor...

I do agree with you that it is very difficult to predict when this will switch, and how.

I personally believe that AI with superhuman capability is now just a matter of time, and I also think that the most likely risk to us, as a species, is that we slowly become irrelevant and worthless, just like weavers during the Industrial Revolution, and that this leads to huge problems for our society.

AI completely dominating humankind is a less likely secondary concern IMO, but the potential consequences to our species are unprecedented.


>The only real danger of AI in the next 1000 years is . . .

That's not true.
