
That is not what the "AI safety ninnies" are worried about. The "AI safety ninnies" aren't all corporate lobbyists with ulterior motives.



Yes well, then your concern is not AI safety.

Most of those touting "safety" do not want to limit their own access to and control of powerful AI, just yours.

I'm not worried about the AI, I'm worried about the people and companies that have it.

Let's not get started on "AI safety"...

> Modern AI is pretty harmless though, so it doesn't matter yet.

Yes, that's why the only thing people flipping out about the "safety" of making them public achieve is making the public distrustful of AI safety.


Yes, "AI Safety" really means safety for the reputation of the corporation making it available.

AI safety is not about the jobs AI replaces. If you think that's the context of this discussion, you are way off track.

Yes. The AI safety that companies care about is their brand safety. They don’t want their products going Microsoft Tay on their customers.

That's not what people mean by AI safety - they're referring to the dangers of uncontrollable or runaway AI. Particularly AGI.

It's easier to not understand the risks. It takes extra effort to understand AI safety problems, and still more effort to solve them. People who blow off AI safety are being blithe and not engaging with the arguments.

I suspect a good chunk of "AI safety people" are just orgs propped up by the big corps to protect their interest in AI and reduce competition.

Saying "superhuman AI is not an existential risk" isn't the same as not caring about safety. It's a coherent assessment from someone working in the field that you may or may not agree with.

AI Safety is real and very important... AI Safety financed and practiced by corps looking to make money off of AI is mostly bullshit.

I think it's important to make a distinction between the two.


“Safety” in the context of AI has nothing to do with actual safety. It’s about maximizing banality to avoid the slightest risk to corporate brand reputation.

I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.

AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.

AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.


I'm sure there are many well-meaning AI safety researchers, but I also see a lot of "AI for me but not for thee" moat-digging safety hypocrisy.

The idea of "safety" by self-restraint was always ludicrous. It would disintegrate like a snowflake under a blowtorch in pursuit of profit.

Mind you, the risk of "AI" acting on its own is massively exaggerated. It's AI-wielding humans who are the real unalignable threat.


Maybe the safety concerns come from a vocal minority, and most people are quiet, don't think much about it, or don't actually think AI is really that close. It could just be hysterical people, or people who get traffic from outrageous claims.

The fact they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.
