It's easier not to understand the risks. It takes extra effort to understand AI safety problems, and more effort still to solve them. People who blow off AI safety are being blithe and not engaging with the arguments.
Saying "superhuman AI is not an existential risk" isn't the same as not caring about safety. It's a coherent assessment from someone working in the field that you may or may not agree with.
“Safety” in the context of AI has nothing to do with actual safety. It’s about maximizing banality to avoid the slightest risk to corporate brand reputation.
I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.
AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to solve.
AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.
Maybe the safety concerns come from a vocal minority, and most are quiet and either don't think much about it or don't actually believe AI is really that close. It could just be hysterical people, or people who get traffic from saying outrageous things.
The fact that they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe, and an appeal to authority would not be comforting even if there were an authority to appeal to.