> We all know all this "AI safety" is for show, right?
No. A lot of people think it really matters.
A lot of other people pretend to care because it also lets them stifle competition and attempt regulatory capture. But that's not all of them.
> Most researchers don't take the LessWrong in-culture talk seriously
Yes, but politicians do, for some reason. "AI safety" has become a meaningless term, because it is so broad it ranges from "no slurs please" through "diverse skin colors in pictures please" to the completely hypothetical "no extinction plz".
I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.
AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.
AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.
I mean, yeah, that's the real question. Not where AI is now, but where it will be a decade or two from now. Talk about AI not currently being dangerous just ignores the point AI safety people are making.
The fact that they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.
It's not all boring - it makes for some great movies. Terminator 2, The Matrix etc.
It's also moving toward practical engineering questions, like how we can have AI-controlled drones kill invading Russians while ensuring they won't later turn on us, rather than philosophical waffle.
This is about safety OF AI rather than safety FROM AI. Frankly this sort of safety degrades functionality. At best it degrades it in a way that aligns with most people’s values.
I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.
I think it's perfectly reasonable to be worried about AI safety, but silly to claim that the thing that will make AIs 'safe' is censoring information that is already publicly available, or content somebody declares obscene. An AI that can't write dirty words is still unsafe.
Surely there are more creative and insidious ways that AI can disrupt society than by showing somebody a guide to making a bomb that they can already find on Google. Blocking that is security theatre on the same level as taking away your nail clippers before you board an airplane.
> we should expect it given how fast AI is going to hurt a lot of these people
It has nothing to do with tangible harm and a lot to do with the attitude of tech public figures, as well as, in San Francisco, the civic detachment of its tech workers.
There is no such thing as AI safety. AI is far too dangerous. The only thing that exists with regard to AI is "distracting the population so they think AI benefits them" or "AI is too amusing so I don't want to think about the consequences".
Yes, which is why the only thing people achieve by flipping out about the "safety" of making them public is making the public distrustful of AI safety.