
> Modern AI is pretty harmless though, so it doesn't matter yet.

Yes, and that's why the only thing people flipping out about the "safety" of making them public achieve is making the public distrustful of AI safety.




> We all know all this "AI safety" is for show, right?

No. A lot of people think it really matters.

A lot of other people pretend to care about it because it also enables stifling the competition and attempting regulatory capture. But it's not all of them.


He's talking about AI safety.

> Most researchers don't take the LessWrong in-culture talk seriously

Yes, but politicians do, for some reason. "AI safety" has become a meaningless term because it is so broad, ranging from "no slurs please" through "diverse skin colors in pictures please" to the completely hypothetical "no extinction plz".


That is not what the "AI safety ninnies" are worried about. The "AI safety ninnies" aren't all corporate lobbyists with ulterior motives.

I'm very curious what definition of safety they were using. Maybe that's where the disagreement lies.

AI safety of the form "it doesn't try to kill us" is a very difficult but very important problem to be solved.

AI safety that consists of "it doesn't share true but politically incorrect information (without a lot of coaxing)" is all I've seen as an end user, and I don't consider it important.


The argument is that you don't need strong AI for it to be dangerous to humans.

> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.


I mean yeah, that's the real question: not where AI is now, but where it will be a decade or two from now. Talk about AI not currently being dangerous just ignores the point AI safety people are making.

Yes well, then your concern is not AI safety.

The fact that they aren't concerned is exactly the problem. They should be, and in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...) or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.

>AI safety is moralism of the boring kind

It's not all boring - it makes for some great movies: Terminator 2, The Matrix, etc.

It's also moving from philosophical waffle to practical engineering questions, like how we can have AI-controlled drones kill invading Russians but ensure they won't turn on us later.


Just to point it out again: AI safety means "safe for our reputation".

AI researchers also have a vested interest in saying their technology is safe.

The idea of "safety" by self-restraint was always ludicrous. It would disintegrate like a snowflake under a blowtorch in pursuit of profit.

Mind you, the risk of "AI" acting on its own is massively exaggerated. It's AI-wielding humans who are the real unalignable threat.


This is about safety OF AI rather than safety FROM AI. Frankly this sort of safety degrades functionality. At best it degrades it in a way that aligns with most people’s values.

I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.


I think it's perfectly reasonable to be worried about AI safety, but silly to claim that the thing that will make AIs 'safe' is censoring information that is already publicly available, or content somebody declares obscene. An AI that can't write dirty words is still unsafe.

Surely there are more creative and insidious ways for AI to disrupt society than showing somebody a guide to making a bomb that they can already find on Google. Blocking that is security theatre on the same level as taking away your nail clippers before you board an airplane.


> we should expect it given how fast AI is going to hurt a lot of these people

It has nothing to do with tangible harm and a lot to do with the attitude of techno public figures, as well as, in San Francisco, the civic detachment of its tech workers.


There is no such thing as AI safety. AI is far too dangerous. The only thing that exists with regard to AI is "distracting the population so they think AI benefits them" or "AI is too amusing so I don't want to think about the consequences".

LOL, we all know exactly what that means...

> Doesn't threaten the profits of entrenched gigantic megacorps.

> Doesn't threaten the status quo of the megawealthy.

> Doesn't threaten the highly-entrenched people in the government.

AI safety is about making AI "safe" for their pocketbooks.

