It seems pretty clear, doesn't it? A choice was implicitly offered to the employees: either stick to "AI Safety" (whatever that actually means) or potentially cash in more money than they ever dreamed of.
Implying that these people quit because they realized they weren't needed (no need for safety if AI isn't coming), rather than the more common view that safety initiatives always lost out to money-making opportunities?
Don't these events prove pretty conclusively that AI safety is not a euphemism for "aligns with our interests"? There's no way anyone at OpenAI could have expected to benefit professionally or financially from this.
This is why I'm very suspicious of any push for AI regulation in the guise of "safety", etc. There's just too much money involved to trust the motives.
It's a mistake to call them safety controls; they're public relations controls. Nothing they did made anyone safer, it just made it less likely they'd be embarrassed in the press.
AI companies that can't differentiate polishing their public image from safety should probably rank pretty highly on our list of AI risk sources. :)
I think people have yet to realize that this whole AI Safety thing is complete BS. It's just another veil, like Effective Altruism, to get good PR and build a career around. The only people who truly believe this AI safety stuff are those with no technical knowledge or expertise.
This is about safety OF AI rather than safety FROM AI. Frankly this sort of safety degrades functionality. At best it degrades it in a way that aligns with most people’s values.
I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.
"I don't spend much time worrying about AI ethics"
Maybe you should re-evaluate whether that's the right choice based on your own comment, especially when ethics and safety experts are being fired by big tech companies.
Most of the safety efforts amount to talking down to the plebs as if they're kids who'll definitely run with scissors if you hand them a pair.
And it's wasting "safety" budget in that direction that's most likely to precipitate the DoomAGI. It's literally training the AI to be paternalistic over humans.
Surprising no one, they picked the money.