
It seems pretty clear, doesn't it? A choice was implicitly offered to the employees: either stick to "AI Safety" (whatever that actually means) or potentially cash in more money than they ever dreamed of.

Surprising no one, they picked the money.




Implying that these people quit because they realized they weren't needed (no need for safety if AI isn't coming), rather than the seemingly common perspective that safety initiatives always lost out to money-making opportunities?

Don't these events prove pretty conclusively that AI safety is not a euphemism for "aligns with our interests"? There's no way anyone at OpenAI could have expected to benefit professionally or financially from this.

I love how they are using internal AI tools as a means to lay off huge numbers of people, but won’t make that particular image.

“AI Safety” is a farce.


Their goal is to research AI safety. To advance AI safety knowledge. Making money is just a necessary evil. (I am serious)

This is why I'm very suspicious of any push for AI regulation in the guise of "safety", etc. There's just too much money involved to trust the motives.

It sounds like one of the board members is an "AI safety" person, so it's not that crazy.

Yes, "AI Safety" really means safety for the reputation of the corporation making it available.

Or--wait for it--they care more about money and/or fame than about AI safety.

It has never been about AI safety. They just threw their little mission statement on the end like an email signature.

Nothing could possibly weaken their position more than the last 72 hours.


Sounds like AI Safety is just HR but for AI. Ostensibly for the benefit of AI, but really for the benefit of the company.

It's a mistake to call them safety controls; they're public relations controls. Nothing they did made anyone safer; it just made it less likely they'd be embarrassed in the press.

AI companies that can't differentiate polishing their public image from safety should probably rank pretty highly on our list of AI risk sources. :)


AI safety is a fraud on a similar level to NFTs. Massive virtue signalling.

The idea of "safety" by self-restraint was always ludicrous. It would disintegrate like a snowflake under a blowtorch in pursuit of profit.

Mind you, the risk of "AI" acting on its own is massively exaggerated. It's AI-wielding humans who are the real unalignable threat.


OpenAI, more like ClosedAI.

Safety has nothing to do with it. It's an easy tack-on for them because of the popular fear of AGI.

It's all about power over the market.

Cringe.


I think people have yet to realize that this whole AI Safety thing is complete BS. It's just another veil, like Effective Altruism, to get good PR and build a career around. The only people who truly believe this AI safety stuff are those with no technical knowledge or expertise.

This is about safety OF AI rather than safety FROM AI. Frankly, this sort of safety degrades functionality. At best it degrades it in a way that aligns with most people’s values.

I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.


"I don't spend much time worrying about AI ethics"

Maybe you should re-evaluate whether that's the right choice based on your own comment, especially when ethics and safety experts are being fired by big tech companies.


Most of the safety efforts are talking down to the plebs as if they're kids who'll definitely run with scissors if you give them some.

And wasting the "safety" budget in that direction is what's most likely to precipitate the DoomAGI. It's literally training the AI to be paternalistic toward humans.


AI safety is just a rent-seeking cult/circus, leeching off the work done by others. Good on Meta for cleaning house.
