The problem, though, is that without the huge commercial and societal success of ChatGPT, the AI Safety camp had no real leverage over the direction of AI advancement worldwide.
I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that the "perfect is the enemy of the good."
That is, the board should have realistically known that there would be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible, because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished and sidelined. It really feels like this was a colossally sad own goal.
I was hopeful for a private-industry approach to AI safety, but that looks unlikely now; and given the slow pace of state investment in public AI R&D, all approaches to AI safety look unlikely now.
Safety research on toy models will continue to produce developments, but the industry expectation appears to be that emergent properties put a low ceiling on what can be learned about safety without research on cutting-edge models.
Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal resources being reallocated away from safety and towards keeping ChatGPT running under load concern me. Now that the board has demonstrated that it had the formal authority but not the practical power to keep these interests in line, it seems unclear how any safety-oriented organisation, including Anthropic, could avoid the accelerationist influence of funders.
Don't these events prove pretty conclusively that AI safety is not a euphemism for "aligns with our interests"? There's no way anyone at OpenAI could have expected to benefit professionally or financially from this.
Let's not focus on "the business" and instead focus on the safety.
Altman can have an ulterior motive, but that doesn't mean we shouldn't strive for having some sort of handle on AI safety.
It could be that Altman and OpenAI know exactly how this will look and what backlash will ensue, with the result that we get ZERO oversight and rush headlong into doom.
Short term, we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.
At the end of the day, we still don't know exactly what happened, and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].
> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win because they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to think of the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?
Honestly, I can't take the threat seriously myself. But I do want to understand it more deeply than before. Maybe it isn't as devoid of substance as I thought. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
Altman was pushing that narrative because he’s a ladder kicker.
He doesn’t give a shit about “safety”. He just wants regulation that will make it much harder for new AI upstarts to reach or even surpass the level of OpenAI’s success, thereby cementing OpenAI’s dominance in the market for a very long time, perhaps forever.
He’s using the moral high ground as cover for more selfish objectives; beware of this tactic in the real world.
> Just see OpenAI today: safety vs profit, who wins?
Safety pretty clearly won the board fight. OpenAI started the year with nine board members and ended it with four, and four of the five who left were interested in commercialization. Half of the current board members are also on the board of GovAI, which is dedicated to AI safety.
Don't forget that many people would consider "responsible AI" to mean "no AI until X-risk is zero", and that any non-safety research at all is irresponsible. Particularly if any of it is made public.
I think what's already obvious is that most people in AI research care only for solving some hard problems in computer science. Both AI safety and ethics[0] be damned. So the amount of control OpenAI had on the research arm is less than you'd think.
[0] For the record, AI safety is the branch of research worried about controlling computer God, while AI ethics is the branch of research worried about whether or not it's OK to pay Pakistani kids pennies to label training data.
It's becoming more and more clear that for "Open"AI the whole "AI safety/alignment" thing has been a PR stunt to attract workers, cover up the actual current issues with AI (e.g. stealing data, use for producing cheap junk, hallucinations, and societal impact), and build rapport in the AI scene and in politics. Now that they have a real product and a strong position in AI development, they could not care less about these things. Those who naively believed in the "existential risk" PR stunt and were working on it are now discarded.
It's important to remember that part of OpenAI's mission, apart from developing safe AGI, is to "avoid undue concentration of power".
This is crucial, because safety is a convenient pretext for people whose real motivations are elitist. It's not that they want to slow down AI development; it's that they want to keep it restricted to a tight inner circle.
We have at least as much to fear from elites who think only megacorps and governments are responsible enough to be given access as we do from AI itself. The failures of our elite class have been demonstrated again and again in recent decades--from the Iraq War to Covid. These people cannot be trusted as stewards, and we can't afford to see such potentially transformative technology employed to further ossify and insulate the existing power structure.
OpenAI had the right idea. Keep the source closed if you must, and invest heavily in safety, but make the benefits available to all. The last thing we need is an AI priesthood that will inevitably turn corrupt.
AI safety is barely even a tangible thing you can measure like that. It's mostly just fears and a loose set of ideas about a hypothetical future AGI that we're not even close to.
So far, OpenAI's "controls" are just an ever-expanding list of no-no topics and some philosophy work around I, Robot-type rules. They also slow-walked the release of GPT over fears of misinformation, spam, and deepfake-type stuff that never really materialized.
Most proposals for safety are just "slowing development" (mostly of LLMs), calls for vague government regulation, or hand-wringing over commercialization. The commercialization point is the most controversial because OpenAI claimed to be open and non-profit. But even then, the correlation between less commercialization and more safety is not clear, beyond prioritizing what OpenAI's team spends its time doing. Which, again, is hard to translate into anything tangibly measurable for 'safety' in the near term.
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.
Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.
CEOs and investors are realising that the interests of AI safety teams mostly aren’t aligned with corporate goals. They were good for press as long as AGI was a distant dream, but now that the AI wars are starting among the big tech companies, there is no need for self-sabotage. The drama at OpenAI was likely the death sentence for serious backing of those teams.
It's interesting that the paper is selling Anthropic's approach to 'safety' as the correct one when they just launched a new version of Claude and the HN thread is littered with people saying it's unusable because half the prompts they type get flagged as ethical violations.
It's pretty clear that some legitimate concerns about a hypothetical future AGI, which we've barely scraped the surface of, turn into "what can we do today", and it's largely virtue-signalling type behaviour: crippling a very, very alpha version of non-AGI LLMs just to show you care about hypothetical future risks.
Even the correlation between commercialization and AI safety is pretty tenuous. Unless I missed some good argument about how having a GPT store makes it easier for AGI to destroy the world.
It can probably best be summarized as Helen Toner simply wants OpenAI to die for humanity's sake. Everything else is just minor detail.
> Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.
> “That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.
I see a huge disconnect between the conversation about safety and the actual capability of AI right now.
The conversations about safety regarding AGI seem entirely hypothetical at this point. AGI is still so far away, I don't see how it's relevant to OpenAI at the moment.
Whereas safety with respect to ChatGPT... no, I'm not particularly concerned. It can't really tell you anything that isn't on the internet already, and as long as they continue to put reasonable guardrails around its behavior, LLMs don't seem particularly threatening.
Honestly I'm far more worried about traditional viral political disinformation produced by humans, spread through social media.
In other words, it's distribution of malicious content that continues to worry me. Not its generation.
That's fine, but the OpenAI founders (including Sam Altman![1]) explicitly had in mind "AI kills everyone" as one of the main problems they were attempting to tackle when they started the organization. (Obviously starting OpenAI as a solution to this problem was a terrible mistake, but given that they did so, having a board charged with pulling the plug is better than not having that.)
I wonder how much this is connected to the "effective altruism" movement, which seems to project the idea that the ends justify the means in very complex matters, suggesting such badly formulated ideas as "If we invest in oil companies, we can use that investment to fight climate change".
I'd say the AI safety problem as a whole is similar to the safety problem of eugenics: just because you know what the "goal" of some isolated system is, that does not mean you know what the outcome of implementing that goal on a broad scale will be.
So OpenAI has the same problem: they definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broad-scale outcome will be.
If you really cared about AI safety, you'd be putting it under government control as a utility, like everything else.
It’d be another story if Altman were one of the tech influencers who go around saying that AI isn’t dangerous at all and you’re crazy if you have concerns. But he co-signed the human extinction letter! And 20% of OpenAI’s compute was reportedly allocated to Sutskever’s superalignment team (https://openai.com/blog/introducing-superalignment). From what we know, it’s hard to see how this action was supposed to advance AI safety in any meaningful way.
It has to do with a tribe within OpenAI that believes AI will take over the world in the next 10 years, so much of the effort needs to go toward that goal. What that translates to is strong prompt censorship and automated tools to ban those who keep asking things they don't want you to ask.
Sam has been agreeing with this group and using that as the reason to go commercial: to provide funding for that goal. The problem is that these new products are coming too fast and consuming resources, which cuts into the resources available for safety training.
This group never wanted to release ChatGPT, but they were forced to because a rival company made up of ex-OpenAI employees was going to release its own version. To the safety group, things have been getting worse since that release.
Sam is smart enough to use the safety group's fear against them. They finally clued in.
OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.
OpenAI's vision of a safe AI has turned into a vision of human censorship rather than protecting society from a rogue AI with the power to harm.