It seems clear that there weren’t specific reasons, just a kind of final straw in the product announcements that made the board realize how far from the original mission the company had drifted.
Turning it into an emergency and surprise coup with innuendo of wrongdoing looks to have been a huge mistake, and may result in total loss of control where a more measured course correction could have succeeded.
Ilya is the stereotypical genius mind that is extremely passionate yet disconnected from the real world. He got way too worked up about abstract issues, failed to see the bigger picture and had a meltdown that other board members took seriously because he's a cofounder. He is instrumental in the research but he shouldn't be running the business.
He could possibly stay in a pure Chief Scientist role while giving up his board seat. But if I were a CEO, I'd have a hard time trusting a C-level role to someone whose vision is diametrically opposed to my own.
He’d probably be better off going to Meta or HuggingFace and working on getting open source as close to OpenAI’s offerings as possible. I expect that real innovation (vs. commercialization) is now fully dead at OpenAI, with them instead focusing on ROI for Microsoft.
Ilya is not a champion of open source: "We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise."
He said in an interview 2 weeks ago that below a certain capability threshold it is beneficial and good to open-source models, but once you cross that threshold it is a bad idea.
The example he gave is a model that could independently do science.
> He is instrumental in the research but he shouldn't be running the business.
To push back on this a bit. If two as-yet-unknown people, "an Altman" and "an Ilya", both applied to YC to start a company that builds and sells AI models, guess who would get funded. Not the guy who can't build AI models.
I find it bizarre that the guy who can build is suddenly the villain-nerd who can't be trusted, and the salesman is the hero, in this community.
Right, I’ve noticed this trend too. It sucks and I find myself frustrated by how easily people go along with smooth talking sales/business types and allow them to take over positions of power. The truth is that builders can exist and build amazing products without the sales guys, but the sales guys would have absolutely nothing without the builders. I’d even go a step further and say that the best products are built by solo or small teams of impassioned builders and as soon as sales/business types get involved things start tending towards enshittification.
For a very long time I thought Eliezer was one of the least likeable people on the internet, which should be a pretty high mountain to climb. But watching him on the Lex Fridman podcast was really interesting, and putting a face to the writing helped quite a bit in humanizing him. He's obviously neurodivergent, and obviously very flawed in a number of personality traits, but he's quite intelligent and has been completely obsessed for more than a decade with a specific topic. I wouldn't just outright dismiss that.
Yeah, it looks like Sutskever was too late in realizing how far down the path of “embrace, extend, extinguish” OpenAI had gone and made a last ditch effort to stop it.
Purely outside perspective, but people’ve been complaining for quite a while that OpenAI seems to have bailed on their original mission. Sure looks like Altman was capitalism’ing the whole thing—maybe not on purpose, but because it’s just the only way he knows to operate—and had kinda half-sold it to Microsoft, which sure is corroborated by folks posting on here expecting MS to now be in a position to forcibly override the nonprofit board’s decisions, and by rumors that in fact that’s what’s going on.
Looks like they were right to boot him, but may have done it way too late, having already de facto lost control due to the direction he’d guided the organization. If he comes out on top, it looks to me like it'll mean the original OpenAI and its mission are dead, and that the board was already cut out months ago but didn't realize it yet.
Yeah it's interesting to me how many here on HN seem to be taking Sam's side -- I feel like I've noticed HN users in OpenAI threads mentioning how dishonest Sam is.
Sam seems to have a "move fast and break things" approach, which would be more appropriate for a less critical industry.
Does the board not have final control? Why have they agreed (in principle) to step down? I wish more of the reporting around this was specific about who has the power to do what.
My understanding is that, fundamentally, the only power the board _has_ is to fire the CEO. The CEO, not wanting to be fired, is therefore incentivized to manage the board's expectations, which looks a lot like being willing to take direction from the board if you squint a bit.
The problem comes when the situation starts to resemble the line about how, if you owe a bank a billion dollars, you own the bank: if the direction the CEO has taken the company differs enough from the vision of the board, and they've had enough time to develop the company in that direction, they can kinda hold the organization hostage. Yes, the company isn't what the board really wanted it to be, but it's still worth a bajillion dollars: completely unwinding it and starting over is unthinkable, but all the options that include firing the CEO (the only real lever the board has, the foundation of all the decision-making weight that they have, remember) end up looking like that.
If you really believe in the ideology and believe that the continuation of OpenAI is dangerous, then shutting down the company completely should be an option you consider.
Oh, absolutely, although you'd have to consider what happens to the tech and the people who developed it: it may be better to have the out-of-control genie at least nominally under your control than not.
My guess is it’s hard to say exactly who has the power and where the power comes from. I bet Sam and his side don’t have any direct power, but their power in the negotiation comes from other sources, like the ability of more Sam loyalists to resign, and Microsoft legal threats which don’t have to be legitimate to be effective since they have such powerful lawyers. So on paper the board has all the power, but that doesn’t necessarily translate to the real world.
Power is a fuzzy thing. You can think about power as being distributed across lots of different entities (the board, CEO, senior execs, investors, rank and file employees, etc) with some having more concentrated power (eg the board) than others (eg individual employees). However, if you create a situation (eg lots of employees decide to walk out in support of the ousted CEO) that can aggregate enough power to overcome any other single entity. That seems to be what is happening here.
It does not matter that the board has the legal power to do whatever they want, e.g. fire the CEO. If the investors and key employees that keep the company going walk away, they end up with nothing, so they might as well resign and preserve the organization rather than burn the whole thing down.
I'm surprised nobody's leaked the minutes. It all seems very amateurish. I had thought it was secret news of big misconduct, but instead it's sub-amongus plotting.
At this point in time I wouldn't be surprised if there weren't any minutes. I mean, come on: the four people that knew this was going to happen appear to be the only ones unprepared to deal with the fall-out. Missing minutes would be a footnote. And might be a sign that there was stuff in the 'original' minutes that couldn't be aired.
Also it's likely that a large number of the news stories coming out right now are PR "plants" by one side or the other to make the firing seem justified or the return a fait accompli. In a coup, one of the first tasks is to get public opinion on your side and build momentum for the outcome you want to see.
I'm curious where the rank & file OpenAI employees stand on this, as it seems to me like they will be the ultimate kingmakers. The Reddit thread on Friday made it seem like they supported Ilya - but for all we know, the anonymous Reddit poster might have been Ilya himself.
It indeed did seem like something Ilya would post himself, or at least someone cosplaying as him. Based on what we’ve seen on Twitter from employees it looks like a significant chunk supports Sam.
They're dreaming of being the next version of early Googlers. I guess that is inevitable once people start doing math equations that include eleven-digit numbers in them.
OpenAI's been offering $800K+ compensation for mid-level AI engineers, roughly double what Google offers them, so yeah I suspect a large portion of the recent staff is probably late-stage Googlers who have dreams of being like early Googlers.
All this talk of talent, and they just end up with a company full of people driven only by monetary pursuits. How could it ever have worked with the nonprofit mission?
This is the most important quote: "We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."
If it were a plant by the other camp how would this make it there? Also the whole article sounds like "You don't want him as a CEO? He is going to get sooo much money, and going to out compete you sooo hard. He is already in talks for his new venture." Which is obviously what Sam's side would like to project.
It’s pretty clearly just a power play by four of the board members. Keep in mind Sam was part of the board and Greg Brockman was chairman of the board, so this was 4 board members ousting 2 other board members. OpenAI execs have already said it wasn’t for wrongdoing.
A majority of the board removing a minority of the board doesn’t seem like a power play to me. If the opposite were somehow achieved, e.g. through persuading one member of the majority to vote against their own interests, and using the chairman’s casting vote, that would be a power play.
Also, the executive who said it wasn’t for malfeasance wasn’t himself on the board and appears to be trying to push for Altman’s return. The board itself has not yet said there was no malfeasance. To the contrary, they said that Altman had not been completely candid with them, which could very well be the last straw in a pattern of malfeasance which in aggregate reaches a sufficient threshold to justify a firing.
I don’t know whether there was or wasn’t malfeasance, but taking that executive’s word for it seems unwise in this polarized PR war.
I’m not saying that. I think the main people pushing for that were Altman-aligned members of the management, staff, and investor populations, not the board.
The board was considering the requests to bring back Sam because they realized they were handling the situation badly and didn’t want the organization to blow up and fail at its mission, but they refused to resign unless and until suitably mission-aligned replacement board members were agreed upon (note that profit is not the nonprofit’s mission).
Of course they didn’t bring him back in the end, or resign, after all.
If the board had yielded their seats to similarly minded replacements and brought back Sam, that wouldn't be the same as exonerating him, only an acknowledgment of how badly they handled the firing. I can imagine that an independent investigation into the truth of the existing board’s allegations would still have been ordered by the new board, just as the new interim CEO actually did. If it was truly just a personality clash leading to mistrust, that would probably be the end of it. If there truly was malfeasance that makes Sam an unsuitable CEO, they’d probably then engage a PR firm to help make the case to the world far more persuasively than happened on Friday.
Yes, this is speculation, but I’ve been a nonprofit director and president myself, and if I were on that replacement board it’s what I’d do. In that case, the organization was much lower-profile than OpenAI, and we were spare-time volunteers with a tiny budget. The closest we came to self-dealing is when a long-time director wanted to become a paid software engineer contractor for us, but he left the board in order to make that ethically clear, and the remaining board approved the arrangement. Nothing hidden or dishonest there, and he’s continued to be a great help to the organization.
(Disclaimer: I stopped my own involvement with the org over 4 years ago, but that was truly because the rest of my life got too busy. There was no drama or anything around that.)
It is if the board originally had 9 members and only temporarily has 6 because of recent resignations, and if there is a proposal on the table to expand the board (because 6 is kind of thin for a company this size).
Ah, I didn’t know about the three vacancies. If the former board members in those slots all would have voted against these actions, then yes it probably meets my definition of a power play. But if any of them would have voted to take these actions, then the required majority may have been there even with the former full 9-member board, in which case it’s again not a power play.
Let’s assume for a second that it is a power play. If the point of it is just the power struggle between two factions seeking power then yeah it’s not a good thing to majorly disrupt an organization. But if the point of the power play is to rescue the nonprofit’s pursuit of its mission from a CEO’s misuse of power that goes against the mission, it’s a board acting exactly as it should, other than badly handling the communications around this mess.
I have no inside info and therefore am not expressing any opinion on what the truth is. But I’m not going to rush to believe the PR war being waged by Altman and his allies merely because the current board is bad at PR/comms.
I look forward to reading any public summary of the report from the investigation which the new interim CEO has ordered.
And the board just shrank because of resignations; they wouldn't have had a majority in the past. It may have been one of those 'now or never' things.
Pulling investment would be a hard power. Imagine if Microsoft says the change in leadership and idiotic board means the contract is done, no more compute for OpenAI, and then goes on to back Sam Altman's new company.
OpenAI will be writing papers and asking for donations within a week's time at that point, as the rest of the staff quits.
No one is going to leave Microsoft because of some amateur hour non profit board fuckery that had Microsoft stepping in as the adult. You don’t tell your partner, who is investing billions of dollars of value, that you’re about to fire the CEO over differences in perspective in a very public way, coloring it as malfeasance or dishonesty, and you think someone is ever going to take you seriously again?
Anyone with even a basic level of business sense isn’t going to hold Microsoft responsible in a negative light for prudent reactions to volatile partner behaviors. These are not just startup cloud credits being given to OpenAI.
> No one is going to leave Microsoft because of some amateur hour non profit board fuckery that had Microsoft stepping in as the adult.
But will they leave Microsoft (or, at least, be less inclined to rely on Microsoft in the future where competitors exist) because of Microsoft terminating a relationship on which their access to a technology at the core of an enterprise service that enterprise customers rely on is based?
Microsoft will make the case that those customers should onboard to Microsoft offerings when at parity due to the unreliability of OpenAIs governance. And they won’t be wrong. Enterprise customers don’t want to hear about a critical vendor staging a board coup on Bloomberg, with a bunch of key employees quitting in solidarity, and then reading only a day or two later “on second thought, we were wrong, CEO is coming back.” This will make your vendor/third party risk team very twitchy. This will make executive leadership give the command down the chain to constantly explore alternatives.
OpenAI’s actions do not give people who approve tens or hundreds of millions of dollars in spend the warm fuzzy feeling. Microsoft knows exactly the consistency and stability these customers desire. They are the conduit by which value flows from OpenAI to Microsoft customers until Microsoft can deliver the value themselves.
(also why people get fed Teams vs Slack; because of who is making the purchasing decision, and why it’s being made)
> At the same time, companies that depend on OpenAI’s software were hastily looking at competing technologies, such as Meta Platforms Inc.’s large language model, known as Llama. “As a startup, we are worried now. Do we continue with them or not?” said Amr Awadallah, the CEO of Vectara, which creates chatbots for corporate data.
> He said that the choice to continue with OpenAI or seek out a competitor would depend on reassurances from the company and Microsoft. “We need Microsoft to speak up and say everything is stable, we’ll continue to focus on our customers and partners,” Awadallah said. “We need to hear something like that to restore our confidence.”
Uh, you have a nonprofit board firing a CEO at a board meeting that doesn't even sound like it was properly noticed. Was the board president even given time to attend?
And Microsoft has total rights to the models and weights, so they can CONTINUE their services and then spin up with Sam's new company.
*Uh, you have a nonprofit board firing a CEO at a board meeting that doesn't even sound like it was properly noticed. Was the board president even given time to attend?*
I think it's reasonable to assume that even a controversial board checked with their lawyer and did what was legally required. Especially as nobody involved seems to be claiming otherwise.
If they had written consents from a majority of the board to remove Altman and Brockman from the board, then depending on the applicable nonprofit law and corporate governance documents, the board removals may very well have been legally conducted without need for a properly noticed board meeting. (For the actual firing of Altman, that might have been legal either through written consents or through a board meeting after the removals of Altman and Brockman.)
Having no information on what laws and governance documents apply to OpenAI or on what steps the board took, I express no opinion on whether the legal requirements were actually met, but it’s possible they were.
This. The OpenAI board as of now looks incompetent for sacking and then trying to rehire their most public figure in the span of a few days. It lacks determination, confidence, and commitment.
> Pulling investment would be a hard power. Imagine if Microsoft says the change in leadership and idiotic board means the contract is done, no more compute for OpenAI, and then goes on to back Sam Altman's new company
...losing their licenses to OpenAI's technology and thus the Azure OpenAI service offering for which they have enterprise customers who went with them because Microsoft is the secure, enterprise vendor whose reliability they have learned to count on.
Good way to make "Nobody got fired for hiring Microsoft", the successor to the old IBM adage, a thing of the past.
Yeah, with the right people, Sam's company might eventually give Microsoft a technically-adequate replacement technology, but Microsoft's enterprise position isn't founded on technical adequacy alone.
Unless Sam Altman is actually GPT-4 and is typing like mad at all times, I don’t see how this impacts OpenAI in the least. There are plenty of suitors waiting for a chance to back OpenAI and forge such close partnerships. Sam is a talking head; backing his venture is backing vaporware until it’s not. OpenAI is here and now, and even if he churns senior leadership and line people, their advantage is so extreme at present that it’ll be a few years of disruption before anyone has caught up, and when that happens it’s more likely to be Claude than some new venture.
Yeah, MS leaving is the absolute end of OpenAI (and for all practical purposes the end of Ilya's career). It's Satya's call now, he's not happy and wants @sama back.
What a wild idea. You actually think one of the most esteemed AI researchers will have trouble finding funding after this? Someone somewhere will give him money.
>You actually think one of the most esteemed AI researchers will have trouble finding funding after this?
Plenty of those actually left already ...
Ilya is good but is one of many, and by many I mean there are hundreds of equally capable researchers, many of them with more flexible morals. Note: I'm being generous to Ilya and taking him at face value on being the self-proclaimed AI messiah that is keeping us from the destruction of the world.
Thanks Ilya, but money is money and investors would definitely prefer to put their money in a for-profit than a non-profit. This is even more true after this whole fiasco.
The original Verge article says (with no given sources):
> missing a key 5PM PT deadline by which many OpenAI staffers were set to resign.
The tweet removes the qualifier:
> The staff at OpenAI set a 5PM deadline for the entire board to resign, or else they quit and join Sam in his new company.
And you seem to parrot that point even though it is well past that deadline and there is no news of mass resignations.
It's all rumours yes, but I'd be inclined to believe that the majority of staff aligns with @sama, and not with Ilya and the board. If this wasn't true, Satya wouldn't be sitting right now in a room with them trying to put out the fire.
OpenAI will absolutely be able to raise money again, but it will likely never be on the same scale and will also likely have some serious safeguards in the contract language.
Whether you agree with the firing of Sam or not, future investors will absolutely be nervous about sinking serious money into a company that split its board without talking to key partners/investors first.
>Why would I not believe that Meta or Google or anyone really come in or replace Microsoft?
Because they already do well on their own, Meta is doing exceptionally well actually.
It's better business for them if OpenAI just burns into the ground and leaves the cake up for grabs again. It doesn't take a lot of brain power to see that.
The only thing that has sent Google into "Code Red" in its whole history has been OpenAI. They'd love to see it evaporate, and now they're not even spending a dime!
Google+ was an accessory thing. It was like "yeah, we should also have a social network", and it didn't work, and then what happened to them? Literally nothing; source: its stock.
OpenAI is a different beast, they (or some LLM) could displace Google as the main provider of information to the world. You just don't know what you're talking about, lol.
MS leaving would also be a disaster for subsequent development of Bing, Windows Copilot, Office Copilot, Teams AI, and various strategically important Azure services like Semantic Search. They’ve gone all in on OpenAI LLMs, support nothing else and have coupled all their AI to them.
> Microsoft has certain rights to OpenAI’s intellectual property so if their relationship were to break down, Microsoft would still be able to run OpenAI’s current models on its servers.
And in exchange for those rights, OpenAI has certain rights to compute credits. Microsoft doesn't get to break contract and keep what they got out of the deal while ceasing to supply what they provided as part of the deal. That's called theft.
Satya got what he wanted, @sama joins MS to create pretty much a spin-off startup there, OpenAI on suicide watch with employees leaving, absolutely no new funding ever and a just-appointed CEO that wants to "pause" the company, lol.
I've seen people say this but why would Microsoft fund a new Altman startup instead of spending that money on developing their own AI owned by Microsoft?
Google is at best only partially funding Anthropic, and Amazon committed "up to 4 billion" in funding. They have their own competing technology in development.
The moat isn’t very deep or wide here. It does exist, but most FAANG (including Microsoft) should be able to overcome it with the right investments. Microsoft is probably best positioned to leverage this tech with the integration they have been pursuing across most of their products. They have probably made better use of “OpenAI” than any other company and are in the best position to replace them.
Or set off a new bidding war for both the researchers that quit and the pile of new startups... There's no rule that says M$ would have to be the only bidder for the new venture(s).
Not at this magnitude. If you read the latest Semianalysis article, Microsoft’s current infra project is the largest infra project currently being undertaken by humankind (not sure if that’s overstatement, but regardless, $50B largely and competently directed towards (Open)AI-supporting hardware isn’t something easily replicable)
Apple doesn’t have anything close to the experience of running things “at scale” as Microsoft, Google or Amazon. Completely different levels. Apple would have challenges maintaining the update infrastructure for any of these other orgs. They simply do not exist on the same scale of compute power and management. I say this as an Apple apologist whose entire household consolidates on Apple devices and services. iCloud is simply nothing compared to the scale and capabilities of OneDrive and related services. I don’t think cash alone is enough. Proven ability to execute at scale matters.
"Bret Taylor, the former co-CEO of Salesforce Inc., will be on the new board, several people said. Another possible addition is an executive from Redmond, Washington-based Microsoft, OpenAI’s largest shareholder — but Microsoft hasn’t decided whether it wants board representation, some people said."
Hah, Microsoft will be in control from here on out whether they have someone technically on the board or not. They did the embrace and extend; now we're on to extinguish.
A week ago I was saying that it's likely the leading AI company in 5 years time hasn't been founded yet.
After the news Friday and looking more at how Ilya sees the future of neural networks, I actually thought there's a decent chance OpenAI might correct course to continue to lead.
If it becomes too productized under a strengthened Altman, it's back on the list of companies building into their own obsolescence.
The right way to the future is adapting alignment to the increasing complexity of the model. It's not just about 'safety' but about performance and avoiding Goodhart's Law.
The way all major players, OpenAI included, are handling that step is by carrying forward the techniques that were appropriate for less complex models. Which is a huge step back from the approach reflected very early on pre-release for GPT-4's chat model. An approach that seemed to reflect Ilya's vision.
As long as OpenAI keeps fine-tuning to try to meet the low-hanging-fruit product demand they've created, and screwing up their pretrained model on measures that haven't become the industry target, they aren't going to be competitive against yet-to-exist companies that don't take such naive approaches to the fine-tuning step. Right now they have an advantage in Ilya being ahead of the trend, but if Altman returning comes at the cost of Ilya's influence, they are going to continue to dig their long-term grave in the pursuit of short-term success.
He was also in upper management at FB and did a stint at Google. I expect him to be part of Amazon and Apple within this decade. He touches every big company, lol.
Oh man, Bret knows how to use the power of the board. He successfully twisted Elon’s arm into buying Twitter. If the board is choosing him, sounds like they are going all Oracle on Microsoft/Google again, Java style.
If Sam beats Ilya + Bret, I will be even more impressed than I already am.
Can’t wait for Matt Levine’s play-by-play if they hire the same legal team Bret used in the last days of Twitter.
Or it’ll be over in 2 hrs and Sam will win now. Let’s see.
Hard to see how embrace, extend, extinguish fits here. There's no standards-based app ecosystem that depends on some community standard for interplay that they can break by getting market-wide adoption of their proprietary standard. ChatGPT's IO is natural language text prompts/answers. Can MS really create a proprietary extension to natural language, that the world recognizes?
It seems that some people embraced the concept of "embrace, extend, extinguish", and now they are extending it too far. Eventually the usage of the phrase will be so diluted that it will become extinguished.
I don't have enough info to take a position on the current situation there, but I think that's a brilliant selfie. (Pained reaction to having to wear an OpenAI guest ID.)
In Sam’s shoes, as slippery an operator as I’d be, I’d ask a sympathetic employee to register me as a guest just to make that post, whether there were negotiations or not.
Is it so hard to find good people to put on the board? Sam was the CEO OF Y COMBINATOR. Shouldn't he know who is best to put on the board? Be able to seek them out?
Apparently not?
Please say they are not going to put in place a board just as bad as before.
There are no checks and balances. Should OpenAI employees be allowed to veto a board decision if 50% or 67% of them vote to do so? Should OpenAI employees be allowed to vote on at least some of the people allowed on the board? Like the Senate voting to confirm a Supreme Court Justice?
No matter how good the next board will be, the power rules still apply as before, and the same thing could happen again if no other changes are put in place...
I think he’s lucky that apparently many OpenAI employees support him, as otherwise there would be less leverage over the board. Investors can threaten the board, but where would they go if the brainpower stays with OpenAI?
In a city where a normal person cannot buy a house and the employer wants 25% office time? Give me a break, they just want to live like people could in the 1950s.
That number is 91 employees out of OAI's 700. Seems like a pretty reasonable number of people to support Sam's faction, but absolutely recoverable for OAI even if every single one of them leaves.
91 declared out of 700. Which is actually surprisingly high and which, given that these people are actively speaking out against their employer, indicates that that faction is probably a lot larger than those 91, but the rest may have good reasons to play it safe for now rather than risk backing the losing side.
Props to you. No, but mysteriously they are somehow still clinging to their seats. It makes you wonder what the hell is going on, and not a peep from the 'new CEO' either besides his initial vapid tweet.
What's there to be passionate about in a doomed business? Where is OpenAI going to get compute from? What work are they going to get done if everybody else follows Altman?
They've been on a hiring spree, so the majority of employees probably joined within the last 1-2 years and care more about making sure their equity (or profit units) gets cashed out than about the original mission of OpenAI. I doubt all the enterprise sales reps they've brought in care about AI alignment or making sure the profit from AGI helps humanity.
If he wins in this, it’ll mean he’d maneuvered himself into a position of controlling the whole thing—in fact, if not on paper—some time back. The “coup” people’ve been writing about will have been his actions over the last year or so, not the board’s.
Big money on the line. Insane, life-changing payouts in the cards. Altman and MS on the side of those, the board on the side of the mission. Money’s likely to win.
If the board has the support of all the staff the board will be absolutely fine.
That said, it's not clear to me that the board is supported by the staff.
So if Sam goes, and many of the key staff go... will be interesting.
And the board's style in all this, if that is the "mission", is wild. You have partners, staff, businesses, a VC round in progress, and you blow it all up without, it sounds like, even talking to your board president? Normally this type of ouster requires a properly noticed meeting!
I think some of the staff definitely supports Altman, and are down to quit and follow him. Personally I hope that's what happens. Separate the altruistic-mission guys from the big-profit guys and let them go to town in their respective spheres.
The sad part is that nearly 100% of investment and resources will follow the big-profit group leaving the altruistic-mission guys in the dust with no resources and no money.
Plot twist: Ilya, Altman and the other guy who was fired planned all this to make the board look dumb, remove them, and take full control without being held back any longer.
I don't think so at all -- the board was simply far dumber than we thought.
The board didn't plan or think this through at all.
This isn't about Sam being powerful, just about him being a reasonable predictable cofounder Microsoft can work with. It's the rest of the board that shocked Microsoft with how unprofessional their actions were, that they can't be worked with.
The very idea that they would fire Sam without even consulting with Microsoft first is such a gigantic red flag in judgment.
If the issue is they believe Sam's focus on commercialization is inherently against their charter, Microsoft is key to that - they are the shining example of this shift. Consulting with them would be antithetical to solving the problem.
For a for-profit, the pragmatic approach, given that Microsoft is also the majority compute provider (we can set aside the investments for the moment - most are in the form of compute credits and come in tranches; OpenAI is not sitting on $10B in cash in their bank accounts or whatever), would make a lot of sense.
But they're a non-profit that operates in accordance with their pipe-dream charter. You and I might be skeptical of it or just think it's generally dumb, but non-profits are allowed to believe in pipe-dreams and pursue them.
At the very least they still issued a poorly worded statement and have not been able to recover from that, but it is quite possible that their attitude towards the investors in the for-profit is entirely consistent with the charter they are supposed to be following.
Exactly. The board should have been stocked with seasoned professionals not with people who on an idle Friday decide to throw their weight around to see what that feels like without any idea of the possible ramifications of such acts.
The key question in my mind is not who is going to be on the new board, but whether Ilya Sutskever will stay if Altman comes back. I worry that OpenAI without Ilya is not going to produce groundbreaking innovations at the same pace. Hopefully Sam Altman and Ilya Sutskever can patch things up. That's more important than who they add or remove to the board.
If Ilya’s problem with Sam was that he was acting out on his own and deviating from founding principles, he’s not going to have a good time under Elon.
From everything I can piece together the underlying problem is that Sam defunded Ilya's alignment research because they only have so much compute and they needed all of it to keep up with the demand for ChatGPT and the APIs, especially after dev day.
Ilya losing access to the GPUs he needs to do his research so that the company can service a few more customers seemed like a fundamental betrayal to him and a sign that Sam was ignoring safety in order to grow marketshare.
If Elon is able to promise him the resources he needs to do his research then I think it could work out.
More likely that the Worldcoin project is doing poorly. With crypto and NFTs going down it could be a house of cards, and in need of an urgent injection of money.
Despite being invented for non-scam purposes, it turns out to be a productivity multiplier for scammers more than anyone else.
And also, Bitcoin might be the exception that proves the rule: every other chain or token is managed by a few insiders taking get-rich-quick marks for a ride.
I think it'd be really funny if Bitcoin was originally supposed to also be a rug pull, but Satoshi died suddenly or something, and so Bitcoin was just the scam that never managed to complete itself.
> If Elon is able to promise him the resources he needs to do his research then I think it could work out.
Who on earth would ever trust an Elon promise at this point? The guy literally can’t open his mouth without making a promise he can’t keep.
Unless Ilya is getting something in a bulletproof contract and is willing to spend a decade fighting for it in court, he’s an idiot doing anything with Elon.
Musk left in 2018 after disagreements over the future of the company. In 2019 OpenAI started their 'OpenAI LP' for-profit entity. It seems entirely reasonable that the profit vs mission motive that seems to be driving this issue, is also what drove Musk to leave.
Come on now, does it sound like Musk to you to leave due to the prospect of profit? Surely there was some kind of power struggle there that he couldn't win, and the mission thing was a good story to tell others.
The problem is that Elon's approach to alignment, as presented during the x.ai launch, is pretty different from what Ilya says, and as far as I can tell pretty naive on top of that.
I'm starting to suspect this was all orchestrated by Google. Win-win. Google has the hardware, the data, and the models. Google only lacks OpenAI's secret refining sauce. Getting back Ilya would be the best outcome for them.
I apologize in advance for this snarky comment, but with Elon’s track record when it comes to safety at Twitter and Tesla, I would doubt his sincerity or follow-through on AI safety.
I suppose "safety" means different things to different people. Elon seems to be of the type that cares about existential risks. One reading of him is that he sees Tesla, Twitter and SpaceX as tools to mitigate what he sees as existential risks.
In the case of Tesla, to accelerate the development of electric cars, in the case of Twitter, to reduce the probability of civil war and in the case of SpaceX to eventually have humanity (or our descendants) spread out enough that a single catastrophic event (like a meteor, gray goo or similar) doesn't wipe us out all at once.
His detractors obviously will question both his motives and methods, but if we imagine he's acting out of good faith (whether or not he's wrong), his approach to AI fits the pattern, including his story about why he helped with the startup of OpenAI in the first place.
For someone with an x-risk approach to AI safety, the first concern is, to quote Ilya from the recent Alignment Workshop: "As a bare minimum, let's make it so that if the tech does 'bad things', it's because of its operators, rather than due to some unexpected behavior".
In other words, for someone concerned with existential risk, even intentional "bad use" such as using AI for killer robots at a large scale in war or for a dictator to use AI to suppress a population are secondary concerns.
And it appears to me that Elon and Ilya both have this outlook, while Sam may be more concerned with shorter term social impacts.
I don’t know if Ilya contributed anything technical to OpenAI in the past 2-3 years. He is broadly thanked along with Sam in the GPT-4 list of contributors, he is not mentioned in the ChatGPT list of contributors, and he is again thanked for advice in the GPT-3 list of contributors. Of the folks who resigned upon Sam's ouster, Jakub Pachocki is credited as lead of GPT-4, and Greg Brockman is credited for multiple things on GPT-4 (he was the lead for training infra setup).
All of them would have left if Sam left; if anything, letting Sam go would hamstring OpenAI significantly more than letting Ilya go.
Currently, it’s very unclear who operates under what motives. How much is it about ego? How much is it about money, and how much is it due to intellectual positions? Maybe there are no heroes and maybe there are no antiheroes? With the recent news about other investments and deals, the facade doesn’t even seem to resemble OpenAI’s reality.
I can’t wait to read the autobiographies of the involved parties.
I had ChatGPT give me some proposals for screenplays.
My favorite was Rainbow MosAIc, a Rashomon-style film taking place mainly from Friday to Monday. It played with all the different potential motivations and theories. It managed a half-decent metaphor by representing the different points of view via the different video-conferencing cameras.
I can absolutely empathize with Ilya here, though. As far as I know the tech making openai function is largely his life’s work. It would be extremely frustrating to have Sam be the face of it, and be given the credit for it.
Sam is clearly a very accomplished businessman and networker. Those people are super important, I wish I had a person like him on my team.
I’ve had the experience of other people tacitly taking credit for my work. Giving talks about it, receiving praise for their vision. It’s incredibly demoralizing.
I’m not necessarily saying Sam did this, since I don’t know any of these people. Just speculating on how it might feel to Ilya watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work.
What makes you think it is 'his' work and not theirs? I remember when OpenAI was just a joke compared to DeepMind. The turning point (as I remember) was when they used [1] deep reinforcement learning on Dota 2. Clearly Ilya (also one of the authors) contributed, but so did many others on the team, I assume?
Ideas are like children. You don't just need to give birth to them; you also need to raise them, teach them, challenge them, and show them the world.
Giving birth to an idea is a necessary condition and sets the boundaries for so much of what it can achieve. But if you're unable to raise it to become a world champion, it isn't worth anything.
I've been on the raising-ideas side way more in my 20+ year career in tech. I know some people became bitter and scornful of me because I pushed their ideas to become something big and received a lot of credit for that. And I try to give credit where credit is due. But often enough, when I try to share the spotlight (in front of a customer or when presenting to the BoD, for example), the brilliant engineer withers under pressure or actively harms his idea by pointing out its flaws excessively. It's a delicate balance.
This isn’t a given and not everyone’s view. Doing a thing and choosing what to do with said thing is that person’s prerogative. The specifics will matter, but I don’t agree that someone else must push and profit off an idea if the originator doesn’t. The idea of patents agrees with this too.
Patents are a compromise: you keep your prerogative, yes, but for a limited amount of time, and you agree to publish it publicly so that everyone can access it. Eventually, if you do nothing with it, why would we keep humanity from benefiting from it?
It's like, imagine a guy has a nice idea to cure cancer, but plays the princess with it and refuses to industrialize it, while people are dying left and right. Surely it becomes indefensible, and at some point someone brave will do the right thing and implement the idea. You have a right to reap the benefit of your ideas, but you have a duty not to deprive humanity of any benefit just because you thought of it first, I feel?
I think Sam has been given credit for being a good CEO and leader, which clearly is deserved. I've never heard him take credit for technical accomplishments. Ilya has been doing plenty of talks, podcasts, etc.--if anyone's the technical face of OpenAI, it's him. There's no lack of praise or credit given to him.
"Just speculating on how it might feel to Ilya watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work."
The whole point of a CEO is to do this kind of stuff. If your best engineers are going on world tours, talking to politicians, and preparing for keynotes, that's a pretty terrible use of their time. Not to mention that most of them would hate doing it.
I'm a developer, have used OpenAI as a beta user since before their public launch, and have been interested in the structure and business side of AI, and I had never heard of Ilya until this recent blowup. I'm just one data point, but my guess is that the vast, vast majority of the public that knows anything about AI has also never heard of Ilya.
Yes, you are only one data point. Check the views on Ilya's interviews on YouTube. E.g. his interview on Lex (which he did years before Sam Altman) has 400k views, which demonstrates that he is a very well-known entity in the tech/AI space.
Ilya's podcast was over 3 years ago and Lex's average IT podcasts had 50-100k views. Ilya got 400k. For reference, the absolute legend Jim Keller got 600k at the same time.
So yeah, Ilya is a very well-known entity. No, ordinary folks don't need to know him, but if you are in IT and especially if you have anything to do with AI, then not knowing about Ilya says more about your informational bubble than about Ilya's alleged lack of recognition.
It is akin to claiming to be into crypto on the development side and not knowing the name of Vitalik Buterin.
Obviously Ilya will not be as famous as Sam, since Sam is doing world tours and talking to the who's who of world politics. But Ilya, Karpathy, gdb are all well respected and known in dev circles.
Even the recent OpenAI profile in one of the prominent publications covered Mira, Ilya and gdb in addition to Sam.
But the fundamental question is why would a researcher expect (if they do) that they will be as well known as the CEO who is the face of organisation?
I was under the impression that the transformer is the tech making openai function, and that Ilya's name is not on the 2017 paper introducing the idea.
A LOT of people have put a ton of energy into OpenAI, and a lot have put A LOT of money into it. If it was as petty as credit, then screw them all, as they just don’t get it. It’s all on the shoulders of others too…
Ilya doesn’t want to be known as the Steve Wozniak in this relationship while Sam is perceived as the Steve Jobs. Unless you’re technically inclined no one remembers or praises the contributions of the Woz.
Given that nothing criminal happened, canning Sam with no chance for discussion was just overkill.
It's probably more of an intellectual / philosophical position, given that they just did not think through the real impact on the business (and thus the mission itself)
I'm inclined to assume that something stupid was done. It happens. They should resolve it, fix the rules for how the board can behave, and move on.
Despite the bungling, Ilya is probably still a good voice to have on the board. His key responsibility (superalignment) is a core part of OpenAI's mission.
While we don’t know the whole story, I don’t think Sam is innocent in this matter. It seems likely that this was a recurring disagreement, and perhaps this was simply a step too far where the board had to act. When you fire somebody, typically you don’t give them a heads up.
Worse. Which is exactly why superintelligence is scary - it'll make the humans around it go wild for power, and then it will be impossible (by definition) to predict.
Huh. I imagined many scenarios, including the more obvious and dangerous one, "AI manipulating people unaware of its existence" - but I never considered a scenario in which the AI makes its existence widely known, and perhaps presents itself as more dangerous than it is, and then it just starts slightly nudging all the people racing to take control over it.
Both are too complicated. All a true AI has to do to get control of everything is promise 10% annual returns and guaranteed victory in battle. Limited-time offer, sign up today.
Done.
Any actual AI takeover will be boring and largely voluntary. For certain definitions of voluntary.
That's the playbook of any dictator. Hitch your horse to my wagon and we'll go places. But stray from the wagon and I'll have you shot by someone who is loyal to me. And it works. Without their henchmen little creeps wouldn't get out of the gate because they are invariably complete cowards.
Initially I thought it was about money. Now it seems to be about intellectual positions: Sama wants to move fast and break things, Ilya does not. I don’t want my bank to replace customer support with an LLM agent that has access to internal APIs, or an LLM driving a medical decision, just yet.
As an example, a couple of years ago Crisis Text Line decided to sell data to a for-profit spin-off. Their justification was that the data was anonymized, which was BS since it's unstructured text data, and that it wasn't against the terms of service, which users had agreed to. Mind you, these users were people in crisis, maybe even on the brink of suicide. This was highly unethical and caused a backlash. Then one of the board members wrote a half-assed "reflection" post [1]. If some core employees of CTL had done a "coup" to stop this decision, because they believed it was unethical and dangerous, wouldn't it be justified?
>I don’t want my bank to replace customer support with an LLM agent that has access to internal APIs
If it weren't for the mentality you are rallying against we wouldn't have ChatGPT. Google, Meta, everyone had these LLMs sitting around. OpenAI was the only company with the balls to release it to the public.
The other question is what happens to the pretense of "safety". The CEO explicitly said multiple times that he does not have unilateral control, that he is subordinate to the board, and that the board's job was to remove him if he was pursuing an unsafe course. Assuming he gets reinstated, that would all be shown to be false.
Yeah, a $2.8Tn tech company and a venture capitalist's network managing to overturn the non-profit Board's completely legal action (legally required of the Board, even) they took in furtherance of their Charter would be the ultimate practical demonstration that no legal structure, not even those specifically designed to do so, is above what capital wants. It's their world, we're just living in it, and they want to make sure they make the next world subservient to them too.
Just because something is legal does not mean it is right. If they’d spent a week talking with him and ultimately couldn’t resolve their differences and then fired him that would be one thing.
This has been going on for months. Sam sidelined Ilya out of his job a month ago, after a long time of Ilya trying to convince him he was going down the wrong course.
You have evidence Sam lied, like their statement implies? How do you know the action was in furtherance of their charter instead of over petty grievances?
I think Sam could "potentially" be the bigger man in this.
Ilya may still be someone who should be on the board... Especially given his role as head of alignment research. He deserves a say on key issues related to OpenAI.
People get excited. Stupid things happen. Especially in startups.
ChatGPT having become so successful doesn't change the fact that the company as a whole is still fairly immature.
They should seriously just laugh about it and move on.
Let's just say that Ilya had a bad couple of days, and probably needs a couple of weeks of vacation.
I agree. As long as Sam has total control over the board, it's fine if Ilya doesn't approve of everything Sam does. Not worth losing a top AI researcher over this.
That defeats the purpose of having a board. Which is sometimes the desired outcome, of course, but if I were Microsoft I am not sure I’d want Sam in there with zero checks and balances.
I also wouldn’t want Ilya in there without checks and balances, to be clear. So the challenge is identifying the right adults.
I don’t think it’s realistic to expect that negotiation to complete successfully in the eyes of all parties by 5 PM today. It’s possible that Ilya will give up on having his requirements satisfied and leave.
If Sam returns he doesn't want to risk getting ousted again? So presumably he'll appoint people 100% loyal to him.
Plenty of precedent for tech founders to have total board control. It will take a little while for Sam to consolidate power, but he won't forget what happened this weekend and he'll play the long game accordingly.
Sam isn’t the only person who has to agree to the members of the new board. Sam also has not displayed a mastery of board politics in the past, either at OpenAI or at Y Combinator. Of course he has a strong hand to play here, but then again so does Microsoft.
Is it assumed that Sam would be more aligned with Microsoft than not? From a vague awareness only, I'd guess that Microsoft wants to wield OpenAI against rivals, and SamA wouldn't find it unappealing to be bigger than Zuckerberg?
So you think Ilya will stay while getting none of what he wanted because he still wants to have a job at openai? Do you think he did it with no real grievances?
He has succeeded in forcing a negotiation on the issues he's unhappy about. If he felt his concerns were being ignored previously, well, that is certainly no longer the case. I wouldn't assume necessarily that he's not going to get any of what he wanted.
It seems somewhat clear that at the end of the day there are two camps: Ilya's and Sam's.
Sam is backed by investors who are looking for returns, and are not sure if Ilya will get them the same juicy 100X.
So, if Sam comes back, then I’m pretty sure Ilya will go off on his own. Whether he will focus on GPT or AGI or something else is anyone’s guess, as is how many from OpenAI will follow him, since everyone loves money.
EDIT: Ilya should have no trouble finding benefactors of his own; whether they are one of the FAANGs or VCs is TBD.
Why wouldn't I trust someone who took action to uphold their organizational charter, knowing there would be intense pressure to do otherwise?
Investment is only partially about trust. I agree Sam's a pretty investable guy. I expect Sam to pursue growth through fundraising, product commercialization, corporate partnerships, etc in exactly the YC mode. He's also clearly ok with letting the momentum of that growth overwhelm the original stated aims of OpenAI, especially given what the original firing press release said about Sam not being entirely forthright. I suspect Microsoft made their investment knowing that something like this might happen. It's not trustworthy that he tried to overwhelm nonprofit aims under for-profit momentum, but if you're an investor do you care?
I doubt it is fixable. Firing aside (which will be hard, but not impossible to make peace on), they have fundamental differences in goals. Even if you force a shotgun marriage to save the company, I would bet this will be a short term reprieve. Both goals can be successfully pursued, just not at the same company. My 2c.
> Ilya staying is the only thing that is basically guaranteed.
How is that guaranteed? If investors remove him from board of directors, he may get pissed off and quit, no?
> In addition, no one is irreplaceable.
In theory, maybe. In practice, it is not always easy. Nearly a year after ChatGPT came out Google hasn't been able to catch up. If it was easy to replace Ilya after he left Google, they would have caught up by now.
You know, I feel like there's some territory there for a series of comedy-documentaries (think Drunk History) about various tech companies, with the same cast in every episode.
This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary. Eventually, the for-profit arm, and its investors, will find its nonprofit parent a hindrance, and an insular board of directors won't stand a chance against corporate titans.
This was pretty clearly an attempt by the board to reassert control, which was slowly slipping away as the company became more enmeshed with Microsoft.
It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation.
The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.
I guess "safe artificial general intelligence is developed and benefits all of humanity" means an AI that is both open (hence the name) and safe.
no. it's anti-openness.
the true value in ai/agi is the ability to control the output. the "safe" part of this is controlling the political slant that "open" ai models allow. the technology itself has much less value than the control that is possible to those who decide what is "safe" and what isn't. it's akin to raiding the libraries and removing any book or idea or reference to historical event that isn't culturally popular.
We're still waiting for an explanation from Altman about his alleged involvement with conflicting companies while serving as CEO of OpenAI.
According to FT this could be the cause for the firing:
“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”
I don’t see how that factors in. What matters is OpenAI’s enterprise customers reading about a boardroom coup in the WSJ. Completely avoidable destruction of value.
I think what people in this thread and others are trying to say is that to run an organization like OpenAI you need lots and lots of funding. AI research is incredibly costly due to highly paid researchers and an ungodly amount of GPU resources. Putting all current funding at risk by pissing off current investors and enterprise customers puts the whole mission of the organization at risk. That's where the perceived incompetence comes from, no matter how good the intentions are.
I understand that. What is missing is the purpose of running such an organisation. OpenAI has achieved a lot, but is it heading in the direction, and towards the purpose, it was founded on? I do not see how one can argue that it is. For a non-profit, creating value is a means to a goal, not a goal in itself (as opposed to a for-profit org). People who think the problem with this move is that it destroys value for OpenAI showcase the real issue perfectly.
Some would say it is the other way around. OpenAI's mission was not supposed to be maximising profit/value, especially if it can be argued that this goes exactly against its original purpose.
Isn’t it amazing how companies worry about lowly, ordinary employees moonlighting, but C-suiters and board members being involved in several ventures is totally normal?
It is hard to negotiate when the investors and the for-profit part basically have much more power. They tried to present them with a fait accompli, as this was their only chance, but they seem to have failed. I do not think they had a better move in the current situation, sadly.
You do not fire a CEO because you hold some personal grudges towards them. You fire them because they did something wrong. And I do not see any evidence or indication of smearing Altman, unless they are lying about it (and I see no indication that they are).
Then, he progressively sold more and more of the company's future to MS.
You don't need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company could have been a quaint, quiet overachiever.
Instead the company started to hockey stick and everyone did what they knew: Sam got the investment and money, and the tech team hunkered down and delivered GPT-4, with GPT-5 on the way.
Was there a different path? Maybe.
Was there a path that didn’t lead to selling the company for “laundry buddy”, maybe also.
On the other hand, Ms knew what they were getting into when its hundredth lawyer signed off on the investment. To now turn around as surprised pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.
You're arguing their most viable path was to fire him, wreak havoc and immediately seek to rehire and further empower him whilst diminishing themselves in the process? It's so convoluted, it just might work!
Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks, but thought they must, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.
Oh, then my apologies, it's unclear to me what you're arguing: that the disaster they find themselves in wasn't foreseeable?
That would imply they couldn't have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked amid unprecedented success, with (unsubstantiated) allegations of wrongdoing, and leap on the story? Generally those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list, or in any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by attempting to reinstate him.
If Microsoft had to put out a statement saying "it's all good, we got the source code", clearly the openness of OpenAI was lost a while ago. This move by the board was presumably primarily good for the board.
>Bigger concern would be the construction of a bomb, which, still, takes a lot of hard to hide resources.
The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval catholic church would find excessive.
To be fair, it is a very subjective term, god-like. You could make a claim for many different technical advancements representing god-like capabilities. I'd claim that many examples exist today, but many of them are not readily available to most people for inherent or regulatory reasons.
Now, I feel even just "OK" agential AI would represent god-like abilities. Being able to spawn digital homunculi that do your bidding for relatively cheap and with limited knowledge and skill required on the part of the conjuror.
Again, this is very subjective. You might feel that god-like means an entity that can build Dyson Spheres and bend reality to its will. That is certainly god-like, but it is a much higher threshold than what I'd use.
"The board" isn't exactly a single entity. Even if the current board made this decision unanimously, they were a minority at the beginning of the year.
I'm not trying to throw undeserved shade, but why do we think this is something as complex as that and not just plain incompetence? Especially given the cloak-and-dagger firing without consulting or notifying any of their partners beforehand. That's just immaturity.
The most logical outcome would be for Microsoft to buy the for-profit OpenAI entity off its non-profit parent for $50B or some other exorbitant sum. They have the money, this would give the non-profit researchers enough play money that they can keep chasing AGI indefinitely, all the employees who joined the for-profit entity chasing a big exit could see their payday, and the new corporate parent could do what they want with the tech, including deeply integrate it within their systems without fear of competing usages.
Extra points if Google were to sweep in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, when the OpenAI board is down to 4 people, the current leader of whom has prior Google ties, and their interest is to play with AI as an academic curiosity, which a fat warchest would accomplish. Plus if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.
The new investors entered at a ~90B USD valuation for info.
As for Microsoft, I don't think they need it. Assuming they had the whole $90B to spend, it doesn't really make sense:
- They already have full access to OpenAI's source code and datasets (because the whole training and runtime stack runs on their servers).
- They could poach employees with better offers, end up with a much more efficient cost basis, and increase retention (whereas OpenAI employees may become so rich after a buy-out that they'd be tempted to leave).
- They could replicate the tech internally without any doubt, and without OpenAI.
Google is in deep trouble for now, perhaps they will recover with Gemini. In theory they could buy OpenAI but it seems out-of-character for them. They have strong internal political conflicts within Google, and technically it would be a nightmare to merge the infrastructure+code within their /google3 codebase and other Google-only dependencies soup.
It wasn't sufficient for Google+ or Farmville either, but both Google and Meta have extremely competitive LLMs. If Microsoft commit themselves (which is a big if), they could have a competitive AI research lab. They're a cloud company now though, so it makes sense that they'd align themselves with the most service-oriented business of the lot.
GPT-4 is an order of magnitude larger but not an order of magnitude better. Even before that, GPT-3 was not a particularly high-water mark (compared to T5 and BERT), and GPT-2 was famously so expensive to run that it ran up a 6-figure monthly cloud spend just for inference. Lord knows what GPT-4 costs at scale, but I'm not convinced it's cost-competitive with the alternatives.
GPT-4 is an existential threat to Google. Since March 24 of this year, 80% of the time I ask GPT-4 questions I would google before. And Google knows this. They are throwing billions at it but simply cannot catch up.
Beating OpenAI in a money-pissing competition is not their priority. I don't use Google or harbor much love for them, but the existence of AI does not detract from the value of advertising. If anything, it funnels more people into it as they're looking to monetize that which is unprofitable. ChatGPT is not YouTube; it doesn't print money.
Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.
For the sake of your wallet, I hope you don't put money on that. Google certainly spends an order of magnitude more than OpenAI because they have been around longer than them, ship their own hardware and maintain their own inferencing library. The amount they spend on training their LLMs is the minority, full-stop.
I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.
Exactly. Google has so much more resources, tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that - if you haven't tried comparing Bard's output to GPT-4 for the same questions - try it, it will become obvious.
It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.
From a user's POV, GPT-4 with search might be, but not alone. There's still a need for live results and citing specific documents. Search doesn't have to mean Google, but it can mean Google.
From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.
LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.
I think they'd only be able to improve the SNR if they know how to separate fact from fiction. While I would love to believe they can do that in 1-2 years, I don't see any happy path for that.
There are many versions of the GPT-4 model that have appeared since the first one. My point is that Google and others still cannot match the quality of the first one, more than a year after it was trained.
I personally believe these are marketing failures rather than technical failures.
I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories, the cat is out of the bag with AI now.
The reason for a buy-out is to make this all legally "clean".
Sure, Microsoft has physical access to the source code and model weights because it's trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or enterprise software system, you'll know that there's a big legal firewall around customer data that is stored within the company's systems, and you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.
Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.
A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.
Yeah, but also remember that Altman and Musk started the non-profit to begin with (back when both their reputations were much different). They were explicitly concerned about Google's dominance in AI. It was always competitive, and always about power.
Wikipedia gives these names:
In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture
Do any of those people sound like their day job was running non-profits? Had any of them EVER worked at a non-profit?
---
So a pretty straightforward reading is that the business/profit-minded guys started the non-profit to lure the idealistic researchers in.
The non-profit thing was a feel-good ruse, a recruiting tool. Sutskever could have had any job he wanted at that point, after his breakthroughs in the field. He also didn't have to work, after his 3-person company was acquired by Google for $40M+.
I'm sure it's more nuanced than that, but it's silly to say that there was an idealistic and pure non-profit, and some business guys came in and ruined it. The motive was there all along.
Not to say I wouldn't have been fooled (I mean, certainly employees got many benefits, which made it worth their time). But in retrospect it's naive to accept their help with funding and connections (e.g. OpenAI's first office was Stripe's office) and not think they would get paid back later.
VCs are very good at understanding the long game. Peter Thiel knows that most of the profits come after 10-15 years.
Altman can take no equity in OpenAI, because he's playing the long game. He knows it's just "physics" that he will get paid back later (and that seems to have already happened)
---
Anybody who's worked at a startup that became a successful company has seen this split. The early employees create a ton of value, but that value is only fully captured 10+ years down the road.
And when there are tens or hundreds of billions of dollars of value created, the hawks will circle.
It definitely happened at say Google. Early employees didn't capture the value they generated, while later employees rode the wave of the early success. (I was a middle-ish employee, neither early nor late)
So basically the early OpenAI employees created a ton of value, but they have no mechanism to capture the value, or perhaps control it in order to "benefit humanity".
From here on out, it's politics and money -- you can see that with the support of Microsoft's CEO, OpenAI investors, many peer CEOs from YC, weird laudatory tweets by Eric Schmidt, etc.
The awkward, poorly executed firing of the CEO seems like an obvious symptom of that. It's a last-ditch effort for control, when it's become obvious that the game is unfolding according to the normal rules of capitalism.
(Note: I'm not against making a profit, or non-profits. Just saying that the whole organizational structure was fishy/dishonest to begin with, and in retrospect it shouldn't be surprising it turned out this way.)
This makes a lot of sense. I wonder if the board's goal in firing Sam was to make everyone (govt., general public) understand the for-profit motives of Sam and most employees at this point.
Either Sam forms a new company with a mass exodus of employees, or outside pressure changes the structure of OpenAI towards a clear for-profit vision. In both cases, there will be no confusion going forward about whether OpenAI/Sam have become a profit-chasing startup.
Chasing profits is not bad in itself, but doing it under the guise of a non-profit organization is.
Thank you. Not a lot of things remind me of this heady stuff, but that comment did. So here goes.
---
A Nobel Prize was awarded to Ilya Prigogine in 1977 for his contributions in irreversible thermodynamics. At his award speech in Stockholm, Ilya showed a practical application of his thesis.
He derived that, in times of superstability, lost trust is directly reversible by removing the cause of that lost trust.
He went on to show that in disturbed times, lost trust becomes irreversible. That is, in unstable periods, management can remove the cause of trust lost--and nothing happens.
Since his thesis is based on mathematical physics, it occupies the same niche of certainty as the law of gravity. Ignore it at your peril.
The problem, though, is without the huge commercial and societal success of ChatGPT, the AI Safety camp had no real leverage over the direction of AI advancement worldwide.
I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that the "perfect is the enemy of the good."
That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.
I agree. I think a significantly better approach would have been to vote for the elaboration of a "checks and balances" structure to OpenAI as it grew in capabilities and influence.
Internal to the entire OpenAI org, it sounds like all we had was just the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically defaults to siding with the for-profit arm).
I wish they worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all 3 separate, except for "safe AI that benefits humanity".
The truly hard to solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch could exist that was designed to offset the influence of investors, that might have resulted in closer to the right balance.
I think the idea of separate groups within the company checking and balancing each other is not a great idea. This is essentially what Google set up with their "Ethical AI" group, but this just led to an adversarial relationship with that group seeing their primary role as putting up as many roadblocks and vetoes as possible over the teams actually building AI (see the whole Timnit Gebru debacle). This led to a lot of the top AI talent at Google jumping ship to other places where they could move faster.
I think a better approach is to have a system of guiding principles that should guide everyone, and then putting in place a structure where there needs to be periodic alignment that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but no single org has the "my job is to slow everyone else down" role).
I like this idea, but I'm not sure "democracy" is the word you're looking for. There are plenty of functioning bureaucracies, in everything from monarchies to communist states, that balance competing interests. As you say, a system of checks and balances weighing the interests of the for-profit and non-profit arms could have been a lot more interesting. Though honestly I don't have enough business experience to know whether this kind of thing would be at all viable.
Better to have a small but independent voice that can grow in influence than to be shackled by commercial interests and lose your integrity. E.g., how many people actually give a shit what Google has to say about internet governance?
> That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did.
If the board were to have any influence they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing their influence that they can never use it.
Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships? It’s ridiculous.
Because they're right. Maybe principles other than, "Get the richest," are important when we're talking about technology that can end the world or create literal hell on Earth (in the long term).
One wishes someone had pulled a similar (in sentiment) move on energy companies and arms suppliers.
> Why should anybody involved put up with this sort of behavior? Smearing the CEO? Ousting the chairman? Jeopardizing key supplier relationships?
Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board, that's what they're for!
They are the last survivor offering an alternative to Blink and seem to be indefinitely sustainable for their core product.
Other companies tried competing against Chrome, and so far Mozilla is the most successful, as everyone else gave up and now ships Chrome skins that people basically only use through subterfuge or coercion. I'd say that's pretty good.
*Challenges and Adaptation:* Mozilla Corporation has faced financial challenges, leading to restructuring and strategic shifts. This includes layoffs, closing offices, and diversifying into new ventures, such as acquiring Fakespot in 2023
*Dependence on Key Partnerships:* Its heavy reliance on partnerships like the one with Google for revenue has been both a strength and a vulnerability, necessitating adaptations to changing market conditions and partner strategies
*Evolution and Resilience:* Despite challenges, Mozilla Corporation has shown resilience, adapting to market changes and evolving its strategies to sustain its mission, demonstrating the effectiveness of its governance model within the context of its organizational goals and the broader technology ecosystem
In conclusion, while both OpenAI and Mozilla Corporation have navigated unique paths within the tech sector, their distinct governance structures illustrate different approaches to balancing mission-driven goals with operational sustainability and market responsiveness.
> This is why, when you claim to be running a non-profit to "benefit humankind," you shouldn't put all your resources into a for-profit subsidiary.
To be frank, they need to really spell out what "benefitting mankind" is. How is it measured? Or is it measured? Or is it just "the board says this isn't doing that so it's not doing that"?
They should define it, sure. Here's what I'd expect this means:
- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. hire our virtual assistants for 30k a year).
- Making models with a mind to all threats (existential, job replacement, scam uses)
- Potentially open-sourcing models that are deemed safe
So far I genuinely believe they are doing the first two and leaving billions on the table they could get by jacking their price 10x or more.
From my time working on search-related problems at Google, this might be a bit of a winner-take-most market. If you have the users, your system can more effectively learn how to do a better job for the users. The interaction data generated is excludable gold; merely knowing how hundreds of millions of people use chat bots is incredibly powerful, and if the company keeps being the clear and well-known best, it's easy to stay the best, because the learning system has more high-quality things to learn from.
While Google did do a good job milking knowledge and improving from its queries and interaction data, OpenAI surely knows even better how to get information out of high-quality textual data.
OpenAI made an interface where you can just use natural language; it didn't make you learn its own bastardized quasi-command language of keyword jargon. It's way more natural.
If a model is not safe, the access should be limited in general.
Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do the following harmoniously:
1. Release new models that do the same things it sells access to via its 'products', with reasonable instructions on how to run them on-prem (i.e., I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible by a nonprofit purportedly geared towards research).
2. Provide online access to models with a cost model that lets others use them while funding the foundation.
3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.
4. Not allow potentially unsafe models to be available through anything less than both research branches.
Perhaps, however, I am too idealistic.
On the other hand, Point 4 is important, because under the current model we can never know whether a previously unsafe model has been truly 'patched' for all variations of that model.
OTOH, if a given model would violate Point 4, I do not trust the current org to properly disclose the found gaps; better to quietly patch the UI and intermediate layers than ask whether a fix can be worked around with different wording.
I've yet to hear what, exactly, underlies the sneering smugness over the notion that the board is going to get their asses handed to them. AFAICT, you have a non-profit with the power to do what they want, in this case, and "corporate titans" doing the "cornered cat" thing.
From the article: "...but Microsoft hasn’t decided whether it wants board representation..."
This is not a good sign. Microsoft, the largest ($10Bn) investor, who is in the middle of pushing through the restructuring of the company, hasn't decided if they want board representation? The only reason to do that is to keep their options open, in the future, to hit OpenAI hard (legally and/or by raiding its personnel).
Board representation would come with a fiduciary responsibility and it looks like they may not want that. I could only imagine the intensity of Microsoft senior engineers screaming that they could replicate all of this in-house (not saying whether it's justified or not).
Microsoft is looking amateur. Even I, a layman, can take one look at OpenAI's ridiculously convoluted structure and its laughably threadbare and ill-equipped board and know something is wrong.
Microsoft should have looked at this and forced them to clean up their act before getting in bed with them. Now they're embroiled in the bush-league shenanigans.
Microsoft wanted to flip a non-profit explicitly founded for the good of humanity into a for-profit implicitly for the good of Microsoft. That’s already a lot of shenanigans.
Everyone keeps saying that Microsoft made a huge mistake by not having board representation, and I couldn't disagree more.
Microsoft's relationship with OpenAI was really ideal from a speed-of-advancement perspective. That is, reams and reams have been written about how Google has to move at such a slow pace with AI productization because, essentially, they have so much to lose. Microsoft saw this first hand with their infamous Tay AI bot, which turned into a racist Hitler lover in a day.
Microsoft's relationship with OpenAI was perfect - they could realistically be seen as separate entities, and they let OpenAI take all the risk of misaligned AI, and then only pull in AI into their core services as they were comfortable. Google's lack of this sort of relationship is a direct hindrance to their speed in AI advancement and productization. Microsoft's lack of a board seat gives them a degree of "plausible deniability" if you will.
Plus, it's not like Microsoft's lack of a board seat impacts their influence that much. Basically everyone believes that the push to get Altman back has Microsoft's/Nadella's fingerprints all over it. Their billions give them plenty of leverage, and my bet going forward is that even if they don't take a board seat outright, they will demand that board membership be composed of more "professional", higher caliber board members that will likely align with them anyway.
Not necessarily. They still would have needed to have a majority of board seats on their side - I mean, Brockman was chairman of the board and he didn't find out about all this until the machinations were complete.
Read a good article about the history of the OpenAI board that argued this all went down due to the recent loss of 3 board members, bringing total board membership from 9 to 6 (including losses like Reid Hoffman, who never would have voted for something like this), and Altman wanted to increase board membership again. Likely the "Gang of Four" here saw this as their slim window to change the direction of OpenAI.
Possibly, but not certainly. Replacing an existing board member would have deadlocked the vote (4:2 -> 3:3), while giving Microsoft an extra seat would still have let it pass (4:2 -> 4:3), assuming the Microsoft representative on the board would have voted against removing Altman.
What I'm fairly sure of though is that if the board had been stocked with heavyweights rather than lightweights that this would have been handled in a procedural correct way with a lot more chance that it would stick.
They may not want it for reasons all their own such as to avoid being seen as conflicted or privy to information that they aren't supposed to have just in case they end up competing with OpenAI. Microsoft legal is in the driving seat for things like this not tech or finance.
Our board
"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."
There is also a prominent red notice that seems made for somebody in Seattle...
IMPORTANT
Investing in OpenAI Global, LLC is a high-risk investment.
Investors could lose their capital contribution and not see any return.
It would be wise to view an investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.
Does the story of Sam raising funds from Saudi investors to start an AI chip company have any relevance to what happened, or is that just a nothing-burger? I don't see many people on HN discussing it today.
I mean, a good question is how this would be structured. Would it be a new Sam Altman company, or was it to be part of OpenAI? If so, was it to be for-profit or non-profit?
The thing is starting to look more and more like: become the biggest name in AI by claiming non-profit status, then leverage the brand to go for-profit.
This seems to me like an example of how difficult it is to organize a company around a goal other than making money. As a non-profit, OpenAI was not supposed to be a profit-maximizing enterprise. But how is a board supposed to operate, and set objectives, without a clear goal like profit maximization? Usually a board represents the owners of the business and their interests. The OpenAI board does not represent the owners because there are no owners. So the board is just 6 people and their opinions. Hard to see how that can work.
Corruption. Seven guys in a room may genuinely think one way. But change that picture to seven guys in a room alongside $10 billion, and some, if not most, will suddenly start thinking in an entirely different way, even without needing to be 'persuaded.' That sums up politics and many other issues in society as well. People themselves don't even know who they are until they have the freedom and resources to be whatever they want to be.
The difficulty, in this case, doesn't seem to lie in the chosen objective. Rather, the investors, ex-CEO, and many, many employees don't believe in the non-profit objectives. They want money and power. So, it seems like the difficulty is building an organization where people willingly reject money and power when presented with the option.
For a profit company, you buy shares and elect the board, right? If they turn the company into something nobody wants, the shares lose value, and maybe someone picks up the shares cheap and turns it around. So there's a feedback cycle. The profit motive is almost incidental.
But here do the current board members just appoint the next board members and grow/shrink the total number?
You'd need to have access to the various legal docs of the OpenAI non-profit to be able to answer that question. But there was a move in the works to expand the board after a recent resignation. This is why - normally - the number of directors on the board is specified in those documents and a quorum of a minimum number of people is required to be able to make certain decisions, especially ones with potentially far reaching consequences.
Being a board member is something you normally take quite seriously. I've been asked a couple of times but didn't see myself as qualified to fulfill that role in a responsible manner. Board members are free to seek outside feedback, but they're supposed to be wise enough to know their own limits, because they have some residual liability for any mistakes they make.
Depending on where you live, you will open yourself up to at least the consequences of your own actions (negligence, errors of judgment) and possibly even to the errors of other board members, because you are not only there to oversee the company, you also oversee the other board members.

That's why on-the-spot board resignations are usually a pretty bad sign unless they are for health or other urgent personal reasons. They are a very strong signal that a board member feels they have not been able to convince their colleagues that the board has strayed from the straight and narrow path, and that the others' choices exceed their own thresholds for ethics or liability (or both...).

And that in turn is one of the reasons why a board would normally be very upset if they feel they have not been given all the information they need to do their job, which was the very first line the board trotted out as to why Altman was let go. But even then, they should have built their case rather than just take a snap poll at a point in time when they had a quorum to get rid of him, because it seems that that, and not Altman's behavior (which as far as I can see has been fairly consistent from day #1), was the real reason they did what they did. On the original board (9 people) the four didn't have the votes, but on the shrunken board (6) they did.
The main question at this point should be whether Sam's model or Ilya's model is more likely to succeed in the primary goal of the teams, which is (and should be): how do we stay on the leading edge long enough to figure out safety. All the adults in the room want safety [1], it's just how to really get it.
There are very wealthy competitors out there, any of which could end up beating OpenAI if they get half an edge. If you don't beat them, you don't get to figure out safety.
If Sam starts another company, you know deep in your soul he'll have all the backing he could ever dream of. Everyone who can sign a check is dying to get in on this. All the smart talent in the world would love to be employee number 1 through 1000. He's figured that you need the money if you want to stay in the game and he's world-class at making that happen. If OpenAI has all the purity of conviction and never gets another dollar because it all flows to SamCo...do they still win and figure out safety?
(Plus get some profits, attract staff who want to make bank, get full control of the board anyway, etc)
[1] We're nowhere near GPT controlling nukes, elections or the bond market, or desiring to. We need at least a couple massive algo changes before things take off. So some speed at this point isn't thaaat dangerous.
The alternative is to work for poor idealists, which isn’t generally very good for putting bread on the table.
Something tells me most people are going to go for fucktons of money and working on what they think is interesting. Even if it makes other people even more money.
I really hope Altman doesn't return to his role. It is nice to see some people showing some spine and standing up to business interests. "Open"AI is and was a lie for the longest time.
IME that's been common or the norm in plenty of circles for more than 25 years (so probably longer). It became less common when people started phone-posting and when phones started auto-capitalizing by default.
To be fair, we're on a geek website. Not many have default settings on any of their devices.
I think the above is a consequence of "I can afford to write in all lowercase / do unconventional thing X". And by "afford" I mean something more like "I don't have bosses, nor do I have to please anyone by doing conventional things".
There was an article or a discussion here a while ago about how, in an organizational pyramid, the people at the bottom usually write as normally/nicely as possible, while going upwards people can afford to write however they like, including being super rude if they so choose.
it is inconsistent in language usage to write differently than to speak. we don’t speak big sounds, that’s why we don't write them either. and: doesn’t one say the same thing with one alphabet as with two alphabets? why does one merge two alphabets of completely different characters into one word or sentence and thereby make the written image inharmonic? either large or small. the large alphabet is illegible in the typesetting. therefore the small alphabet. and: if we think of the typewriter, the limitation to lower case characters means great relief and is time saving. and if we think further, it would be simplified by switching off upper case characters.
The choice at this moment is simple: either Altman goes away with some employees and starts a new venture while Ilya's side gets the rest of the employees plus the whole OpenAI IP, resources, etc., or Ilya goes away with some employees and starts a new venture while Altman gets the rest of the employees plus the whole OpenAI IP, resources, etc. Right now Ilya's side has OpenAI; it would be strange if they just gave it away to Altman.
> With his firing from OpenAI, Altman quickly got the upper hand in terms of public messaging. The board didn’t use a communications or law firm in its dealings, people familiar with the board said, expecting that the OpenAI team would help them. But Altman had loyalty from investors and employees.
It was, but still: perception matters and Sam has - contrary to the board - not made many wrong moves since this whole thing started. Note that this is all extremely rapidly developing and it's very hard to do all of this on the fly when you don't expect to even be in this situation and not miss a beat.
The board however has only made things ever more murky with their vague and fig-leaf like defenses. They appear utterly unprepared to deal with the aftermath of their own action, which they and only they knew was coming.
If they had said "Sam has helped OpenAI tremendously over the years, and we thank him for his invaluable contributions. Going forward, he will be using his skill set to pursue other opportunities, while OpenAI continues its mission to advance AI for the betterment of humanity."
That would at least have made it seem like they knew what they were doing.
But investors would have still beaten them over the head.
Yes, that's true. But that's exactly the kind of thing that would have at least given it a semblance of being an orderly decision and it would have stopped a ton of negative press in its tracks.
It's like a dance, you can't just go do the Polka if everybody expects a Waltz, that's going to attract a ton of attention. They should have gotten out in front of any potential blowback to ensure that even if their decision was by the book (which for their sake I hope it was) that it also didn't ruffle the scales of very large dragons.
At the risk of being off topic (well, definitely):
> in a post on X, formerly Twitter
It keeps surprising me that someone can so completely torpedo their brand that news organisations feel compelled to keep referring to the old name so people have some idea of what they're talking about.
The last time I recall the press using similar language was when the subject was intentionally damaging their own brand due to a contract grievance: "[glyph] (the artist formerly known as Prince)"
Well, and it's an easy hedge against, if or when Elon goes bankrupt enough that he has to sell it, the new owner immediately renaming the service back to Twitter.
Why are people suddenly jacking Altman off so hard? What has this non-technical dude done other than trying to corner the AI market for himself and implementing insane bullshit like worldcoin for him to deserve such weird devotion?
So the dude with a billion connections managed to use them to get cash for the startup that deals with the most insanely overhyped piece of technology to have ever been created.
No mention of the people actually responsible for any of this? Y'know, the scientists and engineers who actually had to do something to create the crazy technology he's taking credit for? Instead the noteworthy dude is the generic MBA C-suite type who managed not to screw the pooch when given a team of the brightest minds out there?
> the most insanely overhyped piece of technology to have ever been created.
Copying and pasting code from ChatGPT, I created a functional iOS app today based on my design. I have never before written a mobile app, any Swift code, or much of any code aside from Power Apps, in at least 15 years.
I didn't comment on whether the tech is good or bad, I just said it's overhyped (and you more or less proved my point).
And again, what exactly has Sam himself done to bring about the tech? Without Ilya and the rest of the engineers and researchers you wouldn't have been able to copy/paste the code, why exactly is Sam the one that gets the credit here?
I was responding only to the part I quoted, which I read as criticism of the tech.
I am not necessarily an Altman hype man. As an obvious outsider, my best bet as to why he came back so strong is that apparently many researchers (employees) said they would leave as well. I can't read people's minds, but I can infer a bit based on human financial interest.
The employees with equity were very close to a liquidity event at a valuation of $86B. That is likely life changing money for many, and this whole Altman getting fired mess put that life changing money on hold.
I wonder if his ouster had been done in a more sane/stable way, if things could have kept chugging along without him.
As far as the average HN opinion, I donno, I have seen many upvoted comments saying... yeah, he's just the CEO.
And I think you overestimate how much people care about CEO/board politicking. Will OpenAI's APIs still exist? Yes? I guarantee you no one other than other CEOs hoping to ride on Sam's coattails will give a shit then
Do yourself a favor and integrate GPT4 into your workflow.
I use it probably 20 times a day at this point.
example:
"I ran performance tests on two systems, here's the results of system 1, and heres the results of system 2. Summarize the results, and build a markdown table containing x,y,z rows."
"extract the reusable functions out of this bash script"
"write me a cfssl command to generate a intermediate CA"
"What is the regex for _____"
"Here are my accomplishments over the last 6 months, summarize them into a 1 page performance report."
etc etc etc
If you're not using GPT4 or some LLM as part of your daily flow you're working too hard.
Get GPT4All (https://gpt4all.io), log into OpenAI, drop $20 on your account, get an API key, and start using GPT4.
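In case it helps, here's roughly what that looks like in code. A minimal sketch assuming the official openai Python SDK (v1-style client) and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders, so adjust to whatever your account has access to.

    # Minimal sketch: one GPT-4 call via the OpenAI API
    # (assumes `pip install openai` and OPENAI_API_KEY set in the environment).
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model your key can access
        messages=[
            {"role": "system", "content": "You are a concise assistant for dev chores."},
            {"role": "user", "content": "Write me a cfssl command to generate an intermediate CA."},
        ],
    )

    print(response.choices[0].message.content)

Once that works, it's easy to wrap prompts like the ones above into small shell aliases or editor commands.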
I had a subscription but found it useless for any actual difficult problem that you can't just find elsewhere online with a bit more effort, and I don't tend to struggle with trivial shit. Also not interested in being a "prompt engineer".
But still, irrelevant, I didn't comment on whether it's good or bad, I just said it's overhyped, and comments like these never fail to come up when someone says anything even slightly negative about the tech.
I always figured it’s because HN is run by Y Combinator, a technology startup accelerator, so inevitably many of the folks here will be more pro entrepreneurship than non-profit.
I've always viewed it as more of a hackers' hangout, with the unfortunate side effect of the YC sponsorship being that you sometimes have to put up with some dumb corporate news here and there, but it seems to have been going in a slightly different direction for a while now (or, more likely, I was just always wrong on this one :P).
There is an enormous YC bias here. ShowHN posts from new YC startups are upvoted instantly whilst most from outside are ignored - the ones you see are the exception. The HN rules state you are not meant to play the game of asking people to boost your post but it seems like time and time again they co-ordinate, or allow it, for the insiders.
I think Show HN posts from YC startups are actually boosted by the system. Same with the hiring posts for YC startups. So it's not people, but explicitly coded in.
Why do you say "non-technical"? That's clearly wrong. Just because someone is no longer hands on keys doesn't make them non-technical. Are Larry and Sergey non-technical? The Collison brothers? None of them are hands on keys anymore.
The answer to the question is - even people who don't like him realize he's a smart guy and this was a dumb move, and it was done in an amateurish way by a board out of their depth.
No longer? Sam has never substantially dealt with the underlying code for OpenAI, compared to say Demis Hassabis. He resembles Brian Chesky & Joe Gebbia more than Nathan Blecharczyk.
That's the part that gets me: the same people who would react to news of "13,000 employees laid off" as a brilliant strategic move are the ones acting as if Sammy boy getting sacked here is the biggest affront ever committed.
it's probably the main reason why CEOs get paid so much: They aren't just workers anymore, they are public figures with fans and followers that support them beyond their objective work value
The fact that you think Altman came around yesterday just shows that you live under a rock. The fact that you think “non-technical” people don’t contribute meaningful value is also worrisome.
Historically, whenever there was an inflection point for a new technology, you'd have lots of people working on the same problem, and whoever cracks the problem first is put on a pedestal. Revisionist history is then built around that one person, asserting that if it weren't for them, said technology would never have materialized.
We're just seeing a variation of that playing out live. There are multiple teams working on AI, ChatGPT got "there" first and now we have a single heroic figure to worship. Personality cults seem to be a part of the quintessential human condition.
People love worshipping idols and saints, even when the demographic is largely irreligious.
Plus, the tech scene is extremely prone to hopping on trends and then taking it way too far. If you want some real cringe, check out @varun_mathur's long Twitter post from Nov 18th.
Although at its core, firing Altman under current circumstances was still a poorly thought-out decision which evidently caused the event itself to become a major centre of attention.
Many are speculating that Sam Altman could just move on and create another OpenAI 2.0 because he could easily attract talent and investors.
What this misses is all the regulatory capture that he’s been campaigning for. All the platforms have now closed their gardens. Authors and artists are much more vigilant about copyright etc. So it’s now a totally different game compared to 3 years ago because the data is not just there up for grabs anymore.
I've never considered this angle, but god it'd be hilarious if this ended up being the case, the dude ruining everything because of his own greed ultimately fucking himself over because of it.
Here's to hoping there's still some poetic irony left to dish out in the world.
"Why Sam Altman (who can have the funding, talent, and the vision OpenAI has right now) can't just create OpenAI 2.0?" is an amazing question that also answers whats OpenAI's moat.
People speculated it was the funding, or attracting talent or having "access". Turns out it was none of them (obviously they all have a part, but having all three doesn't mean you can best OpenAI which gives you the fundemental reason why it is so hard to compete with them).
Is the logic here that training a base model isn’t as easy or even possible in the same way that OpenAI did in the past, and that what they have in a trained model is valuable in that even with all the code and experience it couldn’t be reproduced today with new restrictions?
The data has to come from somewhere, and all of the outlets that were used to train ChatGPT, stable diffusion, etc. have since been locked down. Any new company that Sam Altman makes in the AI space won't be competing just on merits of talent and product, they will also need to pay for and negotiate access to data.
I'd actually expect this to get far worse going forward, now that other organizations have an idea of how valuable their data is. It's also trivial to justify locking it down under the guise of protecting people, privacy, etc.
Do BoringAI or LibreAI and it's just a fork but you ripped out all the old, bad stuff. (This joke doesn't really work because OpenAI is not really old enough for legacy cruft and isn't actually open enough to just be forked)
I don't think getting training data is that hard yet; the biggest platforms that locked down their APIs still use them for their mobile apps, which can easily be reverse engineered to find keys or undocumented endpoints (or, in the case of Reddit, an entirely different internal API with fewer limits and a lot more info leaks...).
Assuming the Reddit app does not use certificate pinning, you can use your computer to provide internet to your phone and then use an app like Charles Proxy to inspect requests being made from an app. Pretty easy to reverse engineer the API.
If the app does use certificate pinning, then you can use an Android phone and a modified app that removes the logic that enforces certificate pinning. This is more involved but also not impossible.
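To make that concrete: the same inspection can also be scripted rather than done through a GUI. A minimal sketch using mitmproxy instead of Charles (my substitution, not something mentioned above); it only sees traffic you route through your own proxy with the mitmproxy CA certificate installed on the device, and it won't get past certificate pinning.

    # Minimal mitmproxy addon: log the method and URL of every request the app makes.
    # Run with: mitmdump -s log_requests.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # pretty_url includes scheme, host and path, which is usually enough
        # to spot undocumented endpoints.
        print(flow.request.method, flow.request.pretty_url)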
That does not sound like the proper way to do an openAI 2.0. If Reddit ever hears that's how an AI company scraped them, they'll get sued for fun and profits.
The point is that the data is easily accessible. If you wanted to get your hands on the data while simultaneously keeping them clean, contract with a Russian contracting company to give you a data dump. You don't need to know how they got it.
They make a point of not directly asking for the crime when they do that. Just increasing pressure on subcontractors in ways that lead to cutting corners, including legal ones.
It is harder to prove to a "should have known" standard compared to say buying stolen speakers from the back of a truck for 20% of the list price.
There’s an implicit assumption in your argument that you’re going to directly ask for a crime to be committed. Why are you assuming that? You’ll go to a contractor and say “we want Reddit data.” Anyone with even mild technical competence can figure out how to get it.
LLMs know the contents of books because those books are analyzed, reviewed, and spoken about everywhere. Pick some obscure book that doesn't show up on any social media and ask about its contents; GPT won't have a clue.
Did you read the article? (This one misstates the case, but look at the one linked about the lawsuit.) This is a lawsuit. Nothing has been proven. The burden of proof is on you.
It's essentially impossible to prove in court that training data was obtained or used improperly unless you go and tell on yourself. And even then it requires you to actually make someone with a lot of money mad, or to not have enough money yourself. Certainly microsoft would have already caught lots of flak for training their models on every github repo, instead they got a minor paddling from the public eye that went away after not much time had passed.
Text really doesn't take up that much space, and in addition it compresses pretty well.
The entire English language Wikipedia is only around 60GB in a format that can be readily searched and randomly accessed (ZIM), for example: https://kiwix.org/
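For a rough sense of scale, here's a minimal sketch (my own illustration, not from the comment above): point it at any plain-text file, such as a slice of a Wikipedia dump, and it reports the compression ratio using a codec from the standard library. English prose typically shrinks several-fold.

    # Compress a plain-text file with LZMA and report the ratio.
    import lzma
    import sys

    with open(sys.argv[1], "rb") as f:
        raw = f.read()

    compressed = lzma.compress(raw)
    print(f"{len(raw)} bytes -> {len(compressed)} bytes "
          f"(~{len(raw) / max(len(compressed), 1):.1f}x smaller)")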
Does Kiwix actually work? I see people hyping it here, but I could never get it to actually, y'know, download the file and display Wikipedia on my phone.
Kiwix worked for me. IIRC there may be difficulties opening an archive that was downloaded outside of the mobile app, but archives downloaded in-app were fine.
For the mobile app I used one of the smaller Wikipedia subsets, since I didn't want to take up too much space on my phone. The full offline Wikipedia download is saved to my laptop.
You are assuming he wouldn't steal it from OpenAI. He could have a low-level employee steal it and manage to keep it a secret until AGI is born, then take over the world.
This is a pretty wild comment. That's a very safe assumption, and no low-level employee will do Sam's bidding in an illegal enterprise. Keeping it a secret isn't going to work either, and whether or not AGI is 'born' (and who will bear it) is an open question, to which I hope the answer is 'not for a while', because we haven't even figured out how to get humans to cooperate, which I think should be a prerequisite.
> no low-level employee will do Sam's bidding in an illegal enterprise
Many people have betrayed their country to foreign governments in exchange for mere thousands of dollars. It is never safe to rule out the willingness of employees to engage in corporate espionage, even in exchange for truly pitiful rewards. It would be a stupid idea, but that doesn't mean it won't happen.
What has been crawled stays crawled, and there are plenty of copies of the token sets that can be used to retrain a model. For a bit of money you can probably get any set you really want ("a bit" meaning billions, but that's pocket change for anything that is going to go head to head with OpenAI).
OpenAI has enough momentum and has built enough of a moat that Sam Altman cannot replicate it. If he actually can replicate it and overtake OpenAI, then the business itself has no legs, as it will be easily commoditized and any moat nullified in no time.
I'm building a magazine encyclopedia and I would estimate that 99.9% of all magazines ever published are not available electronically. And that the content in magazines probably exceeds the content in books by an order of magnitude.
I know this is getting off-topic, but as a non-native speaker, I'm interested in hearing how a third data point would be needed to judge whether things differ "by an order of magnitude".
I was under the impression that "an order of magnitude" meant "one more digit", meaning very roughly a 10x difference. "a >= 10*b" can be determined without the need of a third data point. Is there some other meaning to the phrase I haven't come across?
Not the original poster, but you have it more or less correct. An order of magnitude is 10X. The plural, "orders of magnitude," just refers to "at least 100X." Colloquially, orders of magnitude just means "significantly more/less."
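To make the arithmetic concrete, a tiny illustration (the counts are made up, purely for the example):

    # Order-of-magnitude difference = the power of ten separating two quantities.
    import math

    books, magazines = 1_000_000, 10_000_000  # hypothetical counts
    orders = math.log10(magazines / books)
    print(orders)  # 1.0 -> magazines exceed books by one order of magnitude (~10x)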
The exact reasons for Ilya firing Sam Altman are something I, and half a million other folks, am more interested in than the question of whether he can replicate OpenAI or overtake it via a new venture. Any takes in this thread?