>You actually think the most esteemed AI researchers will have trouble finding funding after this?
Plenty of them have actually left already ...
Ilya is good, but he is one of many, and by many I mean there are hundreds of equally capable researchers, many of them with more flexible morals. Note: I'm being generous to Ilya and taking him at face value as the self-proclaimed AI messiah keeping us from the destruction of the world.
Thanks, Ilya, but money is money, and investors would definitely prefer to put their money in a for-profit rather than a non-profit. This is even more true after this whole fiasco.
> passionate researchers are ready to go find their next bold research opportunities elsewhere
Not supported by Ilya agreeing with the board to fire Sam Altman.
I also think you'll struggle to find a majority of people who think AI research's "ceiling is proving to be lower than originally hoped", what with GPT-4o, Sora, and GPT-5 all coming, and it's only been 1.5 years since ChatGPT.
> This overselling soaked up the funding with empty promises and killed more basic long-term research. Let's hope serious researchers find a way to get their research funded again, despite the AI shills.
I think this is uncalled for. What makes those who worked on some of these AI problems any different from the founders of an unsuccessful startup? Both have a belief that a particular idea/plan will work and both seek to convince others to join/fund them.
Nobody really knew that many AI problems would be so tough. The people who worked on them expected success. Only through their failures did we learn for sure that the problems were a lot harder than we thought.
>I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.
Why, then, did Ilya sign the letter demanding that the board resign or the staff would go to Microsoft?
> In short, instead of focusing on meaningfully advancing AI tech in a scientifically sound way, some board members sound like they're engaging in weird spiritual claims.
This has to be the dumbest hit piece I’ve ever read. Ilya has contributed more to ML in a year than most researchers will in a lifetime.
> Did he really fire Sam over "AI safety" concerns? How is that remotely rational.
It's only not rational if, unlike Sutskever, Hinton, and Bengio, you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular takes it as a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.
(like LeCun, I am not a doomer; but I am also not Hinton to know any better)
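For reference, the taxonomy in the paper linked above crosses a performance level with a narrow-vs-general axis, which is how AlphaGo can be "superhuman" yet narrow. A minimal Python sketch, paraphrasing the paper from memory (not the authors' wording or code):

    # Rough paraphrase of the "Levels of AGI" taxonomy (Morris et al., 2023).
    # Level names follow the paper; the two example classifications below
    # are the ones the authors give, as I recall them.
    LEVELS = {
        0: "No AI",
        1: "Emerging (comparable to an unskilled human)",
        2: "Competent (at least 50th percentile of skilled adults)",
        3: "Expert (at least 90th percentile)",
        4: "Virtuoso (at least 99th percentile)",
        5: "Superhuman (outperforms all humans)",
    }

    # (performance_level, is_general): performance crossed with generality
    examples = {
        "AlphaGo": (5, False),  # Superhuman Narrow AI -- the "narrow ASI" claim
        "ChatGPT": (1, True),   # rated "Emerging AGI" in the paper
    }

    for name, (level, general) in examples.items():
        scope = "General" if general else "Narrow"
        print(f"{name}: Level {level} {scope} ({LEVELS[level]})")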
> Do you see any VC investing in ethics in ML startups?
Perhaps you should pay less attention to VCs and more attention to governments and academic institutions, which, in Canada for example, are investing tens of millions of dollars into AI ethics/FATE/AI-for-good research.
Sometimes, the point isn't just to make money, it's to actually improve humanity.
> Not sure why the AI community has a weird obsession with being non-profit (or not)
A lot of times, I find the obsession isn’t a desire for collaboration or anything noble, it’s as simple as this: people want to use the end products for free or very cheaply.
"...an international group of researchers has secured $1.6 billion..."
An international group of researchers has scammed its way to $1.6 billion. And that's EU taxpayers' money.
Do you really think that anyone involved in deciding to allow that kind of spending has any idea of where we are today with AI? Did any of them read "On Intelligence" (whose author knows more than a thing or two about AI)?
I'm sure not. And I'm not happy my tax dollars are funding this.
I'm all for research, and for funding going to research.
But this one is going to be a gigantic waste that leads to nothing. And in ten years people will apologize and explain why "x is not AI", why "y is not AI", and why it was a gigantic waste.
On a positive side note, $1.6 bn for the duration of this project is peanuts compared to the $140 bn the EU spends yearly ; )
> this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders)
Ilya believes transformers can be enough to achieve superintelligence (if inefficiently). He is concerned that companies like OpenAI are going to succeed at doing it without investing in safety, and they're going to unleash a demon in the process.
I don't really believe either of those things. I find arguments that autoregressive approaches lack certain critical features [1] to be compelling. But if there's a bunch of investors caught up in the hype machine ready to dump money on your favorite pet concept, and you have a high visibility position in one of the companies at the front of the hype machine, wouldn't you want to accept that money to work relatively unconstrained on that problem?
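(Aside, for anyone outside ML: "autoregressive" here just means generating one token at a time, each conditioned on everything emitted so far. A toy Python sketch, with a made-up stand-in model, illustrating the loop the critique in [1] is aimed at:)

    import random

    def toy_model(tokens):
        # Stand-in for a real LLM: returns a distribution over a tiny
        # vocabulary given the tokens generated so far (uniform here).
        vocab = ["the", "cat", "sat", "<eos>"]
        return {tok: 1.0 / len(vocab) for tok in vocab}

    def generate(prompt, max_tokens=20):
        tokens = list(prompt)
        for _ in range(max_tokens):
            probs = toy_model(tokens)
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]
            if nxt == "<eos>":
                break
            # Each step conditions on all prior output; nothing emitted is
            # ever revised and there is no lookahead -- the missing features
            # critics of autoregression point at.
            tokens.append(nxt)
        return tokens

    print(generate(["the"]))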
My little pet idea is open source machines that take in veggies and rice and beans on one side and spit out hot healthy meals on the other side, as a form of mutual aid to offer payment optional meals in cities, like an automated form of the work the Sikhs do [2]. If someone wanted to pay me loads of money to do so, I'd have a lot to say about how revolutionary it is going to be.
EDIT: To be clear, I’m not saying it’s a fool’s errand. Current approaches to AI have economic value of some sort. Even if we don’t see AGI any time soon, there’s money to be made. Ilya clearly knows a lot about how these systems are built. It seems worth going independent to try his own approach, and maybe someone can turn a profit off this work even without AGI. Though this is not without tradeoffs, and reasonable people can disagree on the value of additional investment in this space.
> If you really want free money as an AI startup just say you're going to solve safe / friendly AI. People throw money at that without even showing anything. Hugging Face actually recently put out a job for someone who can work in "bias mitigation": https://nitter.net/mmitchell_ai/status/1520483233132990464
This is the hardest part about going from fintech to AI/ML for me: it all seems like vaporware, and you really don't know where to apply the skillset you built across several industries. My focus is on AI/ML-based solutions for supply chain and logistics, because that space is already broken and has needed a rework for the last two decades.
But I'm realizing that there is almost no way to raise these kinds of funds and still have a viable business model, beyond just setting arbitrary benchmarks and re-tooling your term sheet to reflect the 'new way' things are done in this space now.
I'm still wondering if this Twitter thread [0] is just a jaded AI guy ranting, or if it's as prophetic as my own practically identical projection about enterprise 'blockchains' turned out to be.
My dilemma is that Tesla and Mercedes are trying to solve FSD while, at the same time, we have things like Kiwibot, which relies on people in developing countries piloting the robots over Wi-Fi for sub-standard wages ($2-3/hr) to deliver food to affluent college kids on campuses. We already have DoorDash and all of its incarnations, bloated with VC money; how can this be a thing?
She's the leading academic on the issue of AI safety. It's really ridiculous people don't even know her name and say random things about her, not realizing she's a rock star in her field.
Her leaving the board is a tremendous loss for OpenAI, about as terrible as Ilya leaving the board. These are two giants of AI currently.
> If Ilya really had deep concerns, he should have quit and built his own AGI dedicated research company.
...
You should look up some history here.
Exactly what you say has already happened, and OpenAI is the dedicated research company you are referring to.
He originally left Google Brain, I believe.
> I’m of the belief that alignment of AGI is impossible.
I don't think most people in this space are operating based on beliefs. If there is even a 10% chance that alignment is possible, it is probably still worth pursuing.
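The arithmetic behind that intuition is plain expected value. A toy version, with every number invented purely for illustration:

    # Expected value of pursuing alignment research; all numbers invented.
    p_possible   = 0.10    # the "even a 10% chance" above
    value_if_won = 1_000   # utility if alignment succeeds, arbitrary units
    effort_cost  = 10      # cost of pursuing it, same units

    expected_gain = p_possible * value_if_won - effort_cost
    print(expected_gain)   # 90.0 > 0: worth pursuing under these assumptions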
>aiming to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services.
Who is going to fund such a venture on blind faith alone? Especially if you believe in the scaling-hypothesis type of AI research, where you spend billions on compute, this seems bound to fail once the AI hype dies down and raising money becomes a bit harder.
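For a sense of what "billions on compute" means here: a common rule of thumb for dense transformer training cost is C ≈ 6·N·D FLOPs for N parameters and D training tokens. A rough Python sketch with numbers that are my illustrative guesses, not any lab's actuals:

    # Rough training-cost estimate via the C ~= 6 * N * D rule of thumb.
    N = 1e12            # parameters: a hypothetical 1T-parameter model
    D = 20 * N          # tokens: Chinchilla-style ~20 tokens per parameter
    flops = 6 * N * D   # ~1.2e26 FLOPs for one training run

    gpu_flops = 5e14             # assumed sustained FLOP/s per accelerator
    dollars_per_gpu_hour = 2.0   # assumed rental price

    gpu_hours = flops / gpu_flops / 3600
    print(f"~{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * dollars_per_gpu_hour:,.0f}")
    # ~67 million GPU-hours, ~$133M -- for a single run, before retries,
    # which is why this funding dries up fast if the hype does.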