The only scenario in which Anthropic takes on OpenAI (et al.) is if someone involved possesses some sacred knowledge about how to build an AGI system radically off the path the current market is charging down (i.e., ever-larger GPU farms and transformer-style models).
I suspect this is not the case. The same hustlers who brought you crypto scams didn't just disappear into the ether. All of that energy has to eventually go somewhere.
If you strip out the AGI hype, this just sounds like OpenAI moving to monetize their tech. That makes sense for them, but probably not for the philanthropists who originally backed them.
Sadly for them, AGI is metaphysically impossible - this will be realized eventually but a lot of waste and possibly harm will happen first.
We are not just super sophisticated machines, so the fact that we can think doesn’t tell us anything about what’s possible for machines. But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.
I've been watching the Y Combinator YouTube channel recently, and people in the room when OpenAI was founded were saying "… until these big companies solve AGI, it's going to be their first, second, and third goal" — not that OpenAI or Anthropic would succeed, but that this is their goal.
Before the industrial revolution, land was paramount; after, capital. AGI is likely to be as significant a change, and make money (as we understand it today) less important in a similar way.
If OpenAI or another company theoretically ever managed to actually create AGI, I could see the argument for forcibly open sourcing it for the sake of humanity. But we're nowhere near there yet.
I'm a firm believer that greed trumps every other possible explanation for why things are the way they are (not that I like it, but true is true). So I'm not surprised in the least that most of OpenAI's (and everyone else's) engineers follow the most profitable path.
It's a delusion to think that the creation of an AGI is something that could be prevented, or that it could be prevented so easily (by firing @sama, lol). The box is already open, and sooner or later somebody out in the world will witness the emergence of a superintelligence in front of their eyes. For all practical purposes, this needs to be accepted as a fact, so that appropriate measures to deal with it can be developed as well.
So, being a bit generous to the OpenAI people: if AGI is going to happen anyway, they might as well profit off it while also trying to steer things towards what they think is good. After all, they're presumably "the good guys".
If it's inevitable, better to be first than late. Personally, I don't think there will be AGI anytime soon, or at all; and if there is, OpenAI is not the one that will achieve it. And if they do, they will not be the one controlling it.
Anyway, good luck. I am certain the pursuit of profitable monetary value will create AGI, so the more people there are who want to make more money, the faster we will all get to the singularity. This is documented by Marc Andreessen in his manifesto, so I know it is the right path.
I highly doubt it. OpenAI, Google and Meta are not the only ones who can implement these systems. The race for AGI is one for power and power is survival.
It seems reasonable to say that AGI will take a ton of resources. You'll need investors for power, GPUs, researchers, data, and the list goes on. It's a lot easier to get there with viable commercial products than handouts.
I'd be willing to bet that between Sam's approach and the theorized approach of the OpenAI board we're discussing, Sam's approach has a higher chance of success.
We don't know how AGI will shake out or, especially, what it will even look like. What does out-competing AGI even mean? One of the most positive outcomes I can think of is an AGI broadly aligned with human interests, but doing so much that no one knows wtf is going on. A BCI (brain-computer interface) could bridge that understanding gap, at least keeping humans in the know and capable of steering the ship. If the competing AGI wants to kill us, then we're probably hosed anyway.
That's why I said we need open source and open hardware alternatives. But there's no world where that kind of tech comes first. Companies are going to develop the cutting edge first, when has that not been the case?
I'm really, really happy to see that there's healthy competition in the space. Especially, ironically, from Meta, which has been so great on the open-source front for quite a while.
I don't think AGI is on the table at all. But if it does get created, it doesn't matter whether or not it's created by hobbyists. It would be so valuable that it will end up owned and controlled by one of the big guys in the end, regardless.
IMO, things are looking like somebody will pull AGI out of their garage once computing gets cheap enough, and all the focus on those monstrosities built on clearly dead-end paradigms will only leave us unable to react to the real thing.
> It's a lot harder to see AGI as being possible in the next few decades (the current state of the art is still a sensationalized party trick)
That's precisely the mindset that drives climate change denial, applied to technological developments. I can promise you that OpenAI isn't valued in the tens of billions because they produce "party tricks". People with money and influence have clearly already recognized what this technology is capable of, not in some unspecified future but today. They're tightly controlling access to GPT-3 because they're worried that it could be used to manipulate elections and drive social unrest by mass-producing messages that promote specific ideologies. That's reality today. The damage that could be wrought by the most advanced AIs 15-20 years from now is unimaginable, and could easily destroy humanity even if they aren't self-aware.
But it sounds like you and Ben are acknowledging that there is a potential problem here, which we should spend resources trying to prevent.
No one who has put any serious thought into AGI denies the possibility of an existential threat from it. The fact is though that we have absolutely no clue what chain of events would lead to that. In fact we have (almost) no clue what chain of events will lead to AGI in the first place.
The best way to know what will come from AGI is to start building it and see where the controls and capabilities are going. I personally think the AGI community needs to put more effort into a roadmap, which is done at every AGI conference to some degree. The problem, though, is that it's just too early, and there just aren't enough researchers and funds to have any inkling about what the best approaches to AGI are going to be. I could go all day and caveat everything, but the bottom line is: just start building, because we are so far from it that absolutely anything helps.
"once we have AGI systems that are massively smarter than people, they are going to do what they want to do, or what other powerful forces in the universe want them to do, but not necessarily what we want them to do." That's not the most reassuring comment ever.
Right. My personal proposition is:
AGI:Humans::Humans:Ants
Ben seems to share this approach. So to think we could control it after its fifth iteration or so is silly. We also share the transhumanist philosophy, which I think is still pretty fringe even within the tech community, so I try not to go into that too much because it's just a whole ball of wax. I do think, though, that we need to have some of these conversations about what comes next if human work is not necessary, and later about what we do if humanity itself is not at the top of the intelligence chain.
They will spend the next ten years drip feeding the latest model and no one will be any wiser.
As I understand it, the idea of AGI is that it will be able to “invent” stuff, that it will be able to piece together information to create something new. Is this not what they are basically selling?
And if so, in what world do they see this ever being made available to the public? There is literally zero percent chance AGI will ever be given to the public. The CIA/FBI will stash it in a black box and use it to continue exploiting the world, with a few “here and there” releases for society.
I also don’t want to hedge and say I could be wrong, because this is how I understand “AGI” based on what these people themselves market it as. And in that case… OpenAI will keep taking it nice and slow, as long as no model gets close to them by some pre-determined margin.
There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.
I'm in total agreement about the potential of growing AGI out of these methods, but there will be bottlenecks well before the gods of the singularity come knocking.