AGI has been here since AlphaGo. Humans were just bad at understanding the nature of intelligence. What is happening now, and will continue for a few years, is the luddites rewriting the definition of AGI. They will keep moving the goalposts.
> rewriting the definition of AGI by the luddites.
The Luddites were a group of working-class English textile workers whose livelihoods were destroyed by mechanized looms and knitting frames. They were eventually suppressed with government violence.
If your comparison is accurate, they really should be worried!
If you put a Go board in front of a human who was raised by wolves in the jungle, would he play it? Or would you have to teach him how to play? Would he not be generally intelligent just because he didn't know how to play Go with zero training?
> *also to be clear I don’t mean to say achieving agi with gpt5 is a consensus belief within openai, but non zero people there believe it will get there.
Uh, that's burying the lede to say the least. "Non-zero" could mean one person.
Yeah, I think he's just making this up and this is his opinion.
Later down he says "someone who didn't", alongside the "(source: I made it up)" reaction image.
And I'm thinking: if you have to pointedly say that it's not made up, it's probably made up.
For what it's worth, there's an interview slightly farther down in which Altman says that he does not think GPT-5 will achieve AGI, and that likely no GPT will, because he thinks an AGI needs some degree of autonomy and "want".
Either I don't understand what AGI is, or OpenAI doesn't. Given the way GPT-3, 4, 5, or even a hypothetical GPT-1000 works, AGI can't be achieved this way. How could we say that we've got AGI when the "intelligence" is a synthesis of existing knowledge, produced without ever understanding what any of it means?
If real AGI is to be achieved, it can only be through a completely different paradigm (don't ask me which), and definitely not through machine learning.
My opinion is that this whole machine-learning push will make achieving AGI even harder, since most resources have shifted their focus to it instead of trying to achieve real artificial intelligence (if that's even possible).
You may have an opinion, but can you justify why you don't believe ChatGPT will achieve AGI?
Existing scaling laws show that perplexity (a measure of how poorly a model predicts the next token in text; lower is better) can be driven down by increasing model size and adding more data.
Why would a model that is better than us at predicting what words would occur next in arbitrary text not be an AGI?
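(For concreteness: perplexity is just the exponentiated mean next-token cross-entropy, so "lower perplexity" literally means "better at guessing the next token". A minimal sketch of measuring it, assuming a Hugging-Face-style causal LM and using "gpt2" purely as a stand-in:)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Perplexity = exp(mean cross-entropy of next-token predictions).
    # "gpt2" is only a placeholder; any causal LM works the same way.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The cat sat on the", return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model score each next token; .loss is
        # the mean cross-entropy over the sequence.
        loss = model(ids, labels=ids).loss
    print("perplexity:", torch.exp(loss).item())  # lower is better

The scaling-law results (Kaplan et al., Chinchilla) say this number falls predictably as parameters and training tokens grow; the open question in this exchange is whether that trend line ever passes through "general intelligence".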
Whilst I don’t think that GPT-5 will be AGI, this is very poor reasoning.
“Knowing what it means” isn’t measurable, and lately it seems more and more like a coping mechanism so we can keep feeling special.
There is absolutely nothing currently that can prove that human intelligence isn’t just a very large LLM, or that an LLM can’t model it sufficiently.
Hallucinations might actually be the missing piece: we humans are full of shit and come up with crazy explanations for things all the time.
However, we do have various methods of testing these explanations, which we often call theories and hypotheses, both by running dedicated experiments and by participating in the grander social experiment that is civilization.
"OpenAI" does not believe that it will be AGI. There is an interview farther down in the thread in which Altman states that A) GPT-5 will not be AGI and B) the GPT paradigm likely has some fundamental problems which will not get it to AGI, and that if a GPT becomes AGI it is far off.
Later down, Siqi clarifies several things:
1. That he believes AGI is a continuum, and encompasses most stuff that isn't a narrow system. So what he is ACTUALLY saying is "GPT-5 is not a narrow system, and is near the bottom of the AGI continuum".
2. That this is NOT a consensus belief within OpenAI, but rather one only shared by a "non-zero number". This should be a BIG RED WARNING SIGN, because it likely means that the number is very low and he is puffing it up/twisting words.
3. Seeing as he went on to actively deny that he was making it up, it's likely this is just his own opinion, and maybe that of a few of his friends.
4. He went on to answer the "agency" question by stating that one could attach a goal loop to it (see the sketch below). So a prototype paperclip maximizer. Greeeeeeat.
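(For anyone unfamiliar, a "goal loop" here just means wrapping the model in a plan-act-observe cycle, AutoGPT-style. A hypothetical sketch, where llm() stands in for any completion API; none of this is OpenAI's actual design:)

    # Hypothetical "goal loop": keep prompting the model with a goal
    # plus its own past actions until it declares itself done or runs
    # out of steps.
    def llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real completion API")

    def goal_loop(goal: str, max_steps: int = 10) -> list[str]:
        history: list[str] = []
        for _ in range(max_steps):
            prompt = f"Goal: {goal}\nActions so far: {history}\nNext action:"
            action = llm(prompt).strip()
            if action == "DONE":
                break
            # A real agent would execute the action here (tool call,
            # web request, ...) and append the observation as well.
            history.append(action)
        return history

The paperclip worry is precisely that nothing inside such a loop encodes when to stop pursuing the goal.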
And regarding ASI ... seeing the destruction and stupidity that an overgrown autocomplete can cause, maybe, just maybe, it's a good thing if we aren't on track for that right now.
I just don’t see it. GPT-3 and now 4 just seem to be an eloquent search engine: no actual critical thinking, just regurgitation of what is in the model.
Step 1: Redefine the term to mean something lesser (intelligence without consciousness)
Step 2: Declare "Nailed It!"
Step 3: Profit?
If AGI is mere intelligence (modeling and reasoning, absent motivation), then whatever they call GPT-5 could achieve it. If AGI, like the original meaning of the term "artificial intelligence", means a conscious mind like Asimov's R. Daneel Olivaw or even R2-D2, then I don't think GPT-5 will get there.
I truly believe there is a spark of intelligence in the current crop of LLMs. Probably because I wholeheartedly believe we are nothing but a bunch of neurons pieced together and there is nothing special about humans.
But man, even I can't take this "GPT-5 will be AGI" talk seriously. If they still use an LLM with a transformer, it will never be AGI. The transformer can't continuously learn and absorb new information during use. And it sure as hell won't stop hallucinating, imo, until it can interact with actual meatspace using some sort of sensors and ground itself in our reality somehow.
The LLMs can at best be a very knowledgeable man with severe mental issues, probably isolated in a sensory deprivation room for too long. He can still be useful, but don't expect too much from a madman.
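(The "can't learn during use" point is easy to demonstrate: in standard deployment, generation runs with gradients disabled, so nothing the model reads is ever written back into its weights. A sketch, with "gpt2" again as a stand-in for any transformer LM:)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    # Snapshot some weights, "teach" the model a new fact, then check
    # that its parameters are bit-for-bit unchanged.
    before = model.transformer.wte.weight.clone()
    ids = tok("New fact: this thread is about GPT-5.",
              return_tensors="pt").input_ids
    with torch.no_grad():
        model.generate(ids, max_new_tokens=20,
                       pad_token_id=tok.eos_token_id)
    print(torch.equal(before, model.transformer.wte.weight))  # True

Only the context window carries new information, and it is discarded when the conversation ends, which is exactly the commenter's complaint.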
Any computer model that isn't hooked up to real-world sensors that enable human-like senses (touch, smell, vision, etc.) is not, and cannot be, anything even remotely close to AGI.
Pure text models will never have innate understanding of the physical world on a level that rivals human intelligence.
OP has been told that some people within OpenAI believe that once GPT-5 is there, it will totally reach AGI! Or we'll at least debate about it!