OpenAI expects GPT-5 to achieve AGI (twitter.com)
14 points by baal80spam | 2023-04-02 03:13:44 | 29 comments




Well yes, that's nice and all, but on the other hand it's so vague that I wonder if it wouldn't qualify for HN's "announcement of an announcement" rule.

OP has been told that some people within OpenAI believe that once GPT-5 is there, it will totally reach AGI! Or we'll at least debate about it!


AGI has been here since AlphaGo. Humans were just bad at understanding the nature of intelligence. What is happening now, and will continue for a few years, is the rewriting of the definition of AGI by the luddites. They will keep moving the goalposts.

> rewriting the definition of AGI by the luddites.

The Luddites were a group of working-class people whose livelihoods were destroyed by the mechanization of the textile industry. They were eventually suppressed with government violence.

If your comparison is accurate, they really should be worried!


And what definition of AGI are _you_ using?

The G in AGI stands for General. Can AlphaGo do anything other than play Go really well? If not, it is by no means general.


If you put a Go board in front of a human who was raised by wolves in the jungle, would he play it? Or would you have to train him on how to play it? Would he not be generally intelligent if he didn't know how to play Go with zero training?

As the saying goes, you can teach a kid from the jungle to play Go, but you can't teach an AI that plays Go to live in the jungle.

Right, no disagreement, that all makes sense. (Also my favorite saying as of reading it.)

But if Mowgli takes the glowing orb with a funny voice into the jungle with a Go board because the glowing orb tells him to…


I think a good definition is causal learning with analogical mapping (generalization). This is something we expect out of rats, ravens, and dogs.

Well, AlphaZero has all the Go-specific assumptions removed (there weren't many; they put it out very quickly).

    Company with financial interest in stoking hysteria for their products continues to stoke hysteria.
It's so wild to watch a hype cycle like this in real time.

These tools are amazing, but the hype around them is even more incredible than the technology.


> *also to be clear I don’t mean to say achieving agi with gpt5 is a consensus belief within openai, but non zero people there believe it will get there.

Uh, that's burying the lede to say the least. "Non-zero" could mean one person.


Yeah, I think he's just making this up and this is his opinion. Later down he responds to a "source: I made it up" reaction image, and I'm thinking: if you have to pointedly say that it's not made up, it's probably made up. For what it's worth, there's an interview slightly farther down in which Altman says that he does not think GPT-5 will achieve AGI, and that likely no GPT will, because he thinks an AGI needs some degree of autonomy and "want".

No, OpenAI does not expect this; the CEO can be heard here:

https://youtu.be/to78maCxFEs


Either I don't understand what AGI is or OpenAI doesn't understand what AGI is. Because of the way GPT-3, 4, 5, or even 1000 works, it can't be possible for AGI to be achieved. How could we say that we've got AGI when the "intelligence" is a synthesis of existing knowledge, without ever understanding what it means?

If real AGI is to be achieved, it can only be through a completely different paradigm (don't ask me which), and definitely not through machine learning.

My opinion is that this whole machine learning thing will make achieving AGI even more difficult, since most resources have shifted their focus to it instead of trying to achieve real artificial intelligence (if that's even possible anyway).


You may have an opinion, but can you justify why you don't believe ChatGPT will achieve AGI?

Existing scaling laws show that perplexity (i.e., ability to predict next tokens in text) can be lowered by increasing model size and adding more data.

Why would a model that is better than us at predicting what words would occur next in arbitrary text not be an AGI?
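For anyone who hasn't seen the scaling-law result being referenced: the usual form is a power law in parameter count N and training tokens D, with loss falling smoothly as both grow. A minimal sketch, using constants roughly as fitted by Hoffmann et al. (2022) for Chinchilla; the exact values, and whether the law keeps holding at GPT-5 scale, are assumptions for illustration:

    # Illustrative Chinchilla-style scaling law: predicted loss falls as
    # parameter count N and training tokens D grow. Constants are roughly
    # the Hoffmann et al. (2022) fits; treat the exact values, and the
    # power-law form itself, as assumptions for illustration only.
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """L(N, D) = E + A / N**alpha + B / D**beta."""
        return E + A / n_params**alpha + B / n_tokens**beta

    for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 2e13)]:
        print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.2f}")

Note this only says the perplexity number keeps dropping with scale; whether a lower number on next-token prediction amounts to general intelligence is exactly what's being debated here.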


> Why would a model that is better than us at predicting what words would occur next in arbitrary text not be an AGI?

Because it is just predicting text at this point? Sometimes laughably poorly?

Because the appearance of intelligence may only be that?

Oh and the best one…

Have you been paying attention to how often humans royally fuck something up within inches of the finish line?

There’s a lot more to it than math.

But that’s not the same as your first question. The leading one.

The tweet said December. That’s a long ways off, relatively speaking. We will see.


Whilst I don’t think that GPT-5 will be AGI, this is very poor reasoning.

“Knowing what it means” isn’t measurable, and if anything lately it seems more and more like a coping mechanism so we can still remain special.

There is absolutely nothing currently that can prove that human intelligence isn’t just a very large LLM or that an LLM can’t model it sufficiently.

Hallucinations might actually be the missing piece: we humans are full of shit and come up with crazy explanations for things all the time.

However, we do have various methods of testing these explanations, which we often call theories and hypotheses, both by running dedicated experiments and by participating in the grander social experiment which is civilization.


Thank you for this.

This is one of the things that bugs me about the AGI debate when people argue over when/if computers will match human intelligence.

Just one small problem: we don't have a working definition of human level intelligence!


"OpenAI" does not believe that it will be AGI. There is an interview farther down in the thread in which Altman states that A) GPT-5 will not be AGI and B) the GPT paradigm likely has some fundamental problems which will not get it to AGI, and that if a GPT becomes AGI it is far off.

Later down, Siqi clarifies several things:

1. That he believes AGI is a continuum, and encompasses most stuff that isn't a narrow system. So what he is ACTUALLY saying is "GPT-5 is not a narrow system, and is near the bottom of the AGI continuum".

2. That this is NOT a consensus belief within OpenAI, but rather one only shared by a "non-zero number". This should be a BIG RED WARNING SIGN, because it likely means that the number is very low and he is puffing it up/twisting words.

3. Seeing as he went on to actively deny that he was making it up, it's likely this is just his opinion, shared by maybe a few of his friends.

4. He went on to answer the "agency" question by stating that one could attach a goal loop to it (a sketch of such a loop is below). So a prototype paperclip maximizer. Greeeeeeat.

And regarding ASI ... seeing the destruction and stupidity that an overgrown autocomplete can cause, maybe, just maybe, it's a good thing if we aren't on track for that right now.
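For reference, the "goal loop" in point 4 is roughly this shape. A minimal sketch, where complete() is a hypothetical stand-in for whatever chat-completion API you'd wire in; the prompt format and the DONE stop condition are invented for illustration:

    # Minimal sketch of a "goal loop" wrapped around an LLM. complete()
    # is a hypothetical stand-in for a real chat-completion API call;
    # the prompt format and stopping rule are invented for illustration.
    def complete(prompt: str) -> str:
        raise NotImplementedError("wire this to a real model API")

    def goal_loop(goal: str, max_steps: int = 10) -> list[str]:
        history: list[str] = []
        for _ in range(max_steps):
            prompt = (
                f"Goal: {goal}\n"
                f"Steps taken so far: {history}\n"
                "Propose the single next step, or reply DONE if the goal is met."
            )
            step = complete(prompt).strip()
            if step == "DONE":
                break
            history.append(step)  # a real agent would execute the step here
        return history

The paperclip-maximizer worry is visible right in the sketch: nothing in the loop checks whether the proposed steps are sensible, only whether the model has declared itself done.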


I just don’t see it. GPT-3 and now 4 just seem to be an eloquent search engine. It cannot perform any sort of actual critical thinking, just regurgitation of what's in its model.

Step 1: Redefine the term to mean something lesser (intelligence without consciousness)

Step 2: Declare "Nailed It!"

Step 3: Profit?

If AGI is mere intelligence (modeling and reasoning, absent motivation), then whatever they call GPT-5 could achieve it. If AGI, like the original meaning of the term Artificial Intelligence, means a conscious mind like Asimov's R. Daneel Olivaw or even R2-D2, then I don't think GPT-5 will get there.


Obvious hysteria.

But I note that GPT-4 training ended at the end of 2021, yet the model was only released to the public in Q1 2023.

So I suspect GPT-5 is already running, and being evaluated by 'interested parties'.


A Chinese room is not AGI. Maybe AGI will be achieved at some point, but not in the next few years.

Can it drive a car?

I truly believe there is a spark of intelligence in the current crop of LLMs. Probably because I wholeheartedly believe we are nothing but a bunch of neurons pieced together and there is nothing special about humans.

But man, even I can't take this "GPT-5 will be AGI" stuff seriously. If they still use a transformer-based LLM, then it will never be AGI. The transformer can't continuously learn and absorb new information during use (a sketch below makes that concrete). And it sure as hell won't stop hallucinating, imo, until it can interact with actual meatspace using some sort of sensors and ground itself in our reality somehow.

An LLM can at best be a very knowledgeable man with severe mental issues, probably isolated in a sensory deprivation room for too long. He can still be useful, but don't expect too much from a madman.
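To make the "can't continuously learn during use" point concrete: a transformer's weights are frozen at inference time, so nothing from a conversation persists once the context window is cleared. A minimal sketch, assuming the PyTorch and Hugging Face transformers packages and the public gpt2 checkpoint:

    # Sketch: generating text does not update a transformer's weights.
    # Assumes `pip install torch transformers` and the public "gpt2" model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    before = torch.cat([p.detach().flatten() for p in model.parameters()]).clone()

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():  # inference only: no gradients, no updates
        model.generate(**inputs, max_new_tokens=5)

    after = torch.cat([p.detach().flatten() for p in model.parameters()])
    print(torch.equal(before, after))  # True: "use" changed nothing in the model

Whatever the model appears to "learn" mid-conversation lives only in the prompt, not in the parameters.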


Any computer model that isn't hooked up to real-world sensors that enable human-like senses (touch, smell, vision, etc.) is not, and cannot be, anything even remotely close to AGI.

Pure text models will never have innate understanding of the physical world on a level that rivals human intelligence.

