
The goalposts keep shifting, yes. The new definition of AGI is much closer to superintelligence. However, depending on how close the model gets to experts, there's room for more.

If the model is basically on par with experts, then it's still fallible the way humans are. But suppose a general intelligence that is at the same level at every task as today's chess engines are at chess.

Basically the "Oh, you thought that was a mistake? No, you just didn't understand the program" level of intelligence, even for the smartest of humans.




> An AGI is an AI that can do everything a human can do, period

The GI in AGI stands for general intelligence. If what you said is your benchmark for general intelligence, then humans who cannot perform all these tasks to the standard of being hirable are not generally intelligent.

What you're asking for would already be bordering on ASI, artificial superintelligence.


At least recognize that the definition of AGI is moving from the previous goalpost of "passable human-level intelligence" to "superhuman at all things at once".

Well, first you say:

>I don't believe LLM's will ever become AGI, partly because I don't believe that training on the outputs of human intelligence (i.e. human-written text) will ever produce something equivalent to human intelligence.

Now you say:

>I'm using the definition provided verbatim in the linked article: "We believe superintelligence—AI vastly smarter than humans—could be developed within the next ten years."

So AGI (artificial general intelligence) is superintelligence now? I've never seen goalposts moved so fast in my life. Are you not a general intelligence, then?

This is the problem with these discussions: everyone is so sure of something they can't even properly articulate.

I'm asking you what a language model needs to do to be considered AGI, and it needs to be something every human can do; otherwise, it's not a test of general intelligence.


I noticed that some people expect AGI to have superhuman intelligence rather than just intelligence similar to that of humans. I have seen posts arguing that an AI can only be called AGI if it is better than any human being at every task.

It's nice to see that this article doesn't fall into that trap.


AGI specifically refers to general AI with human-level intelligence or above. It's a far cry from modern AI, but I'm personally quite optimistic it'll be achieved within my lifetime :)

It is, but it also shows the current limitations of these models' reasoning skills.

A superhuman AGI would easily be able to use tools like Wolfram Alpha, and also derive most of it on the fly or from memory, just by thinking about it.

If you set your expectations to AGI (right now), you will be disappointed, but that doesn't mean it's not immensely useful.

Unfounded hype and real technological progress seem to go hand in hand.


I think you're talking about ASI (artificial superintelligence). AGI just means that it's a general problem solver as opposed to a narrow AI. It doesn't have to be superhuman.

Replace "AGI" with the idea of an AI that is able to perform tasks in a way that compares to a regular human being. That's what people are talking about, we're not there yet, and I think all the semantics over AI/AGI/"General Intelligence" is just muddying the waters.

There are people like you saying ChatGPT is AGI, while to me it's clearly not, and this is simply because we're using different definitions of "general intelligence." If you really generalize intelligence, ChatGPT fails miserably compared to a human; if you narrow intelligence to certain specific tasks, ChatGPT performs as well as, or better than, your average human.


If AGI exists, it isn't on par with humans yet. No AI has reached the human level of intelligence. I think that is the argument that was being made.

Many people define AGI as human-level intelligence, and we're supposed to get there by 2020, or 2030, or so on, as computers reach faster speeds.


"The question of whether machines can think is about as relevant as the question of whether submarines can swim." Edsger Dijkstra

I suspect you're right. I believe there's nothing that can be formally defined that is impossible for an AI to do and possible for a person to do. Furthermore, AGI is not formally defined, or at least not defined tightly enough to be a target we can actually hit; it's a hand-wavy way of saying "AI as smart as a person." The "anti-AGI" crowd moves the goalposts every time a huge breakthrough occurs, but as long as there is a goalpost (a formal definition) to hit, AI will surely hit it. The "pro-AGI" crowd is also guilty of not being precise about exactly what AGI is.

I also fundamentally believe the whole concept of AGI is flawed and biased toward what people perceive as intelligence rather than intelligence itself. This is partially why there is so much effort and hoopla around things like GPT-3 (or, in the past, the Turing test): these programs demonstrate something like human intelligence, which is difficult to nail down in terms of a formal definition of ability. Both groups point at them and either claim victory or point at a flaw. AI progresses inexorably regardless of what the hell AGI even means.


I think you are being 'unfairly' downvoted because AGI is very commonly used on here, or at least it comes up a lot.

But yes, AGI is the holy grail of AI research. Currently, all AI successes have come from training computers to be really good at one thing: chess, Go, driving, etc. And in many of these domains, AI has outperformed humans by a large margin.

But take those AI systems and ask them to do something outside of their specialty, and their knowledge doesn't transfer. In essence, they are really just optimized functions for a very specific set of inputs.

AGI would be an AI that increases in intelligence across multiple domains and can respond to novel problems (like a human).

Note: you could argue the real definition of AI is actually AGI, but that is a different discussion :)


The author, confusingly, is using the term AGI to refer to human intelligence.

Main points from the article:

"Artificial general intelligence is something we have plenty of here on Earth, most of it goes to waste, so I'm not sure designing AGI based on a human model would help us much."

"Superhuman artificial general intelligence is not something that we can define, since nobody has come up with a comprehensive definition of intelligence that is self-sufficient, rather than requiring real world trial and error."

"Superhuman artificial general intelligence is not something we can test, since we can't gather statistically valid training datasets for complex problem and we can't afford to test via trial and error in the real world."

"Even if superhuman artificial intelligence was somehow created, there's no way of knowing that they'd be of much use to us. It may be that intelligence is not the biggest bottleneck to our current problems, but rather time and resources."


Yes, you'll find that any testable definition of AGI that has not yet been passed would be unpassable for a big chunk of the human population.

In other words, "general," "artificial," and "intelligent" have been passed. That's why a few papers/researchers opt to call these models "General Artificial Intelligence" instead:

https://jamanetwork.com/journals/jama/article-abstract/28064...

https://arxiv.org/abs/2303.12003

Or some such variant like "General Purpose Technologies," as OpenAI did:

https://arxiv.org/abs/2303.10130

since "AGI" has so much baggage with posts shifting at the speed of light.


When people talk about human-level AGI, they are not referring to an AI that could pass as a human to most people - that is, they're not simply referring to a program that can pass the Turing test.

They are referring to an AI that can use reasoning, deduction, logic, and abstraction like the smartest humans can, to discover, prove, and create novel things in every realm that humans can: math, physics, chemistry, biology, engineering, art, sociology, etc.


You are mistaking an AGI for an artificial superintelligence (I might also add that the very concept of superintelligence is pure speculation - basic AGI at least can be grounded in replication of human brains). The first AGI will be closer to a low-IQ human than a Machiavellian super-optimizer.

> AGI means humans are no longer the smartest entities on the planet.

Superintelligence and AGI are not the same thing. An AI as smart as an average 5 year old human is still an Artificial General Intelligence.


One of the many ways AGI is often defined is “human-level intelligence,” so that seems like a tautological impossibility.

No, it's definitely moving. Somehow general intelligence has morphed into essentially superintelligence, where the AI is expected to outperform every human expert at every task before people will call it AGI, which is just ridiculous. The bar for AGI has been set so high by some that a significant percentage of the human population would fail it. That's when you know things are out of hand.

This isn't the only erroneous bar, either. Somehow, synonymity with human intelligence is taking on a very weird importance. We have people inventing imaginary/magical definitions of reasoning and understanding (that they can't test for) just so LLMs won't qualify.

GPT-4 is absolutely a general intelligence.


I think AGI is a questionable concept. We still don't have a good definition of what intelligence really is, and some people keep moving the goalposts. What we need is AI that fills specific needs we have.
