OK, so now we just have to define "AGI" then. Consider a robot that knows its physical capabilities; that can see the world around it through a view frustum and identify objects by position, velocity, and rotation; that understands the passage of time and can, for example, predict future positions; and that can take text input and translate it into a list of steps it needs to execute. Such a robot is functionally equivalent to an Amazon warehouse employee, yet we are saying it is not AGI.
An Amazon warehouse worker isn't just a human; an Amazon warehouse worker is a human engaged in an activity that utilises a tiny portion of what that human is capable of.
A Roomba is not AGI because it can do what a cleaner does.
“Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.”
I think the key word in that quote is "any" intellectual task. I don't think we are far from solving all of the mobility- and vision-related tasks.
I am more concerned, though, about whether the definition includes things like philosophy and emotion. Some of these can be quantified: for example, a poker-playing AI can estimate the aggressiveness of the humans at the table (the range of hands they might be playing) rather than just the pure isolated strength of their own hand. But it seems like a very hard thing to quantify in general, and as a result a hard thing to measure and program for.
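To make the poker example concrete: one common way to quantify a player's aggressiveness from observed behaviour is the "aggression factor", the ratio of bets and raises to calls. This is just an illustrative sketch of that idea, not code from any real poker AI; the function name and data format are made up for the example.

```python
def aggression_factor(actions):
    """Estimate aggressiveness from a list of observed actions.

    AF = (bets + raises) / calls. Higher values suggest a player
    who bets and raises far more often than they passively call.
    """
    bets_raises = sum(1 for a in actions if a in ("bet", "raise"))
    calls = sum(1 for a in actions if a == "call")
    if calls == 0:
        # No calls observed: maximally aggressive if they bet at all.
        return float("inf") if bets_raises else 0.0
    return bets_raises / calls

# An opponent who raises three times for every call reads as aggressive:
print(aggression_factor(["raise", "bet", "raise", "call"]))  # 3.0
```

The point is that a single behavioural trait like this is easy to measure once you pick a metric; the hard part is that "emotion" or "philosophy" offers no obvious metric to pick.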
It sounds like different people will just have different definitions of AGI, which is a different question from "can this thing do the task I need it to do (for profit, for fun, etc.)?"
I think you're on to something very practical here.
ChatGPT allows for conversation that is pretty remarkable today. It hasn't learned the way we humans have - so what?
I think a few more iterations may lead to something very, very useful to us humans. Most humans may just as well say ChatGPT version X is Artificial, and Generally Intelligent.