I'm talking about the efforts to really reproduce human-like intelligence. I'm not saying that AI isn't a field; I'm saying that if you want human-like intelligence, the AGI people are the furthest along, or at least the most serious about it.
Did you really look into AGI, for example the past conferences or those projects, and conclude that it is just valueless holistic mumbo-jumbo?
That is so unfair and inaccurate that I can't see how you could possibly be evaluating things rationally if you really came to that conclusion.
No. Stop reading reddit.com/r/futurology or that _awful_ article by waitbutwhy.
Sure, it's a possibility, but we're still taking baby steps and building tiny tools: pastiches of intelligence as opposed to genuine intelligence or consciousness.
People who ask questions like this often don't consider that it remains entirely possible that AGI is simply impossible for us to build. Also remember that anything an AI can do in the future, a human plus an AI can probably do better. Right now at least they're just tools we use, and they will remain so for the foreseeable future.
If AGI exists at all, it isn't on par with the human level yet; no AI has reached human-level intelligence. I think that was the argument being made.
Many people define AGI as human-level intelligence, and we're supposed to get there by 2020, or 2030, or whenever computers reach sufficient speeds.
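For what it's worth, those date predictions usually rest on a back-of-the-envelope extrapolation like the sketch below. Every number in it is an illustrative assumption, not a measurement (the brain figure is a commonly cited Kurzweil-style estimate; other published estimates differ by orders of magnitude):

```python
import math

# Toy extrapolation behind "AGI by 20XX" claims. Every constant here is
# an assumption for illustration, not an established fact.
BRAIN_OPS_PER_SEC = 1e16   # assumed "human-equivalent" compute (Kurzweil-style)
START_YEAR = 2010
START_OPS_PER_SEC = 1e13   # assumed compute available circa 2010
DOUBLING_YEARS = 1.5       # assumed Moore's-law-style doubling period

doublings_needed = math.log2(BRAIN_OPS_PER_SEC / START_OPS_PER_SEC)
crossover_year = START_YEAR + doublings_needed * DOUBLING_YEARS
print(f"hardware 'crossover' around {crossover_year:.0f}")  # ~2025 here
```

Note how sensitive the output is to the inputs: nudge any constant and the date slides by a decade, which is why the predicted year keeps moving, and why raw speed alone says nothing about whether the software side ever gets solved.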
What they are talking about is the idea of human-like general intelligence, and mainstream AI mostly doesn't try to do that anymore. There are some people who are seriously trying while still calling it AI, and even a few who are aware of what the AGI people are doing, or have projects that are comparably sophisticated. But most of the researchers who are furthest along and most serious about it have been calling it AGI.
Anyway, you have to at least include AGI if you are serious about human-like AIs.
AGI specifically refers to general AI with human-level intelligence or above. It's a far cry from modern AI, but I'm personally quite optimistic it'll be achieved within my lifetime :)
I think all these discussions are pointless, since we lack any rigorous definition of intelligence or "understanding." Going for something "human-like" might be worthwhile simply for the fact that we associate intelligence with humans and thus would more easily accept it in something that mimics us, despite never being able to prove that it is actually intelligent. In the same way, going for some technology that is more rationally explainable might cause humans to doubt its intelligence, because we still don't understand human intelligence and thus people would conclude such a thing can't have human-level intelligence. But that has nothing to do with the most reasonable path towards AGI. Almost all the approaches I've seen listed so far can't be accepted or dismissed with today's state of research - we can only pursue as many of them as possible and hope that the path clears up eventually.
It's impossible to create AGI until we understand human intelligence completely, or at least well enough to know its parameters and make AI stronger. So far we are nowhere close; it will probably take 100 years to reach that level of understanding.
AGI doesn't mean animal-like intelligence. AI is necessarily being designed as an extension of us, and not an independent will that has its own motivations, because that is the only way it is useful.
Evolution took billions of years to produce us, with our self-interest-pursuing intelligence. An errant experiment is not going to create a humanity-leapfrogging animal-like intelligence, and humanity will not give animal-like artificial intelligences millions of generations to evolve.
This is so vague it borders on sarcasm. OK, it's AGI, but really, what is it? Reinforcement learning? Combining symbolic AI with modern advances? Deep learning theory? AGI is a vacuous term, and no one knows which path would lead to something similar to human intelligence. That itself is a matter of research.
We don't even know if AGI is possible. Let's not mince words here: nothing, and I do mean nothing, not a single, solitary model on offer by anyone, anywhere, right now has a prayer of becoming AGI. That's just... not how it's going to be done. What we have right now are fantastically powerful, interesting, fun-to-play-with pattern recognition programs that can take their grasp of patterns and reproduce more of them given prompts. That's it. That is not general intelligence of any sort. It's not even really creative; it's analogous to creative output, but it isn't outputting to say something. It's simply taking samples of all the things it's seen previously and making something new with as few "errors" as possible, whatever that means in context. This is not intelligence of any sort, period, paragraph.
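To make the pattern-reproduction point concrete, here's a toy sketch (plain Python; the corpus and every name in it are invented for illustration): a character-level Markov model that "writes" by sampling whatever followed similar contexts in its training text. The output looks locally fluent, with few "errors", precisely because every transition was literally observed before, and there is obviously nothing in it that understands anything:

```python
import random
from collections import defaultdict

def train(text, order=3):
    # Record which characters followed each length-`order` context.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80):
    order = len(seed)
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:                # unseen context: nothing to copy
            break
        out += random.choice(followers)  # reproduce an observed continuation
    return out

corpus = "general intelligence is not the same as pattern recognition. " * 20
print(generate(train(corpus), "gen"))
```

Scale that idea up by many orders of magnitude of data and parameters and you get something far more impressive, but the objection stands: minimizing prediction error on past samples is not the same thing as understanding.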
I don't know to what extent OpenAI and their compatriots are actually trying to bring artificial life (or at least consciousness) into being, versus how much they're just banking on how cool AI is as a term in order to funnel even more money to themselves chasing the pipe dream of building it, and at this point, I don't care. Even the products they have made do not hold a candle to the things they claim to be trying to make, but they're more than happy to talk them up like there's a chance. And I suppose there is a chance, but I really struggle to see ChatGPT turning into Skynet.
Then you should read Rodney Brooks for an argument against AGI coming soon with no mention of human exceptionalism. In fact, he argues that an artificially created organism with intelligence is more likely soon than an engineered one.
>> there is a DAMN surprising level of intelligence in significantly less complex life. we are just too attached to intelligence as defined by human culture to call it as it is.
I never said experts in human sciences all agreed on the subject of AI cognition - and even if they would, they are only experts in half of the subject. I, as a Computer Scientist, am also only an expert in a fraction of the subject.
My point was exactly that people discussing this here don't have the whole picture - especially the article's author, whose credentials I strongly question.
The articles you link actually illustrate my point. Both are brilliantly written, to the point that they surpass my ability to judge their technical accuracy, but they are very narrowly focused on the computing aspect. Zero AI models available today are capable of solving actual problems they haven't been programmed to solve, let alone proposing new problems. We'd need far more than a few technological leaps to get there, which is why I think we are much further from AGI than the author believes.
I don't dismiss the possibility, though; I believe we will eventually get there, but not nearly as quickly, or as dramatically, as people here seem to believe.
Totally agree with your first point, I just didn't want to have too many caveats and nitpicky words. If it's not clear: of course my argument in no way implies that human intelligence was "created" by an intelligence; it evolved. Poor wording aside, my statement remains the same.
"This is hugely debatable. Why is AGI inevitable? Even given great amounts of computing resources, a artificial general intelligence does not just automatically appear [...]"
Well no one thinks AGI will appear without anyone working on it, but lots and lots of people are working on it now. And since there are huge incentives to create one, the belief is that more people will work on it as time goes on.
"[...] there really isn't any evidence that I know of that a general intelligence is any closer than it was 20 years ago."
Well, in some sense I agree, in that we still have no idea how far off AGI is. If it's going to happen in 10 years, we should definitely prepare now. If it's 500 years away, maybe it's too early to think about it. But since neither of us knows, wouldn't you say it's worth putting some effort into working towards safety?
In another sense, though, I disagree that we're not any closer to AGI. As you said just the sentence before, fields like computer vision have advanced tremendously. While this doesn't necessarily mean AGI is closer, the fields certainly seem related, so advancement in one is a sign that advancement in the other is closer.
Another article reinforcing my belief that framing AGI research as a quest for "human-level" AI is misguided. The type of intelligence we are looking for is actually animal intelligence. The higher level human abilities are mainly just higher levels of characteristics that most animals have.
Most of what I read about AGI is, in my opinion, highly speculative and of low value.
Human Intelligence is not General Intelligence. Trying to mimic Human Intelligence might be a good idea because we can use ourselves as a benchmark, but it might also be a bad idea because the process followed by biological evolution might not be the shortest or optimal path to intelligence.
I don't think AGI has ever moved its goalposts; they were defined rather well by the Turing test. AGI must be general: capable of reasoning at least as well as a human in any domain of inquiry, all at the same time. Exceeding human reasoning in particular domains is trivial, and has been happening since at least Babbage's difference engine.
However, while AI has been overtaking human reasoning on many specific problems, we are still very far from any kind of general intelligence that could conduct itself in the world or in open-ended conversation with anything approaching human (or basically any multicellular organism) intelligence.
Furthermore, it remains obvious that even our best specialized models require vastly more training (number of examples + time) and energy than a human or animal to reach similar performance, wherever they are comparable. This may be due to the 'hidden' learning encoded in any living being today by millions of years of evolution, but it may also be that we are missing some fundamental advances in the act of learning itself.
Fair enough, though I feel you are a bit too eager to push back against ideas that go counter to your initial thoughts. Of course, because I hold differing opinions, you could reasonably object that it is just what I would say!
I have a different idea of what AGI means: in my view, it is a retronym created in the 1980s in order to refer to AI of the sort Turing envisioned (which was more or less "what humans do") and differentiate it from things that were being called AI, such as (later) IBM's Deep Blue, which were mostly brute force applied to conceptually narrow problems.
You mentioned Roitblat's framework, and I would draw your attention to one aspect of it: it is not just a list of things that humans do, but those things which humans do considerably better than other animals, yet for all of them, there are other species that do them to some extent. As an evolutionist, I suppose there was a relatively recent time in the past when some of our ancestors or sibling species (all now extinct) had some or all of these skills to some intermediate level. In this view, intelligence is not an all-or-nothing concept, and achieving some of it is still progress.
Here's a view which you may not have seen: the pace of progress in AGI has not been constrained by an inability to define what we want, but by the pace at which we find ways to build what we can see we need. For example, it is clear that current LLMs have a problem with truth, but it is not clear from what has been made public so far that anyone has a solution. Some people think that what's being done now with LLMs, only more of it, will be enough to get us to what will be generally accepted as AGI; I am skeptical, but I am willing to be persuaded otherwise if the evidence warrants it.
Only a minuscule fraction of people who work with/in AI are doing general artificial intelligence. The rest is really more about making sense of the vast and growing amounts of data our world produces. The non-human intelligences that benefit from this are corporations, not artificial minds. It's actually sad that AGI research is still a fringe idea.