We are a very long way from even having 0.0001% of the compute required to produce a weak AGI.
Like everything else, it will take far more resources and time than our best estimates can predict today, partly because that's just how these things turn out, and mostly because a true AGI will likely require billions of neural networks, all adapting, swapping neurons and communication pathways amongst themselves, and training themselves at the same time they are doing useful work.
We have no idea how to do anything like this at scale, yet.
I had the same feeling while studying computational neuroscience.
There is no way in hell the current approaches are going to lead to AGI anytime soon! The models we have for even the simplest operations of the brain are not well thought out. Even using the term "Artificial Neural Nets" is an insult to the real machinery that is a neuron.
However, I do not think someone in some basement is going to figure out the solution. I think the people who are working to solve the problem are the army of nameless, minimum-wage graduate and PhD students across the industrialized world who are studying all the various topics involving hardware, software, biology, etc., all of which are required to create a true AGI system.
We are nowhere near creating Strong AI, but it's going to come incrementally.
Hmm, I don't think having enough processing power (which I think we already do) precludes the possibility that we simply don't know how to use it.
AGI isn't going to be a simulation of a human brain, it's going to be a model that learns to emulate human behavior on its own. Simulating the brain is not remotely feasible, at least not until we have hyperintelligent AGIs to help us design the chips to do it with :P
I am convinced that we already have the hardware for AGI. Can an exaFLOPS-scale Google datacenter with millions of TPUs really be less capable than the 20-watt ball of jelly we all carry around?
The barriers to AGI are in the algorithms. Nobody knows how long it will take to achieve unsupervised learning that is as general as the human brain. It might take 200 years of slow but steady progress or alternatively somebody could publish a breakthrough in a paper tomorrow and we could have parity by the end of the year.
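For a rough sense of that datacenter-versus-brain comparison, here's the back-of-envelope arithmetic; every brain-side number is a loose order-of-magnitude assumption, not a measurement:

    # Order-of-magnitude comparison only; brain figures are rough assumptions.
    datacenter_flops = 1e18        # exaFLOPS-scale datacenter, per the claim above
    synapses = 1e14                # ~100 trillion synapses (assumed)
    avg_rate_hz = 10               # assumed average synaptic event rate
    brain_ops = synapses * avg_rate_hz  # ~1e15 synaptic events/s

    print(f"raw throughput ratio: {datacenter_flops / brain_ops:,.0f}x")
    print(f"power ratio: {1e8 / 20:.0e}x")  # assumed ~100 MW facility vs 20 W brain

On raw throughput the datacenter wins by orders of magnitude, but a FLOP and a synaptic event are not comparable units, which is exactly why the question stays open.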
IMHO there are two distinct paths towards AGI. One is understanding intelligence: I believe that if we "just" knew how to write the "brain software" (and we're really far from that), our current compute power could support many human-level-or-stronger AIs at a relatively cheap price. The second is pure raw brute force, either through simulation-without-understanding of brains, or because it may turn out that sufficient intelligence is reachable simply through a large enough critical mass of, e.g., simulated cortical columns. We're really far from that too. However, the brute-force approach keeps progressing even with literally zero progress in understanding intelligence: as long as we can sustain an exponential increase in FLOPS/$, we will eventually reach it.
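To make that "eventually" concrete, the math is just compound growth. A sketch with purely illustrative numbers (the budget, current price-performance, target, and doubling period are all assumptions):

    import math

    budget = 1e7                 # assumed dollars available
    flops_per_dollar = 1e10      # assumed current price-performance
    target_flops = 1e21          # assumed brute-force simulation requirement
    doubling_years = 2.0         # assumed FLOPS/$ doubling period

    shortfall = target_flops / (budget * flops_per_dollar)
    years = doubling_years * math.log2(shortfall)
    print(f"need {shortfall:.0e}x more FLOPS/$: ~{years:.0f} years at this pace")

Swap in your own numbers; the point is only that any fixed target falls out of an exponential eventually, provided the exponential actually continues.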
Yeah, that's kind of an interesting question. With Moore's law starting to approach its physical limitations, it would seem AGI wouldn't be feasible with current algorithms.
Not to mention, as you're describing, those "millions" of layers of narrow AIs sound like an impossible amount of work to do.
I don't think very many machine learning experts really believe that those techniques will lead to AGI.
But we actually understand very little about how the brain works, let alone how intelligence emerges from it.
What I am saying is that AGI may be impossible, yet people are convinced that it's just around the corner given enough hardware and clever enough software.
"Just around the corner", and we don't even know whether it is possible.
AGI is very far away from being a reality. I am skeptical that classical computers will ever achieve it. We will need to build radically different machines to accomplish AGI.
This seems like a very difficult statement to support.
Notice, I am not saying it is not plausible or realistic; I think it is. I also think there is a fairly short time horizon (<100 years) based on the current state of computing.
That doesn't mean we know what the path is, though. Will it come through scaled-up ANNs? Maybe whole-brain emulation (WBE)? Will it be an emergent property of all the routers in the world exchanging state information? No one knows.
Furthermore, the past decade has seen multiple machine learning triumphs that many AI researchers thought were 25+ years away: self-driving cars, machine translation, high-accuracy speech recognition, visual image content extraction.
How many times does the professional AI community have to repeat this? Narrow AI projects do not necessarily have trajectories toward AGI. Yann LeCun JUST REITERATED THIS last week. Seriously, how many times does it have to be said for people to understand it?
Yes, there is progress in machine learning, but those results say almost nothing about Artificial General Intelligence, which is orders of magnitude harder.
So again, there is no PATH TO AGI. No one can sketch a priori which approach, if any, will get us there, because there is so much we don't know about intelligence generally and all of the subproblems within it.
I don't think AGI is likely, I think it is inevitable.
We can make specialized neural networks that do specific tasks quite well. There's nothing stopping us from chaining those together. We have the pieces to make neural networks that train on new data, creating new layers atop previous networks. We can even train those layers on the data generated by the actions of the network itself. The pieces seem to be present; the tooling for putting them together seems to be lacking for the time being.
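As a concrete (and heavily simplified) sketch of the "new layers atop previous networks" idea, here is the standard pattern of freezing a trained network and training a new head on top, in PyTorch; the architecture and shapes are made up for illustration:

    import torch
    import torch.nn as nn

    # A previously trained "specialized" network, frozen so its weights stay fixed.
    base = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
    for p in base.parameters():
        p.requires_grad = False

    # A new layer stacked on top, trained on new data while the base is reused.
    head = nn.Linear(32, 4)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(16, 64)          # stand-in batch of new data
    y = torch.randint(0, 4, (16,))   # stand-in labels

    loss = loss_fn(head(base(x)), y)
    loss.backward()
    opt.step()

That part is routine. The missing tooling is everything around it: deciding what feeds what, keeping hundreds of such pieces training online at once, and doing it without a human wiring each connection by hand.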
I expect to see AGI in my lifetime, artificial superintelligence shortly thereafter, and then the event horizon of the singularity.
It's impossible to create AGI until we understand human intelligence completely, or at least well enough to know its parameters and make AI stronger. So far we are nowhere close. Probably 100 years are required to reach that level of understanding.
I think the most accurate answer is that we just don't know. Since we really don't know how an AGI could work, we have no idea which of the advances we've made are getting us closer, if at all. Is it just an issue of faster GPUs? Is the work done on deep learning advancing us? I don't think we'll know until we actually reach AGI, and can see in hindsight what was important, and what was a dead end.
I do take exception to some of the specific statements you make though, which make it sound like the only real progress has been on the hardware side. There's been plenty of research done, and lots of small and even large advances (from figuring out which activation functions work well, à la ReLU, all the way to GANs, which were invented a few years ago and show amazing results). Also, the idea that "just applied statistics" won't get us to AGI is IMO strongly mistaken, especially if you consider all the work done in ML so far to be "just" applied statistics. I'm not sure why conceptually that wouldn't be enough.
Current technology doesn't look very promising unless we somehow come up with a "computer" architecture that is as scalable and energy-efficient as a brain. Machine learning and deep learning aren't exactly new; the big change that made them practical is the availability of faster hardware. If transistor density increases stop before we can reach AGI, or even just a dumbed-down version of it, then we might never reach AGI at all.
This doesn't mention AGI, which seems to be a prerequisite for this being a possibility. Despite impressive advances in "weak" AI, strong AI is not a simple extension of weak AI, and it's hard to tell if it will arrive within our lifetime.
No one can explain why AGI is impossible because you can't prove a negative. But so far there is still no clear path to a solution. We can't be confident that we're on the right track towards human-level intelligence until we can build something roughly equivalent to, let's say, a reptile brain (or pick some other similar target if you prefer).
If you have an idea for a technical approach then go ahead and build it. See what happens.
I think there are fundamental questions still unanswered that could put the brakes on the whole thing.
For example, I would put forth that for a given problem, there is a lower limit to how simple you can make the divided pieces of that problem. You can't compute the trajectory of a thrown baseball using only a single transistor. Granted, most problems can be divided into incredibly simple steps. The question we face is: is AGI reducible enough for human beings to create it? Is the minimum work-unit for planning and constructing an AGI small enough to be understood by a human?
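To put a number on the baseball example, here is about the smallest Python program that does it (Euler integration, no drag, arbitrary initial conditions); even this toy takes thousands of multiply-adds, so a single transistor clearly can't represent the problem:

    # Minimal projectile integration; initial conditions are arbitrary.
    g, dt = 9.81, 0.001
    x, y = 0.0, 2.0           # released 2 m above the ground
    vx, vy = 30.0, 10.0       # assumed throw velocity
    steps = 0
    while y > 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        steps += 1
    print(f"lands at x={x:.1f} m after {steps} steps (~{6 * steps} arithmetic ops)")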
That's of course putting aside the problem of scale. The neocortex alone has 160 trillion synapses, each of which exhibits behavior far more complex than a single transistor. You could argue that for many commercially-viable tasks we've found much better ways than nature, and that's true, but AGI is a different game entirely. Our current AI methodologies may be as unrelated to AGI as a spear is to a nuclear missile despite them both performing the same basic function.
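For a sense of that scale gap (the GPU transistor count here is a rough assumed figure for a large contemporary die):

    synapses = 160e12            # 160 trillion neocortical synapses, per above
    transistors_per_gpu = 2e10   # assumed ~20 billion transistors per large GPU
    print(f"GPUs for a 1:1 transistor-to-synapse match: "
          f"{synapses / transistors_per_gpu:,.0f}")
    # ~8,000 GPUs, before granting each synapse its claimed extra complexity.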
Personally, I don't think we will succeed in building a true AGI based on the current computer architecture. The computing units of the only intelligent beings we know of so far, us and maybe some animals to an extent, work very differently from how a CPU works. I suspect we will need to build a computer that more closely resembles a neuron and the way it performs calculations...
I never ever said that the tiny approximation of subsets of our brain was enough for AGI. Just because we haven't worked out the exact structure of the neural net in our brain and how to emulate it doesn't mean it isn't still very much a neural net. It's just bigger and more complex than anything we can make or emulate yet.