That's the last step in a training process that took half a billion years, if we count from the first neurons. (This is not, of course, an argument against the feasibility of strong AI.)
It takes years of training before humans can demonstrate human-level intelligence. Unless the first strong AI is superhuman, it's not going to look like a strong AI for several years.
Well, to be fair, the AI probably tried many millions of times before arriving at whatever conclusion it reached during its training process.
If a machine capable of human-level AI were delivered to my basement today, it would still take something like 5 years of training to bring it to human-level intelligence.
Babies are born with all the hardware needed for adult-level intelligence, but converting that potential into actual skills requires months or years of active learning, experimentation, and feedback for each skill, not just reading information from the web.
AI progress has sped up markedly in recent years. We have come a long way since the simple digit recognizers, and there is no end in sight, no AI winter. There are no diminishing returns in intelligence. Strong AI is the last invention.
This is a great comment, and what holds a lot of today's AI back is indeed complexity. The only question I would add is whether it's still science (hypothesize, predict, and test, as stated) if it's not a human performing the expected steps. E.g., is training a neural network through feedback iterating through these steps (with short cycle times)?
One of the more useful heuristics I've heard is that when humans make split second decisions (like recognizing a face in an image or choosing the next word to say) there's only time for a few - like five to seven - layers of neurons to fire between impulse and action. These are the sort of tasks that neural networks are currently making great progress on.
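The few-layers heuristic above can be illustrated with a toy forward pass: a network of only six dense layers mapping stimulus to response in a handful of sequential steps. This is a minimal sketch, not any particular model; the depth, width, and random weights are made up for illustration.

```python
import math
import random

def layer(inputs, weights, biases):
    """One dense layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
DEPTH, WIDTH = 6, 8  # roughly the "five to seven layers" of the heuristic

# Random stimulus and random (untrained) parameters, purely illustrative.
x = [random.uniform(-1, 1) for _ in range(WIDTH)]
params = [([[random.uniform(-1, 1) for _ in range(WIDTH)] for _ in range(WIDTH)],
           [0.0] * WIDTH) for _ in range(DEPTH)]

# "Impulse to action" in just DEPTH sequential firing steps.
for weights, biases in params:
    x = layer(x, weights, biases)
print(len(params), "layers fired; output:", [round(v, 3) for v in x])
```

The point of the sketch is simply that only six sequential steps separate input from output, however wide each step is, which is the shape of computation current nets are good at.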
But things like long-term planning and goal formation aren't in this quick-reflex domain. (Though humans are arguably pretty bad at these sorts of activities too, on average.)
We're also generally training neural nets to succeed at well-defined, bounded tasks, which don't really require much long-term planning. The video-game-playing work at DeepMind is actually a pretty important step forward, though; they're doing impressive stuff with domain transfer (learning multiple games with a single network), and as one goes further with games, we might start seeing longer-term planning and concept formation. (E.g., if we start seeing awesome StarCraft players...)
There's a case to be made, though, that we humans just trained on a big, broad objective function (survive, thrive, and multiply), with training applied across millions of years, plus a good deal of intense local training over the first ten years of life, using a massively parallel computer that far outstrips any data center.
The question for strong AI seems to rest heavily on how much of the work done in biology is extraneous, and whether we end up with a system so complex that it requires eighteen years of training time. If creating a single strong AI actually requires that much effort, we don't have to worry too much about Skynet, methinks.
But we certainly don't have to worry about them spontaneously arising when we train with simple objective functions.
I think of this as the "30-years-away" problem. Strong AI has been 30 years away for some 70 years now. It's just that this time, Kurzweil, Bostrom, et al. are saying "no, really, it is this time".
It’s still exponential, but a little slower. (edit: wait, is that still exponential if it slows down?) Anyway we only need to get to human level (or maybe a bit less) and we’re not that far off (maybe 10 or 20 years at current rates of progress?)
Not all types of AI need external training data; you can train on how effectively a goal is achieved.
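A minimal sketch of that idea: optimize parameters against a goal score rather than against labeled examples. Everything here is invented for illustration (the `goal_score` function, its hidden target, the hill-climbing loop); the only point is that no external dataset appears anywhere.

```python
import random

def goal_score(params):
    """How effectively the goal is achieved. Toy goal for illustration:
    get the parameters close to a hidden target point."""
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

random.seed(1)
params = [0.0, 0.0, 0.0]
best = goal_score(params)
for _ in range(2000):
    # Propose a small random tweak; keep it only if the goal score improves.
    candidate = [p + random.gauss(0, 0.05) for p in params]
    score = goal_score(candidate)
    if score > best:
        params, best = candidate, score
print(round(best, 4))  # score climbs toward 0 as the goal is achieved
```

Reinforcement learning and evolutionary methods are the serious versions of this loop: the training signal comes from measuring outcomes, not from a corpus of labeled data.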
It is, but we've also had decades of practice. What scares me the most about AI isn't how advanced computers can become but how slowly we learn in comparison.
How do you figure there's hardly any training? Humans are constantly training on a never-ending stream of sensory information from the time their brains form in the womb, not to mention whatever subconscious and conscious processes are reconciling that data with memory, or whatever training has been built into our minds over eons of evolution.
An 18 year old will have been training for ~160,000 hours on a volume of raw data that is probably far beyond our ability to currently store let alone train an AI with.
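The ~160,000-hour figure checks out as simple arithmetic (counting every hour, waking and sleeping, since the brain consolidates during sleep too):

```python
# 18 years of round-the-clock "training time", in hours.
years = 18
hours = years * 365 * 24
print(hours)  # 157680, i.e. roughly 160,000 hours
```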
As far as training for a specific task, all that training on other matters kicks in to help the human learn or accomplish a “novel” task more rapidly, for example, knowing how to read and interpret the instructions for that task, knowing how to move your appendages and the expected consequences of your physical interactions with a material object. You’re certainly not taking a fetus with a blank slate and getting it to accomplish much at all.
It's like you're seeing into a machine's imagination. Look at the 8th-layer images of the pitcher, or the gorillas with improved prior, for instance. They're very close to layout sketches an artist might use to block out a painting or photograph before beginning the work.
Unreal. Strong AI is not as far off as we think. 15 years. Maybe 20.
I think the current state of deep neural network designs, and the research funding pouring into generalizing the simple neural nets we're working with now, suggests that we are, in fact, pulling them out of a hat.
Right now we're just discarding all the ones that are defective, at a stupendously high rate as we train neural nets.
I can't speak to what method would generate the first strong AI, but I suspect the overall process - if not the details - will be similar. Training, discarding, training, discarding, testing, and so on. And the first truly strong AI will likely just be the first random assemblage of parts that passes those tests.
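The train-discard-test loop described above can be caricatured as generate-and-test: propose random candidates, run each against a battery of tests, discard failures, and accept the first one that passes. The candidate encoding and the test are made up for illustration.

```python
import random

def passes_tests(candidate):
    """Stand-in for the battery of tests a candidate must survive.
    Toy version: a random bit-string must nearly match a target pattern."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    matches = sum(c == t for c, t in zip(candidate, target))
    return matches >= 7

random.seed(2)
attempts = 0
while True:
    attempts += 1
    # A "random assemblage of parts", here just 8 random bits.
    candidate = [random.randint(0, 1) for _ in range(8)]
    if passes_tests(candidate):
        break  # the first assemblage that passes is the one we keep
print("accepted a candidate after", attempts, "attempts")
```

Real training isn't blind search, of course (gradient descent biases the proposals), but the accept/discard structure is the same.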