As long as we're criticizing the analogies in the discussion (rather than the actual arguments), I'd say the hours spent do not have a consistent quality vis-a-vis solving hard problems. The fact that there are more absolute hours available does not mean that there are more hours available for solving hard AI problems. There are very likely fewer. And there has been virtually NO progress on the hard AI front.
> On the high level, there is no "chess AI", "go AI", "image classification AI" and "dexterous manipulation AI". These are all sides of the same coin, that gets significantly better every year.
On a practical level, this is not true. There are different algorithms, different architectures, and different hyperparameters required for each of these problems, often for each subdomain within each of these problems, and often for each specific instance of these problems. It's difficult to draw any kind of holistic picture that combines all of the individual advances in each of these problem instances; that's why progress in AI is so hard to measure, and why a statement like "each of these toy problems...brings us closer and closer to solving the 'real problems'" is probably too coarse-grained to be fair as well.
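To make that concrete, here's a rough sketch, with entirely made-up names and values, of how little actually overlaps in practice between two of those "sides of the same coin":

```python
# Hypothetical, heavily simplified "configs" for two of those domains.
# Every name and value here is made up for illustration.

image_classification = {
    "optimizer": "stochastic gradient descent",
    "architecture": "deep convolutional net (residual blocks)",
    "objective": "cross-entropy over labeled images",
    "key_knobs": "depth, learning-rate schedule, data augmentation",
    "training_signal": "millions of human-labeled examples",
}

go_playing = {
    "optimizer": "stochastic gradient descent",
    "architecture": "policy/value net wrapped in Monte Carlo tree search",
    "objective": "predict self-play outcomes and move choices",
    "key_knobs": "simulations per move, exploration constant, self-play volume",
    "training_signal": "game results generated by self-play",
}

# What actually transfers unchanged is the generic training machinery;
# nearly every design decision above it is domain-specific.
shared = {k for k in image_classification if image_classification[k] == go_playing[k]}
print("Transfers unchanged:", shared or "nothing")
```

The overlap is mostly "gradient descent on a neural net"; everything above that layer has to be re-engineered per problem.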
And yet no serious research is being put into hard AI. Just like five years ago. Or ten. Or twenty.
I spent my childhood reading the computer books of the 60s, 70s, and 80s, where full, general AI seemed to be just around the corner. Obviously this problem is far harder than it seemed to be.
But the (almost universal) lack of trying is extremely disheartening.
It's very difficult to come up with an objective metric or benchmark for general AI that can't be gamed, or that won't turn out to be disappointingly easy. It makes sense that most research would go in the direction of tasks which are easily quantifiable. Hazier benchmarks like the Turing test are possibly better, but that one in particular isn't so good unless it's enhanced (I'd say a one-hour conversation with an AI posing as someone with a graduate-student-level understanding of a field of science or art I know something about would be adequate proof, but maybe I'm being naive).
There are some surprisingly weak arguments in the text. It's correct not to treat computational resources as constant, but to treat them as unimportant or negligible is just as bad.
Computational resources are already becoming prohibitive, with only a few institutions producing state-of-the-art models at high financial cost. If the goal is AGI, this might get exponentially worse. Intelligence needs to take resource consumption into account. The models we produce aren't even close to high-level reasoning, and we're already consuming significantly more energy than humans or animals; something is wrong.
The scale argument isn't great either, because deep learning is running into the inverse of the classical AI problem. Instead of having to program all logic explicitly, we now have to formulate every individual problem as training data. This doesn't scale either. If an AI gets attacked by a wild animal, the solution can't be to first produce 10k pictures of mauled victims; intelligence includes being able to reason about things in the absence of data. We can't have autonomous cars constantly running into things until we provide huge amounts of data for every problem; that does not scale.
As someone who has been into AI since the mid-90s, I continue to be deeply disappointed with the overall rate of progress, including all of the machine-learning and stats-based recasting of what's meant by AI.
The big fundamental problems are all the same, and we don't appear to have made much progress on them. We have figured out very clever ways of teaching extremely useful new tricks to computers. I don't mean to denigrate the value of that work; it just resembles regular programming, done by different means.
Whatever your definition of intelligence, there's little to no generalizability of what's happening right now.
Interesting to see the number of "winters" AI has gone through (analogous, to a lesser extent, to VR).
I see increasing compute power, larger training sets (the internet, etc.), and increasingly refined algorithms all pouring into making the stuff we had decades ago more accurate and faster. But we still have nothing at all like human intelligence. We can solve little sub-problems pretty well, though.
I theorize that we are solving problems in slightly the wrong way. For example, we often focus on totally abstract input like a set of pixels, but in reality our brains take a more gestalt / semantic approach that handles higher-level concepts rather than a series of very small inputs (although we do preprocess those inputs, i.e. rays of light, to produce higher-level concepts). In other words, we try to map input to output at too granular a level.
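As a toy illustration of what I mean (everything here is a made-up placeholder, not a real vision system):

```python
# Toy contrast between the two framings. All functions are hypothetical stubs.

def classify_from_pixels(pixels):
    # Mainstream framing: map a flat vector of tiny inputs straight to a label.
    return "cat" if sum(pixels) > 0.5 * len(pixels) else "not cat"

def extract_concepts(pixels):
    # Hypothetical preprocessing step: turn raw values into higher-level
    # concepts, the way perception hands us "edges" and "objects" rather
    # than raw photon counts.
    return {
        "bright_region": sum(pixels) > 0.5 * len(pixels),
        "symmetric": pixels == pixels[::-1],
    }

def classify_from_concepts(concepts):
    # Reasoning happens over a handful of semantic features,
    # not over thousands of individual pixel values.
    return "cat" if concepts["bright_region"] and concepts["symmetric"] else "not cat"

pixels = [0.9, 0.2, 0.8, 0.8, 0.2, 0.9]
print(classify_from_pixels(pixels))
print(classify_from_concepts(extract_concepts(pixels)))
```

The point isn't the toy classifier, it's where the mapping happens: directly from raw values, or over a small semantic layer in between.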
I wonder though if there will be a radical rethinking of AI algorithms at some point? I tend to always be of the view that "X is a solved problem / no room for improvement in X" is BS, no matter how many people have refined a field over any period of time. That might be "naive" with regards to AI, but history has often shown that impossible is not a fact, just a challenge. :)
I disagree strongly with this view that hard problems need to be "conceptually solved" in a vacuum first. That experimentation and resources don't matter.
Computing increases have enabled a ton more experimentation with AI. Experiments that would have taken years to do can be done in hours now. Experimentation lets scientists develop intuition about their algorithms, get a sense of what will and won't work, and build a better mental model of the problem.
Not to mention the benefit of having 10x or more funding and researchers working on it now that results have been shown.
It's a common myth that there has been little or no innovation in NNs and that it's just a matter of computing power. Taking the best algorithms of the 90s and running them on modern hardware would still give you poor results. For instance, most of the 90s research was on very shallow nets; they didn't know how to properly train deep ones.
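A minimal numpy sketch of that point (my own illustration, not a claim about any particular 90s paper): push a gradient back through many layers of sigmoid units with naive small-weight initialization and it vanishes, while the later ReLU-plus-scaled-initialization recipe keeps it alive.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 100, 30

def grad_norm_through(depth, act, act_grad, scale):
    """Backpropagate a random gradient vector through `depth` layers and
    report how much of its magnitude survives."""
    h = rng.standard_normal(width)
    pres, Ws = [], []
    for _ in range(depth):                       # forward pass
        W = rng.standard_normal((width, width)) * scale
        pre = W @ h
        h = act(pre)
        pres.append(pre)
        Ws.append(W)
    g = rng.standard_normal(width)               # backward pass
    for W, pre in zip(reversed(Ws), reversed(pres)):
        g = W.T @ (act_grad(pre) * g)
    return np.linalg.norm(g)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
sigmoid_grad = lambda z: sigmoid(z) * (1 - sigmoid(z))
relu = lambda z: np.maximum(0.0, z)
relu_grad = lambda z: (z > 0).astype(float)

# 90s-style recipe: sigmoid units, small random weights -> gradient vanishes.
print(grad_norm_through(depth, sigmoid, sigmoid_grad, scale=1.0 / np.sqrt(width)))
# Later recipe: ReLU units, He-style init -> gradient magnitude survives.
print(grad_norm_through(depth, relu, relu_grad, scale=np.sqrt(2.0 / width)))
```

That gap is algorithmic, not hardware: faster machines alone wouldn't have rescued the first setup.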
> "any "AI" is better than none no matter how poor it is"
You don't have to disagree. The OP said:
> "it doesn't really matter all that much how good the AI is"
I think you're both correct and you bring up an interesting point. As long as your AI is "good enough" to replace what would've taken more resources to do otherwise, it's a win. I'm not sure if that's "half" as you stated, but I bet it depends on the task. If the task is saving a few seconds to query something, then I'd agree, half of the time wrong isn't a savings. But, if you have less than half a chance at saving thousands of dollars or hundreds of hours if it works correctly, then that may be chalked up as a win.
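To make that concrete, the back-of-the-envelope expected-value arithmetic looks something like this (all numbers invented for illustration):

```python
# Expected value of deploying an imperfect AI. Numbers are made up.

def expected_value(p_correct, benefit_when_right, cost_when_wrong):
    return p_correct * benefit_when_right - (1 - p_correct) * cost_when_wrong

# Saving a few seconds on a query: being wrong half the time wastes more
# time than it saves, so it's a net loss.
print(expected_value(0.5, benefit_when_right=5, cost_when_wrong=30))        # seconds

# A 40%-accurate system that saves $10,000 when it's right and costs $500
# of cleanup when it's wrong is still clearly worth running.
print(expected_value(0.4, benefit_when_right=10_000, cost_when_wrong=500))  # dollars
```

So "good enough" really does depend on the ratio of payoff to cleanup cost, not on accuracy alone.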
Even if it is right, it is still predicated on a massive "critical mass" of knowledge and insight about reality that such an AI would have to spend a long time accumulating. It also doesn't propose solutions to performance scaling limitations.
Fair point. However one could argue that symbolic AI is a harder problem, perhaps awaiting a theoretical breakthrough. And that the pace of progress in this area has slowed at least in part due to ML hype sucking all the air out of the room (along with the best talent).
My point is that there IS no serious "pure" AI research these days. Your image of lots of pure AI researchers wasting their time is a fantasy-- those people don't exist. People work on applications.