
An AI "winter" is a long period in which [edit: funding is cut because...] researchers are in disbelief about having a path to real intelligence. I think that is not the case at this time, because we have (or approaching) adequate tools to rationally dismiss that disbelief. The current AI "spring" has brought back the belief that connectionism may by the ultimate tool to explain the human brain. I mean you can't deny that DL models of vision look eerily like the early stages in visual processing in the brain (which is a very large part of it). Even if DL researchers lose their path in search for "true AI", the neuroscientists can keep probing the blueprint to find new clues to its intelligence. Even AI companies are starting to create plausible models that link to biology. So at this time, it's unlikely that progress will be abandoned any time soon.

E.g. https://arxiv.org/abs/1610.00161 https://arxiv.org/abs/1706.04698 https://www.ncbi.nlm.nih.gov/pubmed/28095195




"AI winter" is a pretty official term -- it even has a Wikipedia entry. I actually don't think neural networks are just hype, any more than previous AI technology was just hype. A lot of great things have come out of AI research -- logic programming, expert systems, even genetic algorithms -- but each in turn failed to deliver on the unrealistic expectations that were built up around it. We see pictures of ANNs hallucinating, and they seem to promise machines that can dream. But in reality an ANN is just a statistical model trained on a large corpus of inputs belonging to a particular domain. They're profoundly effective at what they're good at, and it's cool to see them pick apart a picture and tell you what's in it.

However, I've been hearing crazy talk, especially in the popular press, about how strong AI is just around the corner, how AIs will soon replace people, what will we all do when machines can do everything for us, and so on. It's been repeated on a ten to fifteen year interval since the dawn of the computer age, it's no more true than it was the last time, and we should know better by now!


Yeah, it seems pretty unlikely to me that there is an AI Winter coming, given that we now have programs that look at a photograph and say "Woman wearing a hat, sitting on a bar stool and drinking wine", when 5 years ago such capabilities were unfathomable. The kind of capabilities currently being demonstrated have wide-reaching applications and will take a decade to filter into the rest of the economy, even if you pessimistically assume that all research from now on reaches a complete standstill.

The first AI winter came after we realized that the AI of the time, the high level logic, reasoning and planning algorithms we had implemented, were useless in the face of the fuzziness of the real world. Basically we had tried to skip straight to modeling our own intellect, without bothering to first model the reptile brain that supplies it with a model of the world on which to operate. Being able to make a plan to ferry a wolf, sheep and cabbage across the river in a tiny boat without any of them getting eaten doesn't help much if you're unable to tell apart a wolf, sheep and cabbage, let alone steer a boat.
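
To show just how small that classic planning problem is, here is a minimal sketch of the search-based approach: breadth-first search over river-crossing states (plain Python; the state encoding and names are my own illustration, not any particular textbook's):

    # Hypothetical sketch: GOFAI-style planning as search over world states.
    from collections import deque

    ITEMS = {"wolf", "sheep", "cabbage"}

    def safe(bank):
        # The wolf eats the sheep, and the sheep eats the cabbage,
        # whenever the farmer isn't there to stop them.
        return not ({"wolf", "sheep"} <= bank or {"sheep", "cabbage"} <= bank)

    def solve():
        # A state is (items on the left bank, which bank the farmer is on).
        start = (frozenset(ITEMS), "left")
        goal = (frozenset(), "right")
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            (left, farmer), plan = queue.popleft()
            if (left, farmer) == goal:
                return plan
            here = left if farmer == "left" else ITEMS - left
            for cargo in list(here) + [None]:  # carry one item, or cross alone
                new_left = set(left)
                if cargo:
                    if farmer == "left":
                        new_left.remove(cargo)  # carried left -> right
                    else:
                        new_left.add(cargo)     # carried right -> left
                other = "right" if farmer == "left" else "left"
                # The bank the farmer leaves behind must stay safe.
                unattended = new_left if other == "right" else ITEMS - new_left
                state = (frozenset(new_left), other)
                if safe(unattended) and state not in seen:
                    seen.add(state)
                    queue.append((state, plan + [(cargo or "nothing", other)]))

    print(solve())  # a 7-step plan: sheep over first, and so on

Ten states and a queue. Recognizing a sheep in a photo turned out to be the hard part.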

That's what makes me excited about our recent advances in ML. Finally, we are getting around to modeling the lower levels of our cognitive system, the fuzzy pattern recognition part that supplies our consciousness with something recognizable to reason about, and gives us learned skills to perform in the world.

We still don't know how to wire all that up. Maybe a single ML model can achieve AGI if it is adaptable enough in its architecture. Maybe a group of specialized ML models need to make up subsystems for a centralized AGI ML-model (like a human's visual and language centers). Maybe we need several middle layers to aggregate and coordinate the submodules before they hook into the central unit. Maybe we can even use the logic, planning or expert system approach from before the AI winter for the central "consciousness" unit. Who knows?

But to me it feels like we've finally got one of the most important building blocks to work with in modern ML. Maybe it's the only one we'll need, maybe it's only a step of the way. But the fact that we haven't managed, in a handful of years, to go from "model a corner of a reptile brain" to "model a full human brain" is no reason to call this a failure or predict another winter just yet. We've got a great new building block, and all we've really done with it so far is basically prod it with a stick to see what it can do on its own. Maybe figuring out the next steps toward AGI will take us through another winter. But the advances we've made with ML have convinced me that we'll get there eventually, and that when we do, ML will be part of it to some extent. Frankly I'm super excited just to see people try.


Because when you start confusing pattern recognition and neural net training with "intelligence" and "learning" and "consciousness", everybody who doesn't know enough about the technology will get the wrong expectations. It's even worse that people working on AI are having these wrong expectations too. So yes, a winter is coming. Besides that, the AI we have today is used all wrong: to spy on people even more, and to consolidate central services and the big players.

You could actually make a reasonable argument for the opposite of a winter, that we are heading into an unprecedented AI boom.

The article's main argument for a winter is that deep learning is becoming played out. But this misses the once-in-history event of computer hardware reaching approximate parity with, and overtaking, the computing power of the human brain. I remember writing about that for my university entrance exam 35 years ago, and having followed things a bit since, the time is roughly now. You can make a reasonable argument that the computational equivalent of the brain is about 100 TFLOPS, which was hard to access or simply unavailable in the past, but you can now rent a 180 TFLOP TPU from Google for $6.50/hr. While the current algorithms may be limited, there are probably going to be loads of bright people trying new stuff on the newly powerful hardware, perhaps including the author's PVM, and some of that will likely get interesting results.
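
To spell out that back-of-envelope arithmetic (taking the 100 TFLOPS brain estimate and the $6.50/hr TPU price above at face value; both are rough figures, not mine):

    # Sketch of the parity claim, using the figures quoted above.
    brain_tflops = 100            # rough computational-equivalence estimate
    tpu_tflops = 180              # rentable cloud TPU
    tpu_dollars_per_hour = 6.50

    brain_equivalents = tpu_tflops / brain_tflops             # 1.8
    dollars_per_brain_hour = tpu_dollars_per_hour / brain_equivalents
    print(f"~${dollars_per_brain_hour:.2f}/hr per brain-equivalent")  # ~$3.61

A few dollars an hour for brain-scale raw compute is exactly the kind of threshold that was science fiction for most of the field's history.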


Why do you think there'll be an AI winter? And in what form — stagnation of neural-network based technologies, a change in the overall paradigm of learning-from-data, or something else altogether?

If there is a winter, I think it's going to be very different than in the past. What people used to call a "winter" was a drying-out of the funding for AI research. However, in the past, this research was funded primarily by public money, and specifically by defense budgets. And it was cut when scientists failed to produce the army of super robots the generals thought they were promised. In the present however, there is a lot of money put into AI research by the industry - Google, Facebook, Microsoft, Amazon and IBM, as well as many other, smaller companies.

The amount of investment in AI by those companies is simply unprecedented, and so is the number of people who, drawn by this river of dosh like moths to a flame, are pursuing AI as a career (even if that only means the statistical machine learning side of AI that those companies invest in).

What this means is that the current branch of AI has become "too big to fail". And that has nothing to do with how successful it is. As long as it can be monetised and the industry leaders can show some return on their investment, "AI" will keep growing.

A winter, if it comes, will be a winter of knowledge, not of funds. We will end up with so much unusable, meaningless, laughably bad "research" that any significant contribution to knowledge will simply be buried under a ton of rubbish, never to be found.

So the money will keep flowing in. But what will come out the other end will be utter nonsense.


Why would you think there is an AI Winter upon us?? From my subjective viewpoint, I don't feel like I've seen any evidence of that. If anything, AI (in a general sense) seems to be hotter than ever.

What evidence is there for the notion of a developing (or existent) AI Winter at the current moment?


> will make AI winter a continuing reality.

This will be a good thing. Theoretical development thrived under the AI “winter”.

I am going to throw the rooster into the hen house and say it: it seems that a large part of AI is thinking up new functions with parameters to be tuned. These parameters are called something exotic, and the words "neural" and "network" are used liberally.
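
To caricature the point in code: much of the field boils down to something like the sketch below, just with far fancier functions (plain NumPy; the toy linear model and the numbers are my own illustration):

    # Hypothetical toy example: "a function with parameters to be tuned".
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)   # data to fit

    w, b = 0.0, 0.0       # the parameters with the exotic names
    lr = 0.1
    for _ in range(200):  # "learning": gradient descent on squared error
        pred = w * x + b
        w -= lr * 2 * np.mean((pred - y) * x)
        b -= lr * 2 * np.mean(pred - y)

    print(w, b)  # close to the true 3.0 and 1.0

Swap the linear function for a deep stack of them with nonlinearities in between, and you have the "neural network".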


I would not call it the “AI winter”. If you look at what people have called AI over time, the definition and the approaches have evolved (sometimes drastically) over time.

Instead of being stuck on the fact that deep learning and the current methods seem to have hit a limit, I am actually excited that this opens the door for experimenting with other approaches that may or may not build on top of what we call AI today.


I completely agree. Deep learning has been doing this most recently, and I'd argue that the very existence of AI winters is really just the deflation of expectations that were always overly ambitious.

My own way to look at AI winters is as a triumph of Scruffy pragmatism over Neat delusions. Intelligence is unimaginably complicated; there is no reason to believe that silver-bullet algorithms even exist, let alone that the latest fad is it.


AI winter is more about the outlook of the research. Nobody can deny there are huge untapped opportunities in the applications.

It's difficult to say. We're certainly making impressive advances now, but this is not the first time that AI has had a period of rapid improvement. The first and second AI winters were both preceded by periods of intense optimism [1].

I personally believe in the long-term vision of AI/ML/RL. Progress will continue to be made in the long run. However, we don't know what we don't know. We may hit a wall in a few years that takes decades to overcome. On the other hand, perhaps the current rate of innovation will continue for the foreseeable future. Time will tell.

[1]: https://en.wikipedia.org/wiki/AI_winter


I think that depends on what your expectations are, and what you mean by another AI winter.

We're just scratching the surface of what's possible with the current state of the art. Even if there are no major advances or breakthroughs in the near future, LLMs and associated technologies are already useful in many use cases, or close enough to useful that engineering rather than science will be sufficient to overcome many (though not all) of the shortcomings of current AI models. Never mind grandiose claims about AGI; there's enough utility to be gotten out of the limited LLMs we have today to keep engineers and entrepreneurs busy for years to come.


Yeah, with all the people making ridiculously overstated claims about what deep learning can do I'm pretty sure we've got another AI winter coming.

And it's a shame. I really want to see more genuine progress in AI research; I really want to understand _what consciousness is_. But this boom/bust cycle that happens every time there's a tiny bit of real progress is a painfully inefficient way of getting there.


There's not going to be another AI winter. The past AI winters occurred because people drastically underestimated how difficult AI would be. MIT (Seymour Papert specifically) thought computer vision could be solved by some graduate students as a summer project. Same story for other AI problems, e.g. NLP, speech recognition, general reasoning and inference, etc.

Once the difficulty of those problems started to be understood, of course funding dried up. Industry is focused on short-term ROI, so it's hard to get funding if the profits won't be seen for 50 years.

The difference is that now there's an entire string of profitable markets for solving near-term AI problems. AI is fundamental to the business models of some of the world's largest companies (e.g. Google). There's basically zero risk of an AI winter when we're on the verge of advanced robotics, Watson-style Q&A systems, self-driving cars, large-scale genomics, etc.

An AGI winter is another story, but most AI work isn't really focused on serious, full-scale AGI right now anyway. That winter never ended and is ongoing. Everyone is focused on incremental AI improvements because no one really knows what's required for full AGI, and are in the meantime hoping they'll hit on it while building on known techniques like deep learning, comp neurosci, etc.

tl;dr: As long as investors continue to see marketable products for new AI developments anywhere in the next ~10 years, funding won't dry up again.


Assuming we're all going to be deep learning programmers is more than a bit foolhardy. I think what's really relevant to consider is that AI winters can and do happen [1]. I would not disagree that deep learning has done some amazing things; what I would say is that it does have limitations.

What causes AI winters is this: an advance such as deep learning can be applied to new problems, which leads to increased interest. And while this new thing is really good at a subset of problems and impresses the public, of course it can't displace humans at everything and naturally has its limitations.

So funding pours in, everyone gets hyped, and then those natural limits are (re)discovered and everyone turns against AI research. Of course many people knew the limitations all along, but the dream is gone, and so is a lot of funding, until the next thing comes along.

This is probably natural to a lot of fields but AI just seems more prone to these boom and bust cycles because it's really exciting stuff.

[1] https://en.wikipedia.org/wiki/AI_winter


I think the most important question is what 'winter' really means in this context. New concepts in AI tend to follow the hype cycle, so the disillusionment will certainly come. For one thing, the general public sees the amazing things Tesla or Google do with deep learning and extrapolates, thinking we're on the brink of creating artificial general intelligence. The disappointment will be even bigger if DL fails to deliver on its promises, like self-driving cars.

Of course the situation now is different than it was 30 years ago, because AI has proved to be effective in many areas, so the research won't just stop. The way I understand this 'AI winter' is that deep learning might be the current local maximum of AI techniques and will soon reach a dead end where tweaking neural networks won't lead to any real progress.


Doubt it. I've seen a couple instances of experienced devs spooling up some interesting and objectively valuable machine learning systems, going from a lunchroom chat to production code in a very short amount of time. If there is a winter, it will be the result of AI becoming an off-the-shelf commodity; on the other hand, it should be easier to justify investments in research, given the demonstrated value of ML today.

The AI winter of the 80s was caused by over-hype, hardware companies whose markets were too small and whose products quickly fell behind the Moore's Law curve, and funding cuts by the US government.

