No, AI winter was when the AI people oversold the tech, then failed to deliver, and lost their funding. This is well documented in histories of the field.
Yeah, but the AI winter was mostly a period of low funding for AI due to disappointment at the research of the '70s and '80s not producing useful products. No one then was expecting human-level intelligence, but they were hoping to get some return on investment.
The term AI winter usually refers to the late-80s collapse in AI funding after the technology of the time (particularly expert systems and Japan's "fifth generation computer" project) failed spectacularly to live up to the hype.
Before that, there were other downturns in AI funding after early technology failed to pan out. One example is the late-60s effort at machine translation.
In fact, AI was essentially paused for years during the AI Winter.
That's completely false. AI winter wasn't about AI research being "essentially paused" but about AI startups becoming essentially non-fundable. It was backlash to the intense hype that surrounded the category.
If there is a winter, I think it's going to be very different from those in the past. What people used to call a "winter" was a drying up of funding for AI research. However, in the past, this research was funded primarily by public money, and specifically by defense budgets. And it was cut when scientists failed to produce the army of super robots the generals thought they were promised. In the present, by contrast, there is a lot of money put into AI research by industry: Google, Facebook, Microsoft, Amazon, and IBM, as well as many other, smaller companies.
The amount of investment in AI by those companies is simply unprecedented, and so is the number of people who, drawn by this river of dosh like moths to a flame, are pursuing AI as a career (even if that only means the statistical machine learning side of AI that those companies invest in).
What this means is that the current branch of AI has become "too big to fail". And that has nothing to do with how successful it is. As long as it can be monetised and the industry leaders can show some return on their investment, "AI" will keep growing.
A winter, if it comes, will be a winter of knowledge, not of funds. We will end up with so much unusable, meaningless, laughably bad "research" that any significant contribution to knowledge will simply be buried under a ton of rubbish, never to be found.
So the money will keep flowing in. But what will come out the other end will be utter nonsense.
I don't think you can claim there was any sort of "AI winter" before the expert systems hype collapsed in the late 80s. Quite the contrary: ARPA (later DARPA) spent prodigiously on various AI labs and companies, and in response Japan launched the 5th Generation project. Other countries also invested heavily: France, with its Centre Mondial d'Informatique (though AI was only part of it) and its many universities; the UK, with Oxbridge, Nottingham, etc. In the US, government-industry partnerships like MCC spent heavily on the field.
BTW "AI Winter" was a term coined (ironically by Marvin Minsky) at a panel at AAAI-84 (IIRC in Austin). The term echoed "Nuclear Winter", a theory from the late 70s or early 80s. So it's a little weird to apply it to an earlier episode, though as I say that earlier episode didn't exist.
Also, "AI Winter" was specifically addressing the commercial investment in AI technologies; research continued (e.g. DARPA continued to spend).
The term AI winter IMHO misrepresents the true state of affairs. Since day one, the field's modus operandi has been one of overpromising and underdelivering. That has not changed. We see how Alphabet plows billions of dollars into DeepMind [1] and all it gets in return is a series of game-playing bots. If unproductive activities are defunded, that creates an opportunity for productive ones to thrive. "Winter" is not a suitable word to describe this.
You need to understand "AI winter" as referring to academic and (to a lesser extent) private funding of AI research. The field goes through booms of optimism ("We'll have self-driving cars in 5 years! Here's a hundred million") followed by pessimism ("It's been 15 years and all we have is driver-assist features; we're going to fund grants in more practical areas now"). The pessimism is followed by a drying up of research funds for AI, even though there is no drying up of research in general. This winter is very real if you are trying to get a job in AI research, even though its impact is quite limited, since most CS research is not and has never been AI-focused. I would say that CS in general is a very fad-driven field, so this phenomenon is to be expected.
I did not post it trying to make any prediction; I just found it curious that the term “AI Winter” was coined so long ago. The article also highlights important historical moments in AI development and how investors appear to overreact to AI results, both good and bad.
Doubt it. I've seen a couple instances of experienced devs spooling up some interesting and objectively valuable machine learning systems, going from a lunchroom chat to production code in a very short amount of time. If there is a winter, it will be the result of AI becoming an off-the-shelf commodity; on the other hand, it should be easier to justify investments in research, given the demonstrated value of ML today.
The AI winter of the 80s was caused by over-hype, hardware companies whose markets were too small and whose products quickly fell behind the Moore's Law curve, and funding cuts by the US government.
"AI winter" was a political phenomenon that happened because the AI applications didn't hold water to the hype, and all of the gullible people that invested on them got burned out by the disparity.
We are very clearly on that same path again, which leads to the conclusion that another winter is coming. But even the fact that people are talking about it is evidence it's not here yet, and, as always with political phenomena, there's no guarantee history will repeat.
Anyway, none of it means people will stop applying their knowledge or studying AI. The entire thing hinges on funding and PR, and the world is not entirely controlled by those two.
There's not going to be another AI winter. The past AI winters occurred because people drastically underestimated how difficult AI would be. MIT (Seymour Papert specifically) thought computer vision could be solved by some graduate students as a summer project. Same story for other AI problems, e.g. NLP, speech recognition, general reasoning and inference, etc.
Once the difficulty of those problems started to be understood, of course funding dried up. Industry is focused on short-term ROI, so it's hard to get funding if the profits won't be seen for 50 years.
The difference is that now there's an entire string of profitable markets for solving near-term AI problems. AI is fundamental to the business models of some of the world's largest companies (e.g. Google). There's basically zero risk of an AI winter when we're on the verge of advanced robotics, Watson-style Q&A systems, self-driving cars, large-scale genomics, etc.
An AGI winter is another story, but that winter never ended and is ongoing: most AI work isn't really focused on serious, full-scale AGI right now anyway. Everyone is focused on incremental AI improvements because no one really knows what's required for full AGI, and in the meantime they're hoping to hit on it while building on known techniques like deep learning, computational neuroscience, etc.
tl;dr: As long as investors continue to see marketable products coming out of new AI developments anywhere in the next ~10 years, funding won't dry up again.
The previous AI winters followed speculative investments (in both public research and industry) made with the expectation that they might result in profitable technologies. And that didn't happen: as others have noted, the AI of those earlier eras (expert systems and whatever else) didn't produce useful enough technology to sustain its funding. The technologies developed didn't work sufficiently well to achieve widespread adoption in industry; there were some use cases, but the conclusion was "useful in theory but not in practice".
The current difference is that the technologies are actually useful right now. It's not about promised or expected technologies of tomorrow, but about what we have already researched: known capabilities that need implementation, adoption, and lots of development work to apply them to lots and lots of particular use cases. Even if the core research hits a dead end tomorrow and stops producing any meaningful progress for the next 10 or 20 years, the obvious applications of neural-networks-as-we're-teaching-them-in-2018 work sufficiently well, and are useful enough, to deploy in all kinds of industrial applications. The demand alone is sufficient to employ every current ML practitioner and student even in the absence of basic research funding, so a slump is not plausible.
I see the article just as an effort to highlight Hinton and his AI research. I don't think he would say he had done it all; Toronto Life is the one saying that.
I have just finished reading this interesting wiki article titled "AI winter", which defines it as a period of reduced funding and interest in AI research: