
No worries about precision, I wasn't trying to tell you off about the use of terminology. I assumed you meant "symbolic AI", so I made sure to clarify my assumption to avoid confusion. I'm also not a native English speaker; my mother tongue is Greek. My second language is French though :)

Yes, "GOFAI" overpromised and underdelivered and that was a major reason for the two AI winters that essentially destroyed the field by freezing funding and shrinking research positions and output.

Personally, I'm neither nostalgic for older approaches nor dismissive of modern ones. The important thing is to have a clear understanding of the capabilities available, regardless of approach. It's obvious to me that older systems could do things that modern systems can't (principally, reasoning and knowledge representation), just as modern systems can do things that older systems couldn't (learning). However, there are approaches that bridge the gap, such as symbolic machine learning, like the approaches I study that learn logic programs from examples using theorem-proving techniques. There is also, of course, continued research in other branches of symbolic AI, like planning and SAT solvers, which seem to have made great progress in recent years. I think the worst that can happen now is to nip such research in the bud by denying it funding just because it's not deep learning.
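To make the "learning logic programs from examples" idea concrete, here is a minimal sketch in Python. All predicates and data are hypothetical, and real ILP systems search far richer hypothesis spaces using theorem-proving techniques rather than this brute-force enumeration:

    # Toy inductive logic programming: learn a rule for grandparent/2 from
    # examples by searching a tiny space of parent-chain clauses.
    parent = {("ann", "bob"), ("bob", "cat"), ("cat", "dan")}  # background facts
    positives = {("ann", "cat"), ("bob", "dan")}   # grandparent(X, Z) holds
    negatives = {("ann", "bob"), ("cat", "ann")}   # grandparent(X, Z) fails

    def derives(chain_len, x, z):
        """Does a chain of `chain_len` parent/2 atoms connect x to z?"""
        frontier = {x}
        for _ in range(chain_len):
            frontier = {b for (a, b) in parent if a in frontier}
        return z in frontier

    # Search the hypothesis space: accept the first clause that covers all
    # positive examples and no negative ones.
    for n in (1, 2, 3):
        if (all(derives(n, x, z) for x, z in positives)
                and not any(derives(n, x, z) for x, z in negatives)):
            body = ", ".join(f"parent(V{i}, V{i+1})" for i in range(n))
            print(f"learned: grandparent(V0, V{n}) :- {body}")
            break

The clause recovered here, grandparent(V0, V2) :- parent(V0, V1), parent(V1, V2), is learned purely from positive/negative examples plus background knowledge, which is the basic ILP setting.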

Gary Marcus' article quotes Emily Bender about how overpromising, this time by the deep learning community, "sucks the oxygen out of the room" for other kinds of research. This is apposite. Research can't become a monoculture; otherwise the ability to innovate will disappear. For innovation, there must be diversity of ideas. The risk I see right now is that such diversity will be lost and that, in the long run, progress in machine learning will stall. Throwing out everything that was learned in the first 50 years of AI will not help anyone avoid the mistakes of the past, for sure.




Yes, sorry for the confusion: I was specifically referring to symbolic AI, not the "hybrids" between symbolic approaches and machine learning. My education is in a weird mix of French and English (even in a French university in Montréal, domain-specific terminology was very spotty in French, but just present enough to be confusing lol), so I'm not always very precise! Even "deep learning" is a term I'm usually not keen on using since it's so vague, but it was already 3am for me and I didn't want my comment to be longer than it was haha.

I totally agree that there is room, and even a need, for more symbolic systems within deep learning, but I'd argue that you can't, at this point, do away with the "deep layered" approaches.

The examples you cited are very important achievements, especially the very early ones, but I think they also show how limited those systems were in a lot of ways. For example, expert systems found a niche, but they still had a very hard time with edge cases and with learning, which imo is essential to intelligence. More traditional logic-based algos can vastly outperform, say, neural networks in a lot of situations, but only when the problem space is in a sense "known". Plus, the GOFAI school used to promise a lot, lot more than what those performant but usually hyper-specialized systems ended up delivering.

I see that my comment could come off as disrespectful toward what was accomplished before. But it really isn't!

It's just that I don't agree with the "nostalgics" who tend to dismiss the modern approaches and idealize some sort of symbolic vision of intelligence. Those aren't common, and most of my "old school" professors were just as excited by deep learning. But there is a vocal minority, imo, who view the past through rose-tinted glasses, when I don't think it's controversial that there is no real way for traditional ("pure") symbolic AI to achieve general intelligence, or to outperform deep learning with finely tuned, hand-crafted logic.


The idea that symbolic AI lost is uninformed. Symbolic AI essentially boils down to different kinds of modeling and constraint-solving systems, which are very much in use today: linear programming, SMT solvers, Datalog, etc.
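As a concrete illustration of that claim, here is a minimal SMT example, assuming the z3-solver Python bindings (the constraints themselves are made up):

    # Constraint solving with an SMT solver: declare variables, assert
    # constraints, and ask for a model. Note the answer is exactly correct
    # by construction, not correct with some probability.
    from z3 import Int, Solver, sat

    x, y = Int("x"), Int("y")
    s = Solver()
    s.add(x > 0, y > 0)             # both variables positive
    s.add(x + y == 9, 2 * x == y)   # two arithmetic constraints

    if s.check() == sat:
        m = s.model()
        print(m[x], m[y])           # -> 3 6

That guaranteed-correct-or-unsatisfiable behavior is exactly the property contrasted with neural approaches below.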

Here is where symbolic AI lost: anything where you do not have a formal criterion of correctness (or goal) cannot be handled well by symbolic AI. For example, perception problems like vision, audio, robot locomotion, or natural language. It is very hard to encode such problems in a formal language, which in turn means symbolic AI is bad at these kinds of problems. In contrast, deep learning has won because it is good at exactly this set of things. Throw a symbolic problem at a deep neural network and it fails in unexpected ways (yes, I have read about neural networks that solve SAT problems, and no, a percentage accuracy is not good enough in domains where correctness is paramount).

The saying goes, anything that becomes common enough is not considered AI anymore. Symbolic AI went through that phase, and we use symbolic AI systems today without realizing we are using old-school AI. Deep learning is the current hype because it solves a class of problems that we couldn't solve before (not all problems). Once deep learning is common, we will stop considering it AI and move on to the next set of problems that require novel insights.


Yes, I agree with all your points - I was, however, responding to the point being made that symbolic AI "wasn't useful"... which in the past it was. Perhaps in the future some new method or breakthrough will mean it becomes useful once again?

This is a great point.

Much like deep learning, which was invented decades ago but didn't become feasible until technology caught up, could the same be true for symbolic AI?

I.e., is the ceiling for symbolic AI technical and transient, or fundamental and permanent?


Symbolic reasoning based AI goes back to at least the 1950s. You're missing almost a half-century of history there.

Is symbolic AI dead nowadays? All the AI papers I'm seeing are on machine learning.

And in how many recent applications has this "old" symbolic "AI" surpassed ML? I agree with the parent commenter.

>"hit something of a brick wall decades ago"

It is true. Why do you disagree? What improvements did "old-school, mostly symbolic AI" bring to the current field of research?

Sure, ML has failures - but those failures are in applications and fields where old-school symbolic AI can't even reasonably be applied. We have to start somewhere, and symbolic AI alone falls far short of our current requirements.

>"How many layers do you need and why? How many training cases do you need and why? What has the network learned and how do you know that? What important things has the network not learned? When will it fail?"

A lot of these issues have been addressed in recent papers, many of them focused solely on understandable/explainable machine learning, an overarching topic that covers all of your questions.

>"Until you can answer these questions, you're not doing science."

So, you are essentially saying a large part of CS academia is not doing "science". I'm not sure what kind of "science" you have done to make such comments, but I'm pretty sure there are plenty of researchers out there, far more expert in this field than you, who would wholly disagree with you.


>Historically, symbolic techniques have been very desirable for representing richly structured domains of knowledge, but have had little impact on machine learning because of a lack of generally applicable techniques.

Is this generally true? I mean, "impact" can be measured in different ways, but this paragraph gives the impression that symbolic logic was always orthogonal to ML. However, there was clearly much research in that area.

Here is just one example:

http://www.doc.ic.ac.uk/~shm/Papers/lbml.pdf

Frankly, I don't understand why the field of symbolic AI was so thoroughly abandoned. Contrary to popular belief, it did deliver results - a lot of them. It had a good theoretical foundation and years of practice. It could, with ease, do a lot of neat tricks, like having systems explain why they made certain decisions. Not only that, you could implement those tricks after you implemented the core functionality. And most importantly, it was scalable downwards: you could take some ideas from a complex system, put them into a much simpler system on vanilla hardware (i.e., a normal application), and still get very interesting and useful results.
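The "explain why it made certain decisions" trick is worth making concrete. Here is a toy forward-chaining rule engine with an explanation trace, in Python; the rules and facts are invented for illustration, and real expert systems were of course far more elaborate:

    # Forward chaining over if-then rules, recording which premises fired
    # each conclusion so the system can answer "why?".
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor"),
    ]
    facts = {"fever", "cough", "short_of_breath"}
    why = {}  # conclusion -> the premises that produced it

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                why[conclusion] = premises
                changed = True

    for conclusion, premises in why.items():
        print(f"{conclusion} because {sorted(premises)}")

The explanation falls out of the inference mechanism for free, which is the "implement the trick after the core functionality" point above.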


I don't think so - I was involved in AI research from about '89 to '95, and the field was pretty much dominated by symbolic/logical approaches at that time, although these were arguably running out of steam (I left the field because I blundered into the web in '92 and founded a startup in '95).

I think we're conflating two things: shallow/classic ML is not symbolic AI. I'm not sure "ML" even encompasses anything "symbolic"; I see symbolic AI and ML as subfields with little overlap.

I'm not saying symbolic AI has been GPU accelerated in the past, but that non-deep ML has been.


Good article, but there is a frequent misconception of Symbolic AI as "manual creation of lots of rules". This was true for early approaches, such as expert systems in the 70s/80s. Symbolic AI just means, well, AI with symbols, and there are many approaches (e.g., in neuro-symbolic AI or using probabilistic inductive logic programming) where symbolic representations are emergent / learned from data using machine learning approaches and can be uncertain/probabilistic.

In the linked talk "From System 1 Deep Learning to System 2 Deep Learning" by Yoshua Bengio, the speaker first criticizes Symbolic AI only to re-invent concepts from Symbolic AI later (e.g., "high level semantic variables", "shared 'rules' across arguments"), which is rather silly given that some Symbolic AI approaches are quite capable of learning symbols, rules, etc. bottom-up - and that is not fundamentally different from learning low-dimensional vector representations or "generalizations" in linguistics.


This sounds like going back to AI of the 80s. IMHO, symbolic reasoning is unlikely to lead to progress.

Symbolic AI fell out of favor because it was overhyped. It was delivering quite impressive results--just not the promised results. Neural nets fell out of favor in the 90s for exactly the same reason.

Both failures were ultimately caused by insufficient computing power. Even though Deep Learning and Convolutional NNs look like major advances today, they could never have been practical before about 2005: there just wasn't enough computing power.

If modern computing power were thrown at symbolic AI the same way it's been thrown at NNs, it's highly likely symbolic AI would see similarly impressive gains.


There are plenty of domains where symbolic AI has had every opportunity to use unlimited computing power, plus decades of design and experimentation, but has lost to deep learning.

A good area for examples is human games (e.g., chess, Go, Atari games, etc.). Symbolic AI was pushed hard in these domains but has lost definitively to deep learning. Furthermore, the symbolic approaches had decades of investment, compared with less than a decade for the deep learning approaches.

Another good area for examples is natural language. Marcus admits that deep learning is the only viable approach to "speech understanding" (which really means transcription). He doesn't mention translation, which demands a lot more "understanding" and where deep learning excels relative to symbolic approaches, again with decades of investment on the symbolic side and much less on the deep learning side.

Except for inherently symbolic problems like theorem proving, I can't think of any AI domain where deep learning doesn't dominate or seem likely to dominate symbolic approaches.


Symbolic AI and ML/DL AI are two entirely different technologies with different capabilities and applications, that both happen to be called "AI" for mostly cultural reasons. The success of one is probably unrelated to the success or failure of another. In most ways, symbolic AI has "faded" simply in that we now take most of its capabilities for granted; e.g., it never strikes you as odd that Google Maps can use your phone CPU to instantly plot a course for a cross-country roadtrip if you so desire, but that sort of thing was a major research project way back when.
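The route-planning example is classic symbolic-era search. Here is a minimal Dijkstra sketch in Python over a made-up toy road graph (real planners add heuristics and hierarchy, e.g. A* over road networks):

    import heapq

    graph = {  # node -> [(neighbor, distance)]
        "A": [("B", 5), ("C", 2)],
        "B": [("D", 1)],
        "C": [("B", 1), ("D", 7)],
        "D": [],
    }

    def shortest_path(start, goal):
        pq = [(0, start, [start])]        # (cost so far, node, path)
        seen = set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, d in graph[node]:
                if nxt not in seen:
                    heapq.heappush(pq, (cost + d, nxt, path + [nxt]))
        return None

    print(shortest_path("A", "D"))  # -> (4, ['A', 'C', 'B', 'D'])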

In contrast, ML/DL AI is still shiny and new and we have a much less clear grasp of what its ultimate capabilities are, which makes it a ripe target for research.


Agree that we haven't really progressed past curve fitting. I'm hopeful that we'll see a resurgence in symbolic AI, rather than watching the whole domain freeze up for a few decades.

Today's symbolic software is just software that was written by humans. Software has existed for as long as there have been computers, and AI was never just another term for software. I don't think any human-written software today captures what proponents of symbolic AI wanted to achieve 50 to 60 years ago. Well, okay, it beat Kasparov at chess in 1997, but chess algorithms were old news even in 1970. I don't think Deep Blue used anything fundamentally new. It was not an AI breakthrough; it was a feat that showed how fast computers are.
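For reference, the core of those early chess programs was minimax search with alpha-beta pruning, which indeed predates Deep Blue by decades. A minimal sketch over a hypothetical hand-built game tree (not real chess; leaves are scores from the maximizing player's point of view):

    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):   # leaf: a static evaluation score
            return node
        best = float("-inf") if maximizing else float("inf")
        for child in node:
            score = alphabeta(child, not maximizing, alpha, beta)
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if beta <= alpha:                # prune: this line won't be reached
                break
        return best

    tree = [[3, 5], [2, [9, 1]], [0, 4]]     # nested lists = alternating turns
    print(alphabeta(tree, maximizing=True))  # -> 3

Deep Blue's contribution was mostly in how fast and deep it could run this kind of search, which is exactly the point about computer speed.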

The fact is, "AI" was always about much higher ambitions, about solving truly fuzzy tasks. Recognizing handwritten digits is exactly such a problem, and it has been solved, even if you don't want to call it "AI" anymore because it has stopped being impressive.


When people talk about AI today, they are usually talking about machine learning systems based on neural networks.

The kind of symbolic AI described in this book went through several cycles of hype and disappointment, to the point where many think it is obsolete. People often don't connect recent breakthroughs in SAT and SMT solvers with this history, and for that matter, production rule engines are dramatically better than they were in the 1980s, but they've never made a breakthrough into general-purpose use.


"The fiercely controversial subject that has riven the field is perhaps the most basic question in AI: To achieve intelligent machines, should we model the mind or the brain? The former approach is known as symbolic AI, and it largely dominated the field for much of its 50-plus years of existence. The latter approach is called neural networks. For much of the field’s existence, neural nets were regarded as a poor cousin to symbolic AI at best and a dead end at worst. But the current triumphs of AI are based on dramatic advances in neural-network technology, and now it is symbolic AI that is on its back foot. Some neural-net researchers vocally proclaim symbolic AI to be a dead field, and the symbolic AI community is desperately seeking to find a role for their ideas in the new AI."
