
Intelligence is a Heap Paradox, and trying to define where the boundary between Artificial "Intelligence" and "really good pattern matching" lies is a fool's errand. Intelligence is a continuum from the simplest bang-bang thermostat all the way up to the human brain.
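
For a concrete picture of the "simplest" end of that continuum, here's a minimal bang-bang thermostat controller in Python (all names and numbers are made up for illustration):

  # Toy bang-bang thermostat: sense one number, react with one hard rule.
  def bang_bang(current_temp, setpoint, hysteresis=0.5, heater_on=False):
      """Return whether the heater should be on, given a deadband around the setpoint."""
      if current_temp < setpoint - hysteresis:
          return True           # too cold: switch the heater on
      if current_temp > setpoint + hysteresis:
          return False          # too warm: switch the heater off
      return heater_on          # inside the deadband: keep the current state

  state = False
  for temp in [18.0, 19.4, 20.6, 20.4, 21.2]:
      state = bang_bang(temp, setpoint=20.0, heater_on=state)
      print(temp, "->", "heater on" if state else "heater off")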



Fair enough, but the argument could be made that even human-level intelligence is just an advanced degree of pattern matching.

Pattern matching is certainly intelligence - classical AI was focused on identifying patterns and reacting to them with techniques like decision trees.
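
To make "identify a pattern and react to it" concrete, here's a hand-written decision tree in that classical-AI spirit (the features and thresholds are invented for illustration, not taken from any real system):

  # A fixed decision tree: match a pattern in the inputs, then react.
  def loan_decision(income, debt_ratio, has_prior_default):
      if has_prior_default:
          return "reject"
      if income > 50_000:
          return "approve" if debt_ratio < 0.4 else "review"
      return "review" if debt_ratio < 0.2 else "reject"

  print(loan_decision(income=60_000, debt_ratio=0.3, has_prior_default=False))  # approve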

You can make a fairly convincing argument that intelligence is nothing more than a hierarchical system of pattern matchers.


  It turns out that a lot of intellectual tasks are sort of weakly simulatable by just doing a great job at next-token prediction.
Yes, this is a good way of putting it. I've been saying for years that it's less that we're making big discoveries about what "AI" can do, and more that we're showing that many things humans do that appear complex actually reduce to something pretty simple. But that simple thing is still just fitting a pattern. It's the cases where it doesn't work, even if it only fails 1% of the time, that define the difference between pattern matching and actual human intelligence.
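
For anyone who wants to see "fitting a pattern" at its most bare-bones, here's a toy next-token predictor, just a bigram frequency table (real models learn far richer conditional distributions, but the objective has the same shape):

  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat the cat ate the fish".split()

  # Count which token followed which.
  bigrams = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      bigrams[prev][nxt] += 1

  def predict_next(token):
      """Most frequent token that followed `token` in the corpus."""
      if token not in bigrams:
          return None
      return bigrams[token].most_common(1)[0][0]

  print(predict_next("the"))   # 'cat' (seen twice, vs. 'mat'/'fish' once each)
  print(predict_next("cat"))   # 'sat' (tie with 'ate', broken by insertion order)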

One thing that follows from that (which people don't like) is that we actually need to move the goalposts on how intelligence is defined. "Pass a Turing test" is not very valuable now. And as more tasks are shown to be possible with pattern matching / next-token prediction, we need to further refine tests away from these tasks to settle on a good definition of what separates human intelligence. It should be obvious that the distinction is there, but it's still tough to nail down. (I'd argue that by defining a "task" you've already done most of the work of solving it, so it's not too exciting to learn that AI can finish the job.)


Experts have been predicting human level intelligence 30-50 years out for a while now. I don't see anything in the current line of research that will change that situation.

My measure of intelligence is creative problem solving in a mathematical discipline like theoretical physics, algebraic topology, combinatorics, etc. All the current AI research is doing is building better and better pattern matching engines. That's all very good but talking about sophisticated pattern matching pieces of code as if they were anything more seems very silly to me.

But I don't think looking at things this way is valuable in either case. Hamming has a great set of lectures on what it would mean for machines to think and in the grand scheme of things I think the question is meaningless. The real question is what can people and thinking machines accomplish together.


Today's AI is pattern matching. It's good at that.

I wonder if that is all that human intelligence is, just pattern matching? With biological quirks and urges thrown into the mix.


This is exactly the kind of flawed behaviorist-centric thinking that has held AI back for the past 50 years. Until AI researchers get over their obsession with mathematical constructs that have no foundation in biology, like symbolic AI and Bayesian networks, we will never have true AI. Once you realize this, you see that this “paradox” isn't really a paradox at all.

The brain is nothing like the Von Neumann architecture of a CPU. It is a massively parallel prediction machine. High-level thought occurs in the cerebral cortex, whose base functional unit is the Hierarchical Temporal Memory unit, composed of about 100 neurons. Once we have figured out how these units are wired in the brain, “difficult” problems like pattern recognition will be trivial, and “trivial” problems like playing checkers will require many hours of training and many HTM units, just like in a real human brain.

For anyone interested in this, I highly recommend Jeff Hawkins' book, “On Intelligence”. http://books.google.com/books?isbn=0805078533


I like to define intelligence as knowing data, but knowing data only creates idiot savants. What is lacking in AI today is artificial comprehension: what we're calling "artificial intelligence" doesn't comprehend. Until the concepts handled by an AI are composable, forming virtual operating logical mechanisms, and the AI comprehends by trying combinations of those concepts, we are only creating idiot savants incapable of comprehending what they do.

The issue is that intelligence itself is an ill-defined concept, and an unofficial but broadly shared rough definition is "what separates humans (and some animals) from inanimate things". So, as soon as a machine can do something (whether it's having a good memory, proving things, playing chess, translating texts, drawing a picture, driving a car, anything), it no longer belongs to "intelligence". Using that very definition, AI is an oxymoron.

The problem as I see it, at least in the medium term, is that pattern matching, no matter how advanced, is still just a simulation of intelligence, not 'real' intelligence.

I don't know that we can take that as a given. The "it's just pattern matching" argument is pretty old, but I'm not sure it's ever been shown conclusively that all of human intelligence can't be reduced to some form of pattern matching.

Now, in terms of "implement something on a digital computer", so far our approaches to "reasoning" (using symbols and formal logic) seem very, VERY different from our approaches to "pattern matching", and maybe in that domain such a reduction isn't feasible. Or maybe it is. I don't think anybody really knows.

My own suspicion (and that's all that it is) is that in the "digital computer" domain, we should use formal logic where it makes sense, pattern-matching ANNs where it makes sense, and hybridize the systems. But that approach brings in some hard challenges of its own.
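
As a sketch of what that hybridization could look like (everything here is invented for illustration; the "perception" step is just a stub standing in for a trained network):

  # A pattern matcher produces symbols; a tiny rule layer reasons over them.
  def perceive(pixels):
      """Stand-in for an ANN classifier: raw input -> symbolic facts."""
      return {"shape": "octagon", "color": "red"}   # faked output for the example

  RULES = [
      # (conditions that must all hold, conclusion)
      ({"shape": "octagon", "color": "red"}, "stop_sign"),
      ({"shape": "triangle", "color": "red"}, "yield_sign"),
  ]

  def reason(facts):
      """Fire the first rule whose conditions all match the facts."""
      for conditions, conclusion in RULES:
          if all(facts.get(k) == v for k, v in conditions.items()):
              return conclusion
      return "unknown"

  print(reason(perceive(pixels=None)))   # -> 'stop_sign'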

And FWIW, experts like Geoffrey Hinton have argued against the need for any such hybridization and believe that neural networks (of some form... maybe one we haven't invented yet, who knows?) should be enough for human-level intelligence.

I just don't think we have the tools and understanding yet to build AGI.

I would agree with that. We don't yet have the required understanding (whether or not we have the needed tools is one I'm more 50/50 on), but I do think we'll get there eventually. It's just hard to predict if "eventually" means "tomorrow" or "37,000 years from now".


Intelligence might not be a scalar between intelligences based on different computational substrates. Computers have beaten humans at symbolic integration and differentiation since forever, but don't beat humans in other areas. A near human-level intelligence will be vastly superior in many other areas.

We just don't have a very good definition of what "intelligence" actually is. Hence we're trying to set the bar somewhere. "Surely a machine that can play chess is intelligent". Nope, that's not it. "But surely a machine that recognizes objects in images is intelligent". Nah, it's just good at recognizing patterns of pixels. And so on and so forth. It's a noisy process, but we're slowly moving in the right direction.

"Intelligence" as we have come to understand it is not some magical thing, not some exotic element that's impossible to fabricate. In fact, it's probably something mundane and insultingly simple when we get to the core of it.

Reproducing intelligence is mostly a matter of finding the correct formula, the right structures, the right technology. After that it will be boring, ordinary, even disposable. This doesn't sit well with some people who reject that on an emotional level even if they can't figure out any concrete reason why. It just can't be.

We already have enough hardware to fake intelligence, to make a believable Turing Test candidate that could win, but we've yet to figure out how. That much will become clear in the coming decades, guaranteed.

If you look at the progression of computer chess programs, which were pathetic for the longest time until things came together and their performance rose exponentially, matching and then eclipsing grandmasters, it's inevitable that the same pattern will play out in the artificial intelligence space.


Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily lead to the ability to improve the state of the art w.r.t. 'better' intelligence.

For instance, much of modern machine learning produces things that, from a black-box perspective, are indistinguishable from an intelligent agent; however, it would be absurd to task AlphaGo with developing a better Go-playing bot.

There are plenty of scenarios that result in no intelligence explosion, e.g. the difficulty of creating the next generation increases faster than the gains in intelligence. Different components of intelligence are mutually incompatible: speed and quality are prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit the limits of the speed of light.
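
As a toy numerical illustration of that first scenario (the growth law here is an arbitrary assumption, chosen only to show convergence rather than a takeoff):

  # If the difficulty of building the next generation grows faster than
  # capability, the per-generation gain shrinks and growth stays tame.
  capability = 1.0
  for generation in range(1, 21):
      difficulty = capability ** 2             # assumption: difficulty outpaces ability
      capability += capability / difficulty    # gain = 1 / capability
      print(f"gen {generation:2d}: capability = {capability:.3f}")
  # capability grows roughly like sqrt(2 * generation): steady, not explosive.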

I'm sure there are other potential limitations, they aren't hard to come up with.


"Intelligence" can even be satisfactorily defined or measured. The most horrifying ramification is that if artificial general superintelligence were to ever exist, we'd have no idea.

Not actually intelligence. So an artificial simulacrum of intelligence, you say?

Not a comment on whether AI is AI or not, but pattern recognition is intelligence. Or perhaps, more strictly, something that has general pattern recognition capabilities is intelligent. But in any case, pattern recognition and intelligence are very closely related (perhaps the same thing), so saying that AI is "just" pattern recognition doesn't seem like a good counter-argument. The argument then has to be about how much pattern recognition an entity has for it to be intelligent, or how general its pattern recognition capabilities are.

I think we can agree to disagree on the definition of intelligence.

Intelligence implies a lot more than pattern recognition. The current state of the art is just a better way to handle large quantities of data. The smart part is to feed the computer the right way with good data, iterate through different ways the computer can guess a model from the data, and finally interpret the results. Computers don't know how to do that. Yet.


I'd think a 'simple' algorithm for intelligence would be even more distant than artificial general intelligence itself.

The most meritorious argument in the bunch is the argument from actual AI, for sure. It’s essentially an empirical argument (“we don’t see recursive self-improvement pretty much anywhere in AI, and that’s a necessary component in hard takeoff”) and it’s aimed at the weakest premise in the chain.

I think it’s tempting but ultimately fruitless to worry about defining intelligence. To shamelessly crib from one of the best essays of all time[1]:

Words point to clusters of things. “Intelligence”, as a word, suggests that certain characteristics come together. An incomplete list of those characteristics might be: it makes both scientific/technological and cultural advancements, it solves problems in many domains and at many scales, it looks sort of like navigating to a very precise point in a very large space of solutions with very few attempts, it is tightly intertwined with agency in some way, it has something to do with modeling the world.

Humans are the type specimen; the “intelligence”-stuff they have meets all of these criteria. Something like a slime mold or a soap bubble meets only one of the criteria, navigating directly to precise solutions in a large solution space (slime molds solving mazes and soap bubbles calculating minimal surface area), but misses heavily on all the other criteria, so we do not really think slime or soap is intelligent. We tend to think crows and apes are quite intelligent, at least relative to other animals, because they demonstrate some of these criteria more strongly (crows quickly applying an Archimedean solution of filling water tubes up with stones to raise the water level, apes inventing rudimentary technology in the form of simple tools). Machine intelligence fits some of these criteria (it makes scientific/technological advancements, it solves across many domains), fails others (it completely lacks agency), and it’s mixed on the rest (some AI does navigate to solutions but they don’t seem quite as precise nor is the solution-space nearly as large, some AI does sort of seem to model the world but it’s really unclear).

So, is AI really intelligent? Well, is Pluto a planet? Once we know Pluto’s mass and distance and orbital characteristics, we already know everything that “Pluto is/isn’t a planet” would have told us. Similarly, once we know which criteria AI satisfies, it gives us no extra information to say “AI is/isn’t intelligent”, so it would be meaningless to ask the question, right? If it weren’t for those pesky hidden inferences…

The state of the “intelligent?” query is used to make several other determinations - that is, we make judgments based on whether something qualifies as intelligent or not. If something is intelligent, it probably deserves the right to exist, and it can probably also be a threat. Those are two important judgments! If you 3D-print a part wrong, it’s fine to destroy it and print a new one, because plastic has no rights; if you raise a child wrong, it’s utterly unconscionable to kill it and try again. “Tuning hyperparameters” is just changing a CAD model in the context of 3D-printing, while in the context of child-rearing it’s literally eugenics - I tend to think tuning hyperparameters in machine learning is very much on the 3D-printing end of the spectrum, yet I hesitate to click “regenerate response” on ChatGPT4 because it feels like destroying something that has a small right to exist just because I didn’t like it.

Meanwhile, the whole of AI safety discourse - all the way from misinformation to paperclips - is literally just one big debate of the threat judgment.

And so, while the question of “intelligent?” is meaningless in itself once all its contributed criteria have been specified, that answer to “intelligent?” is nevertheless what we are (currently, implicitly) using to make these important judgments. If we can find a way to make these judgments without relying on the “intelligent?” query, we have un-asked the question of whether AI is intelligent or not, and rescued these important judgments from the bottomless morass of confusion over “intelligence”.

(For an example, look no further than the article we’re discussing. Count how many different and wildly contradictory end-states the author suggests are “likely” or “very likely”. The word “intelligence” must conceal a deep pit of confusion for there to be enough space for all these contradictions to co-exist.)

1: https://www.lesswrong.com/posts/895quRDaK6gR2rM82/diseased-t...

