
Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily entail the ability to improve the state of the art with respect to 'better' intelligence.

For instance, much of modern machine learning produces things that, from a black-box perspective, are indistinguishable from an intelligent agent; however, it would be absurd to task AlphaGo with developing a better Go-playing bot.

There are plenty of scenarios that result in no intelligence explosion, e.g. the difficulty of creating the next generation increasing faster than the gains in intelligence. Different components of intelligence can be mutually incompatible: speed and quality are the prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of possible intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit the limit of the speed of light.

I'm sure there are other potential limitations; they aren't hard to come up with.
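
To make the "difficulty outpaces gains" scenario concrete, here is a minimal toy sketch in Python (a purely hypothetical model of my own, not anyone's actual result): each generation multiplies its capability by (1 + gain/difficulty), and the difficulty of the next step grows by a constant factor. If difficulty keeps growing, the infinite product converges and capability hits a ceiling; if difficulty stays flat, capability compounds without bound.

  # Toy model of recursive self-improvement; all assumptions are hypothetical.
  def capability_after(generations, gain=1.0, difficulty_growth=2.0):
      capability, difficulty = 1.0, 1.0
      for _ in range(generations):
          capability *= 1.0 + gain / difficulty  # relative improvement this step
          difficulty *= difficulty_growth        # the next step gets harder
      return capability

  # Difficulty doubling each step: gains shrink, capability plateaus near ~4.77.
  print(capability_after(10), capability_after(500))
  # Flat difficulty: steady compounding, i.e. the "explosion" regime.
  print(capability_after(10, difficulty_growth=1.0),
        capability_after(100, difficulty_growth=1.0))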




I think it’s more an issue of talking past each other. In order to build a program, you kind of need to specify the required capabilities, and then standard engineering practice is to decompose that into a set of solutions tailored to the problem. But intelligence is not just about a list of capabilities; they’re necessary conditions, but not sufficient.

This is what leads to the AI effect conflict. We describe some capabilities that are associated with intelligence, build systems that exceed human performance ability on a narrow range of tasks for those capabilities, and then argue about whether or not that limited set of capabilities on a narrow domain of tasks is sufficient to be called “intelligence”.

Recognizing faces, playing chess, and predicting the next few words in my email are all things I’d expect an intelligent agent to be able to do, but I’d also expect to be able to pick up that agent and plop it down in a factory and have it operate a machine; put it in the kitchen and have it wash my dishes; or bring it to work and have it help me with that. We already have machines that help me do all of those things, but none of them really exhibit any intelligence. And any one of those machines getting 10x better at their single narrow domain still won’t seem like “intelligence”.


Well that's the crux of the problem - the corollary to your question is "can you provide a firm definition of intelligence?" Nobody can yet, so it's all speculation and subjective opinion.

The reason that I personally consider all these things parlor tricks (including, hypothetically, complete mastery of Go) is that I see no path from these particular types of systems to general intelligence. A human can take in arbitrary sensory data and make all sorts of conclusions and associations with it. Does this particular system have the capability to get to a point where it can see an apple falling and posit a theory of gravity? Will it ever be able to read subtle cues in facial/oral/bodily expression and combine them with all sorts of other data, instantaneously, to achieve compelling real-time social interaction? Will this system ever invent the game of Go, or anything else, because it felt like it? No, it has absolutely no framework to do any of those things, or countless other things humans can do. It's a machine built with a single purpose in mind, and it can only serve that purpose. It's a glorified function call. I don't think this type of machine will just wake up one day after digging deeper and deeper into these "hard" tasks. We need breadth, not depth.


A big problem with the conclusions of this article is the assumptions around possible extrapolations.

We don't know if a meaningfully superintelligent entity can exist. We don't understand the ingredients of intelligence that well, and it's hard to say how far the quality of those ingredients can be improved in order to improve intelligence. For example, an entity with perfect pattern-recognition ability might be superintelligent, or just a little smarter than Terence Tao. We don't know how useful it is to be better at pattern recognition to an arbitrary degree.

A common theory is that the ability to model processes, like the behavior of the external world, is indicative of intelligence. I think that's true. But we also don't know the limitations of this modeling. We can simulate the world in our minds to a degree. The abstractions we use make the simulation more efficient, but less accurate. By this theory, to be superintelligent, an entity would have to simulate the world faster at similar accuracy, and/or use more accurate abstractions.

We don't know how much more accurate they can be per unit of computation. Maybe you have to quadruple the complexity of the abstraction to double the accuracy of the computation, and human minds already use a decent compromise that is infeasible to improve on by a large margin. Maybe generating human-level ideas faster isn't going to help, because we are limited by experimental data, not by the ideas we can generate from it. We can't safely assume that any of this can be improved to an arbitrary degree.
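
As a back-of-the-envelope illustration of that trade-off, suppose (purely hypothetically) that accuracy scales like the square root of the abstraction's complexity. Then every doubling of accuracy costs four times the complexity, and the budget runs out quickly:

  # Hypothetical scaling law: accuracy = complexity ** 0.5, so doubling
  # accuracy means quadrupling the complexity spent on the abstraction.
  def complexity_needed(accuracy_multiple, exponent=0.5):
      return accuracy_multiple ** (1.0 / exponent)

  for accuracy in (1, 2, 4, 8, 16):
      print(f"accuracy x{accuracy:>2} -> complexity x{complexity_needed(accuracy):>4.0f}")
  # accuracy x 1 -> complexity x   1  ...  accuracy x16 -> complexity x 256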

We also don't know if AI research would benefit much from smarter AI researchers. Compute has seemed to be the limiting factor at almost all points up to now. So the superintelligence would have to help us improve compute faster than we can. It might, but it also might not.

This article reminds me of the ideas around the singularity, by placing too much weight on the belief that any trendline can be extended forever.

It is otherwise pretty interesting, and I'm excitedly watching the 'LLM + search' space.


This manages to get lost in its own trees. From a reductionist perspective:

- Intelligence greater than human is possible

- Intelligence is the operation of a machine; it can be reverse engineered

- Intelligence can be built

- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences

An intelligence explosion on some timescale will result the moment you can emulate a human brain, coupled with a continued increase in processing power per unit cost: massive parallelism to start with, followed by some process for producing smarter intelligences.

All arguments against this sound somewhat silly, as they have to refute one of the hard-to-refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further towards the limits of computation, or N intelligences of capacity X at building better intelligences cannot reverse-engineer and tinker their way to an intelligence of capacity X+1 at building better intelligences? All of these seem pretty unlikely on the face of it.


There are arguments for this position that are philosophically and scientifically respectable and to which I am inclined. But merely repeating this nostrum isn't very constructive. Of course, I'm also repeating nostrums here, but I am not making any definitive claim.

1. Intelligence is surely best characterised not bivalently; if, then, it’s a matter of degree, it is at least somewhat non-trivial to show that LLMs make no progress whatsoever on previous AI (soi-disantes).

2. It’s also unclear that intelligence is best characterised by a single factor. Perhaps that’s uncontroversial in the psychometric literature (I wouldn’t know), but even then, why would g be the right way of characterising intelligence in beings with quite different strengths and weaknesses? And, if ‘intelligence’ admits multiple precisifications, the claims that (a) some particular system displays intelligence simpliciter (perhaps a pragmatic notion) on one such precisification and (b) that some particular system displays some higher level of intelligence than before in that respect are yet weaker and more difficult to rebut.

3. It’s unclear whether ‘are’ is to be construed literally (i.e., indicatively) or as a much stronger claim, e.g., in the Fodorian or Lucasian vein, that some wide class of would-be AI simply can’t achieve intelligence due to some systematic limitation.


Interesting and quite dense post. Singularity-related articles and worries about building "good" AI make me a bit uneasy when the performance of machines on many important tasks, e.g. object recognition, is dismal. So we have quite a way to go.

The usual rebuttal to the above is the "intelligence explosion" argument: once AIs can improve themselves, their progress will be exponential. As a tl;dr for the post, consider the Dewey summary of the argument:

  1. Hardware and software are improving, and there are no signs that this will stop.
  2. Human biology and biases indicate that we are far below the upper limit on intelligence.
  3. Economic arguments indicate that most AIs would act to become more intelligent.
  4. Therefore, an intelligence explosion is very likely.
If you think about it, only (1) is a fact; the rest are assertions that are open to debate. For example, in (2), how do we define the "upper limit on intelligence" when even defining intelligence is problematic? Currently the human brain is the most complex and intelligent object we know of. For (3), I don't know what sort of "economic arguments" are meant, but as we all know, becoming "more intelligent" is not a simple hill-climbing process; in fact, it's not clear how to go about it at all. Is Watson more intelligent than a person?
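
To illustrate the "not a simple hill climbing" point, here is a minimal sketch with an invented one-dimensional fitness landscape (nothing to do with any real AI system): a greedy climber stops at the nearest local peak and never reaches the much better peak next door, because every path to it goes downhill first.

  import math

  def fitness(x):
      # Two peaks: a small one near x = 1 and a much taller one near x = 6.
      return math.exp(-(x - 1.0) ** 2) + 3.0 * math.exp(-(x - 6.0) ** 2)

  def hill_climb(x, step=0.1, iterations=1000):
      for _ in range(iterations):
          x = max((x - step, x, x + step), key=fitness)  # greedy local move
      return x

  peak = hill_climb(0.0)
  print(f"climbed to x = {peak:.1f}, fitness {fitness(peak):.2f}")
  # Settles near x = 1 (fitness ~1.0); the global optimum near x = 6
  # (fitness ~3.0) is never found.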

If you think about these points for some time, you will find that the scenario of AIs designing more intelligent AIs actually has very little plausibility, let alone being highly probable.


It's hard to tell where this author is coming from. The three main assumptions you have to make for AGI are (via Sam Harris):

1. Intelligence is information processing.

2. We will continue to improve our intelligent machines.

3. We are not near the peak of intelligence.

The author's first counterpoint is:

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

Intelligence is information processing so "smarter than humans" just means better information processing: higher rate, volume, and quality of input and output. Aren't some humans smarter than others? And isn't that a power that can be abused or used for good? We don't have to worry about it being like us and smarter; it just has to be smart enough to outsmart any human.

He then talks about generality like it's a structural component that no one has been able to locate. It's a property, and it just means transferable learning across domains. We're so young in our understanding of our own intelligence architecture that it's ridiculous to build a claim around there being no chance of implementing generality.

This statement is also incredibly weak:

> There is no other physical dimension in the universe that is infinite, as far as science knows so far...There is finite space and time.

There is evidence that matter might be able to be created out of nothing, which would mean space can go on forever. We might only be able to interact with finite space, but that isn't to say all of nature is constrained to finite dimensions.

Even so, he doesn't explain why we would need infinite domains. You only need to reach a point where a programmer AI is marginally better at programming AIs than any human or team of humans. Then we would no longer be in the pilot's seat.


The article is informal about intelligence and then comes up with a couple of ad-hoc examples where computers beat humans. It's unclear that they have much to do with intelligence. The following definition of the term has been proposed:

   Intelligence measures an agent’s ability to 
   achieve goals in a wide range of environments. 
Togelius even addresses this a little by pointing out that humans have to be trained to be a pilot or a president. But it is unclear at this point to what extent computers can be intelligent in this sense. AlphaGo's reinforcement learner, probably the most astonishing part of AlphaGo, was not (to the best of my knowledge) Go-specific; instead, it was a general-purpose reinforcement learner. I doubt it can learn much more complicated forms of interaction that lack a simple reward function (as games have).
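
For reference, that quoted definition is, as far as I remember, Legg and Hutter's, and their formalization of it (the "universal intelligence" measure) weights the expected reward a policy pi earns in each computable environment mu by that environment's simplicity:

   \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(mu) is the Kolmogorov complexity of mu, and V_mu^pi is the expected total reward pi achieves in mu. Since K is uncomputable, this is a conceptual yardstick rather than something you could actually score AlphaGo against, but it does capture the "wide range of environments" part of the quote.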

Nevertheless, I'm quite optimistic, but it's far from the foregone conclusion that the author implies it is.


I'm not sure I understand your point, although I understand and partially agree with your arguments. Because we do not precisely know how intelligence works (though surely we agree it exists), does that mean we can never create an equally powerful version of what we have, or do you only disagree about the creation of a better version? There is nothing to prevent us from doing the first.

I consider this argument flawed because it equates intelligence with human intelligence. The field is Artificial Intelligence, not Human Artificial Intelligence.

A dog is intelligent, as is a pigeon. Even bees and some mollusks, like the cuttlefish, are intelligent. They can't think in all the ways a human can but at what they do, they are competent, even clever.

I feel the same is true for machine intelligences. It takes intelligence to learn Go or Chess, but it also takes intelligence to play; at least, this is what we say for humans. When a human is thinking about StarCraft, we consider only how good their thought patterns are for that game. We do not look at their vision, walking, social skills or whatever. The same standard should be applied to a Chess or Go AI while it is playing Chess or Go. One can complain that all it knows how to do is play Chess, but judging it on anything else is unfair.


"Intelligence" as we have come to understand it is not some magical thing, not some exotic element that's impossible to fabricate. In fact, it's probably something mundane and insultingly simple when we get to the core of it.

Reproducing intelligence is mostly a matter of finding the correct formula, the right structures, the right technology. After that it will be boring, ordinary, even disposable. This doesn't sit well with some people who reject that on an emotional level even if they can't figure out any concrete reason why. It just can't be.

We already have enough hardware to fake intelligence, to make a believable Turing Test candidate that could win, but we've yet to figure out how. That much will become clear in the coming decades, guaranteed.

If you look at the progression of computer chess programs, which were pathetic for the longest time until things came together and their performance rose exponentially, matching and then eclipsing grandmasters, it's inevitable that the same pattern will play out in the artificial intelligence space.


The most meritorious argument in the bunch is the argument from actual AI, for sure. It’s essentially an empirical argument (“we don’t see recursive self-improvement pretty much anywhere in AI, and that’s a necessary component in hard takeoff”) and it’s aimed at the weakest premise in the chain.

I think it’s tempting but ultimately fruitless to worry about defining intelligence. To shamelessly crib from one of the best essays of all time[1]:

Words point to clusters of things. “Intelligence”, as a word, suggests that certain characteristics come together. An incomplete list of those characteristics might be: it makes both scientific/technological and cultural advancements, it solves problems in many domains and at many scales, it looks sort of like navigating to a very precise point in a very large space of solutions with very few attempts, it is tightly intertwined with agency in some way, it has something to do with modeling the world.

Humans are the type specimen: the “intelligence”-stuff they have meets all of these criteria. Something like a slime mold or a soap bubble meets only one of the criteria, navigating directly to precise solutions in a large solution space (slime molds solving mazes and soap bubbles calculating minimal surface area), but misses heavily on all the other criteria, so we do not really think slime or soap is intelligent. We tend to think crows and apes are quite intelligent, at least relative to other animals, because they demonstrate some of these criteria more strongly (crows quickly applying an Archimedean solution of filling water tubes with stones to raise the water level, apes inventing rudimentary technology in the form of simple tools). Machine intelligence fits some of these criteria (it makes scientific/technological advancements, it solves across many domains), fails others (it completely lacks agency), and it’s mixed on the rest (some AI does navigate to solutions, but they don’t seem quite as precise nor is the solution space nearly as large; some AI does sort of seem to model the world, but it’s really unclear).

So, is AI really intelligent? Well, is Pluto a planet? Once we know Pluto’s mass and distance and orbital characteristics, we already know everything that “Pluto is/isn’t a planet” would have told us. Similarly, once we know which criteria AI satisfies, it gives us no extra information to say “AI is/isn’t intelligent”, so it would be meaningless to ask the question, right? If it weren’t for those pesky hidden inferences…

The state of the “intelligent?” query is used to make several other determinations - that is, we make judgments based on whether something qualifies as intelligent or not. If something is intelligent, it probably deserves the right to exist, and it can probably also be a threat. Those are two important judgments! If you 3D-print a part wrong, it’s fine to destroy it and print a new one, because plastic has no rights; if you raise a child wrong, it’s utterly unconscionable to kill it and try again. “Tuning hyperparameters” is just changing a CAD model in the context of 3D-printing, while in the context of child-rearing it’s literally eugenics - I tend to think tuning hyperparameters in machine learning is very much on the 3D-printing end of the spectrum, yet I hesitate to click “regenerate response” on ChatGPT4 because it feels like destroying something that has a small right to exist just because I didn’t like it.

Meanwhile, the whole of AI safety discourse - all the way from misinformation to paperclips - is literally just one big debate of the threat judgment.

And so, while the question of “intelligent?” is meaningless in itself once all its contributed criteria have been specified, that answer to “intelligent?” is nevertheless what we are (currently, implicitly) using to make these important judgments. If we can find a way to make these judgments without relying on the “intelligent?” query, we have un-asked the question of whether AI is intelligent or not, and rescued these important judgments from the bottomless morass of confusion over “intelligence”.

(For an example, look no further than the article we’re discussing. Count how many different and wildly contradictory end-states the author suggests are “likely” or “very likely”. The word “intelligence” must conceal a deep pit of confusion for there to be enough space for all these contradictions to co-exist.)

1: https://www.lesswrong.com/posts/895quRDaK6gR2rM82/diseased-t...


The author doesn't seem to distinguish between qualitative and quantitative differences in computation. His criticism doesn't really address the argument he quoted. The reference to the IQ test distribution is not a particularly good analogy, as his opponent was speaking about the limits of intelligence.

It is also worth mentioning that the notion of superintelligence is at odds with the Church-Turing thesis, which, while it can't be proven, is widely accepted and is a cornerstone of computer science as we know it. Personally, I think there's a better chance of breaking the light-speed barrier than of an intelligence capable of what a human (or a hypothetical "ordinary AI") inherently can't do.


The rebuttal to this view is "provide a principled definition of intelligence". Doesn't seem like the article does this.

A hint appears partway through: "computers will not be able to match humans in their ability to reason abstractly about real-world situations". Does human intelligence distinguish itself by its "abstractness" and by its application to "the real world"? There's also "the systems do not form the kinds of semantic representations and inferences that humans are capable of". Seems like a promising direction for some definition of intelligence.

For my money, we will never consider any machine intelligent as long as we mass produce it. The only way we'll accept machines as intelligent is if, as the singularity theorists say, the machines build themselves. Then we aren't really mass producing them, we're just kicking off a process that we don't totally understand, a bit like gestation.


Pray tell, why would you want to develop a being with informational superpowers and the behavior of a teenager?

There is a problem with AI, but it's not with the A part, it's with the I part. I want you to give me an algorithmic description of scalable intelligence that covers intelligent behaviors from the smallest scales of life all the way to human behaviors. I know you cannot do this, as many very 'intelligent' people have been working on this problem for a long time and have not come up with an agreed-upon answer. The fact that you see an increase and change in definitions as a failure seems pretty sad to me. We have vastly increased our understanding of what intelligence is, and previous definitions have needed to adapt and change to new information. This occurs in every field of science and is a measure of progress; again, that you see this differently is worrying.


Great take. But I think that when autonomous agents become good enough, intelligence is certainly possible, especially once those agents start to interact with the real world.

Evidence:

- Human brains exist and are generally intelligent, therefore general intelligence is possible.

- If we observe the progress in the AI field, it's pretty clear that AI systems are getting smarter at a fast pace.

- Many AI systems we have developed quickly become superhuman (Go, StarCraft, many aspects of ChatGPT and Midjourney).

- When superhuman intelligence competes against human intelligence, the thing that superintelligence wants to happen happens (e.g. it winning at Go).

- AI capabilities progress is far faster than AI alignment progress; we have no idea how to control AI systems or make them want the same things we do.

Logic:

- What the fuck do you think will happen when you coexist in a world with a superhuman intelligence that wants something different from what you want?


I feel like this intelligence explosion idea is foolish but I don't really have the language to explain why.

There are underlying limits to the universe, some of which we still have to discover. A machine intelligent enough to improve itself may only be able to do so extremely slowly, in minute increments. It might also be too overspecialised, so that it can improve itself but not do anything of use to us.

I think we will eventually discover reasons we cannot achieve a simultaneously performant, controllable, and generally intelligent machine. We might only be able to have one or two of these traits in a single system.


This article has too many holes to count and reads more like someone in denial.

---

BUT, as an aside: I think a decent argument exists for the non-certainty of intelligence explosion.

The argument goes like this: it takes an intelligence of level X to engineer an intelligence of level X+1.

First, it may well be that humans are not an intelligence of level X, and that we reach our limit before we engineer an intelligence superior to our own.

Furthermore, even if we do, it may also be that it takes an intelligence of level X+2 to engineer an intelligence of level X+2 (and so on for some level X+n), in which case we at most end up with an AI only somewhat superior to ourselves, but no God-like singularity. (For example, we end up with Data from Star Trek: TNG, who in season 3, episode 16 fails to engineer an offspring superior to himself; sure, Data is far superior to his human peers in some respects, but not crushingly so.)

