
There are arguments for this position that are philosophically and scientifically respectable, and to which I am inclined. But merely repeating this nostrum isn’t very constructive. Of course, I’m also repeating nostrums here, but I am not making any definitive claim.

1. Intelligence is surely best characterised not bivalently; if it is, then, a matter of degree, it is at least somewhat non-trivial to show that LLMs make no progress whatsoever over previous soi-disant AI.

2. It’s also unclear that intelligence is best characterised by a single factor. Perhaps that’s uncontroversial in the psychometric literature (I wouldn’t know), but even then, why would g be the right way of characterising intelligence in beings with quite different strengths and weaknesses? And, if ‘intelligence’ admits multiple precisifications, the claims (a) that some particular system displays intelligence simpliciter (perhaps a pragmatic notion) on one such precisification and (b) that some particular system displays some higher level of intelligence than before in that respect are yet weaker and more difficult to rebut.

3. It’s unclear whether ‘are’ is to be construed literally (i.e., indicatively) or as a much stronger claim, e.g., in the Fodorian or Lucasian vein, that some wide class of would-be AI simply can’t achieve intelligence due to some systematic limitation.




I had a feeling you had something interesting to say, but I couldn’t make heads or tails of it.

I am being serious. I just applied GPT4 to ELI5 your comment to me. I feel like I have crossed some threshold and I’m not sure if I am proud of it, but there it is.

I am still not entirely sure what your main point is, but I learned about the Fodorian and Lucasian arguments, which I don’t find particularly impressive; but then again, I need a language model to explain things to me. Interesting nonetheless.

I did not know about “g” either. How did I survive for so long, you may ask; it is indeed a miracle. Anyway, a nuanced understanding of intelligence seems reasonable and useful.

The concept “nostrum” was also new to me, as was the word “simpliciter”. In fact, I asked GPT4 for a table of uncommon words and concepts in your post, and it was quite substantial.

All in all I rarely come across a post that makes me feel like an ape and sets me on a path of creativity and knowledge. Thank you for that.

Edit: obligatory response in the proper style:

“The manner in which this prose is articulated can be characterized as both prolix and imbued with a certain aesthetic appeal. One is left to ponder the origins of the author's stylistic inclinations. As for my own stance concerning the matter of artificial intelligence, it may be succinctly encapsulated as follows: "AI demonstrates utility, and as such, it possesses merit." This, regrettably, constitutes the extent of my intellectual engagement with the subject.”


You’re right—the way I wrote the original comment isn’t particularly easy to parse. I’d been reading philosophy of language all day, and didn’t pause to edit. I’ll try to rephrase here; I don’t think the points are particularly complicated, so if the exposition here is still inadequate, that’s my fault.

1. On one view of intelligence, something is either intelligent or not. On another, some things are more intelligent than others, but there’s no clear cutoff between intelligent and non-intelligent things. On the first view, it seems quite plausible that actually existing ‘AI’ (e.g., GPT) doesn’t count as intelligent. On the second view, actually existing ‘AI’ seems to be at least somewhat intelligent: more so than most other software we’ve written. If the second view is right, it’s unhelpful in many cases to simply pronounce things intelligent or unintelligent.

2. By way of analogy, suppose I say that a walking route is quite hard. I might mean that it’s very long. Or I might mean that it’s hilly. Or I might mean that it’s very boggy. Each is a perfectly good reason to say that the route is hard. So a walk that’s merely quite hilly counts as hard, even if it’s fairly short and the ground is dry.

We might say that attributions of intelligence are similar. If so, we can attribute intelligence to systems for many different individually respectable reasons. Perhaps a system is intelligent because it can respond to novel situations in some appropriate way. Perhaps it’s intelligent because it predicts a certain statistical parameter correctly. Perhaps it’s intelligent because it’s small but can correctly deal with a wide range of situations.

If the analogy is right, it would be odd (perhaps wrong) to say that a system good at one of these just isn’t intelligent because it falls down on the other measures. If so, surely GPT counts as intelligent for at least one respectable reason or another.

On the other hand, suppose I call someone tall. There’s only one way to be tall. Being fat, or having muscly arms, or having long legs but a short torso don’t count. So the analogy doesn’t apply to all concepts. Does it apply to intelligence? Initially, it might seem that it doesn’t: surely there are lots of ways to be intelligent. But I’ve heard that the psychometrics literature suggests that all these measures correlate to a great degree, and statistically can be predicted by a single-factor model (thus ‘g’). That might suggest that there really is only one way to be intelligent, and that appearances are misleading.
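To make that contrast concrete, here’s a toy sketch in Python (purely illustrative; the predicates and thresholds are invented for the example). A multi-criteria concept like ‘hard walk’ is satisfied by any one respect, whereas a single-criterion concept like ‘tall’ has only one dimension:

    # Toy contrast: 'hard' is multi-criteria (any one respect suffices),
    # 'tall' is single-criterion. All thresholds are made up.
    def is_hard_walk(length_km: float, ascent_m: float, bogginess: float) -> bool:
        return length_km > 25 or ascent_m > 1000 or bogginess > 0.5

    def is_tall(height_cm: float) -> bool:
        return height_cm > 190

    # A short, dry, but hilly walk still counts as hard:
    print(is_hard_walk(length_km=8, ascent_m=1200, bogginess=0.1))  # True

A single-factor view of intelligence would make it behave like is_tall; a multi-factor view would make it behave like is_hard_walk.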

I am not familiar with the psychometrics literature, so I wouldn’t know; maybe the single-factor model is wrong. But my point is this. Even if the single-factor model is right, it’s only been shown to be right about humans (so far): the statistical base of those studies has comprised humans. So maybe a multi-factor model of intelligence works better for would-be machine intelligence. For example, perhaps arithmetic ability in humans is predicted well by a single factor; maybe it’s even reducible to some single form of intelligence. But we can obviously separate arithmetic ability from e.g. analytic ability in computers, to an almost arbitrary extent, by making very good calculators. (And LLMs are often not very good at arithmetic, though I gather that’s being improved.) If that is so, intelligence is more like the difficulty of a walking route than like tallness. And so that’s another reason to avoid straightforward denial that would-be AI is or could be intelligent.
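As a rough illustration of that last point, here’s a small simulation (invented numbers; PCA here stands in for proper factor analysis). When every task score is driven by one latent factor, the first principal component explains nearly all the variance; decouple arithmetic, as bolting on a calculator would, and one factor no longer suffices:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # 'Humans': verbal, spatial, and arithmetic scores all load on one factor g.
    g = rng.normal(size=n)
    humans = np.column_stack([0.8 * g + 0.2 * rng.normal(size=n) for _ in range(3)])

    # 'Machines': arithmetic is an independent dial (a bolted-on calculator).
    f = rng.normal(size=n)
    machines = np.column_stack([
        0.8 * f + 0.2 * rng.normal(size=n),  # verbal
        0.8 * f + 0.2 * rng.normal(size=n),  # spatial
        rng.normal(size=n),                  # arithmetic, uncorrelated
    ])

    def top_factor_share(scores):
        # Fraction of variance explained by the first principal component.
        eigvals = np.linalg.eigvalsh(np.cov(scores, rowvar=False))
        return eigvals[-1] / eigvals.sum()

    print(f"humans:   {top_factor_share(humans):.2f}")    # ~0.96: one factor suffices
    print(f"machines: {top_factor_share(machines):.2f}")  # ~0.56: needs a second factor

None of this shows the single-factor model is wrong for humans; it only shows that a population can be engineered for which it fails.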

3. It’s quite plausible that no presently existing would-be AI should count as intelligent. But we don’t know whether that’s a general limitation or not. And if there are general limitations, how general are they? For example, maybe LLMs couldn’t be intelligent but some GOFAI type thing could be. Or maybe we simply need a new architecture.

One argument we could read in Fodor is that neural networks have to implement a so-called language of thought to be meaningfully intelligent. That would be quite a general limitation, though arguably one we could overcome. (I’ve always been a bit confused by what Fodor really meant by a language of thought, and in particular what he required of mental representations, but I haven’t made a full study of him yet.)

A much stronger argument from J. R. Lucas is broadly ‘anti-mechanist’: it would rule out intelligence in roughly everything we can presently engineer, or anything that can be run on a Turing machine. This is very strong, and in my experience not many people agree.

The point of my comment is that these matters are complicated, and the comment above didn’t really address these complications. Sometimes nuance doesn’t add much or isn’t worth it. (I quite like Kieran Healy’s ‘Fuck Nuance’ as a lesson for all theorising, not just sociology.) But sometimes it does matter. ‘[T]here is no artificial intelligence’ is hasty enough to require a response.


You could have said all of that in a much clearer way with no loss of content.

Ironically, ChatGPT would have made whatever point the author was trying to make (I’m not really sure what the point was, to be honest…) much more succinctly and clearly. Maybe one use of ChatGPT is as an anti-bloviating converter.

ChatGPT would have used simpler words, but it bloviates like hell. It’s one of the reasons that people on HN can usually spot and downvote ChatGPT-generated comments: they go on and on for multiple paragraphs when their point can be made in one or two sentences, and they go to great lengths to hedge everything they say.

I’m not sure that the point can be made much more succinctly, although I agree the original wasn’t written very understandably. I’ve reformulated my point at greater length in response to the sibling.

I’ve tried GPT on some topics in philosophy of language, and it hasn’t really done particularly well. I don’t have any strong reason to think that such limitations will either persist or be overcome, however.


I agree—I should have edited it. I’ve responded to the sibling comment.
