I think the main point is that the AIs don't really read and understand; they read and remember the patterns of words, which is different from understanding. They see a chain of words often enough that it becomes, statistically, the next-best output for a given input (with some randomness in between).
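A toy sketch of that idea, just to make the "statistically next-best word" framing concrete (this is not how a real LLM works; real models learn distributions over tokens rather than tabulating literal word counts):

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Tabulate how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Sample in proportion to observed frequency: the statistically
        # "next best" output, with the randomness supplied by the sampler.
        counts = follows[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(next_word("the"))  # usually "cat" (seen twice), sometimes "mat"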
I'm not really sure what you mean. This seems to be another instance of the weirdly persistent belief that "only humans can understand, and computers are just moving gears around to mechanically simulate knowledge-informed action". I may not believe in the current NN-focussed AI hype cycle, but that's definitely not a cogent argument against the possibility of AI. You're confusing comprehension with the subjective (human) experience of comprehending something.
If I'm being charitable, whenever I read headlines like "AI beats humans at reading comprehension", I interpret the headline as part sensational, part tongue in cheek. I am, however, surprised that to this day, people actually still believe in the idea of machine comprehension understood in the most literal sense. The whole idea of mechanizing symbol manipulation is significant precisely because it allows us to simulate computation mechanically without invoking comprehension.
But if a computer is able to do the same thing as a human does (answer a simple question with pattern recognition and sentence parsing), then the internal mechanism of how a computer does it seems like mere trivia/jargon rather than anything fundamentally important to intelligence. A computer may have reading comprehension without having it in exactly the same way that a human does.
But it's true that we do have an underlying world model by which we understand passages, and that world model is pretty essential if we are to answer more complicated questions. Since it's pretty clear computers can handle the "information retrieval" side of reading, we should now be focused on generating a "world model" from text.
Most humans don't completely understand the things that they read or the words they utter. Why would we expect anything different from artificial intelligence? Understanding things is computationally expensive from both a biological and a digital perspective.
A scientific foundation for artificial comprehension does not exist.
When comprehension tasks are discussed and researched within the AI field, what is actually going on is a statistical calculus applied to large bodies of text. The comprehension task is to predict follow-on text given starting text, which does not require any comprehension of the content of the text, only statistical probabilities over an extremely large training set of human-written text. This is, essentially, a misappropriation of the word "comprehension" by AI researchers - a common practice. The fact that this type of software development is called artificial intelligence rather than applied statistical calculus is an example of how successful these appropriations have been.
Quantum computers or not, we still need some type of underlying scientific account of how comprehension itself functions.
Consider for a moment, when thinking about a yet-to-be-understood concept, what that activity of trying to understand is composed of. A person will decompose a complex concept into smaller concepts, mentally create virtual logical versions of these smaller concepts, and then experimentally, in their imagination, perform trial combinations of the raw concepts to determine whether the complex idea can be reproduced. This process is roughly how comprehension is achieved, and it is a universal process human reasoning can apply to any situation. Human science and technology have no such fully transferrable and operational representation of raw concepts and ideas. The closest we have is software, and the form of software closest to something capable of artificial comprehension is… yet to be formally defined.

What we call artificial intelligence is not software capable of decomposing complex ideas into smaller concepts, because modern AI does not work with the content, the meaning, of the data it is trained on; it works with the landscape in which the data exists. Modern AI identifies the landscape in which the data associated with something of interest exists, and through the training of an algorithm that something of interest can be identified, with some level of statistical confidence. All of this can and does take place without any comprehension capacity within the software; it is all just sophisticated pattern matching. It’s an idiot savant that can do but can’t tell you how or why, and isn’t even aware it is doing what it does.
Comprehension is the recreation of an observed behavior or event, virtually, within one's imagination, with that recreation built from different ingredients that, when combined in some new, unique, never-before-seen manner, reproduce the observed behavior or event. Comprehension is the process of mentally reverse-engineering reality. Modern AI has nothing capable of such a grand calculus.
They do learn by reading their corpus, and they do know what's in their corpus.
If you create a temperature sensor and the AI removes itself from hot temperatures, then it feels pain.
The insistence that AI not be anthropomorphized sometimes gets in the way of communication.
I agree with the sibling. You need to say what it means to "have an idea what the meaning of text is" if you're going to use this as an argument that neural nets don't understand language.
I think what it means to understand language is to be able to generate and react to language to accomplish a wide range of goals.
Neural nets are clearly not capable of understanding language as well as humans by this definition, but they're somewhere on the spectrum between rocks and humans, and getting closer to the human side every day.
I can't help but think that arguments that algorithms simply don't or can't understand language at all are appealing to some kind of Cartesian dualism that separates entities that have minds from those that are merely mechanical. If that's your metaphysics, then you're going to continue to find that no particular mechanical system really understands language, all the way up to (and maybe beyond) the point where mechanical systems can use and react to language in all the situations that humans can.
Totally, and that’s a fair point. I don’t know what understanding means, not enough to prove an LLM can’t, anyway, and I think nobody has a good enough definition yet to satisfy this crowd. But I think we can make progress with nothing more than the dictionary definition of “understand”, which is the ability to perceive and interpret. I think we can probably agree that a rock doesn’t understand. And we can probably also agree that a random number generator doesn’t understand. The problem with @FeepingCreature’s argument is that the quality of the response does matter. The ability of a machine specifically designed to wait for input and then provide an output to produce a low-quality response doesn’t demonstrate any more intelligence than a bicycle… right? I don’t know where the line is between my random-writer Markov chain text generator from college and today’s LLMs. I’m told transformers are fundamentally the same and just have an adaptive window size; more training data, then, is the primary difference. So are we then saying Excel’s least-squares function fitter does not understand, unless the function has a billion data points? Or, if there is a line, what does it look like and where is it?
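To make the Excel comparison concrete, here is roughly what a least-squares fitter does (a minimal sketch with numpy standing in for Excel's fitter, since the point is the mechanism, not the tool):

    import numpy as np

    # Fit y = m*x + b to a handful of points by least squares.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])
    m, b = np.polyfit(x, y, deg=1)

    # The fitted line "answers" unseen inputs with no notion of meaning,
    # only of minimized squared error over the data it was given.
    print(round(m, 2), round(b, 2))  # roughly 2.0 and 0.0
    print(m * 10 + b)                # extrapolates to roughly 20

Does a billion data points change the nature of what this is doing, or only its reach?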
I think the question of whether AI has "true understanding" of things is misguided. Having a "true understanding" is nothing but a subjective experience. There are two actually important questions: 1) whether AI is capable of having (any) subjective experience at all, and 2) whether AI can outperform human intelligence in every area. You are in deep denial if in 2023 you have any doubts about 2). I have yet to hear a compelling argument as to why a positive answer to 2) might imply a positive answer to 1). However, it's appalling how little attention is being given to 1) on its own merits.
If we cannot yet even explain how human comprehension works, how can you be sure that it doesn't exist in current A.I.? For all I know, it could be an emergent property of some kind.
Some people never stop telling others to read philosophy books because they can gain intelligence from them. But then they reject the idea that AI has also learned from a lot of philosophy text.
The majority of people hold the belief that reading and understanding words cannot equip AI with knowledge and intelligence. But that is exactly how we learn.
Clearly, getting machine-like reading comprehension is easy: just react instinctively to the lede, utterly ignore the core of the article as TL;DR, and put words in the author's mouth for good measure. That is not quite the kind of equivalence that the field is striving for: "humans can't read, machines can't either, QED."
By understanding he means translating speech to text, I guess. We have speech-to-text systems that are better than the median human in the native language now. Quite amazing, given how central auditory language processing is in our cognition. And most people don't think it's "AI" (and certainly not anywhere near AGI). That's a good example of how AI is a moving target IMO.
If a human can translate perfectly without understanding the conversation, then that to me implies that the mind itself confers no innate understanding, any more than the computer does. It must be taught the meaning of things, exactly as a computer would need to be. I'm just not following his logic; it feels like a straw man. Of course the computer doesn't understand the meaning of the symbols it is translating, because it was never given data to teach it that (just like the human in the scenario).
Nope, sorry. You said you have no idea of what understanding is, except that by definition it can only be done by humans.
Fine. Then I posit the existence of understanding-2, which is exactly identical to understanding, whatever it is, except for the fact that it can only be done by machines. And now I ask you to prove to me that AI doesn't have understanding-2.
This is just to show you the absurdity of trying to claim that AI doesn't have understanding because by definition only humans have it.
The question was about reading comprehension and why people don’t think that AI has comprehended texts that (I’m assuming) it has in its input.