It sure does encode some knowledge, because it's a language model, and languages already encode knowledge on their own. It's far from what you'd usually call a "knowledge model", though.
Indeed. It's a language model, not a knowledge model. But... I would assume our brains, too, use language models to provide immediate, lazy responses, and resort to higher-level models only when needed :-)
How do you figure that we can still confidently say it’s just a language model?
It was trained on language for the primary purpose of producing text, but that's not necessarily all it can do. The billions of nodes and parameters it contains allow it to evaluate extremely complicated functions. Who's to say some subset of those nodes isn't forming some basic primitive used for reasoning?
At the end of the day it's still a language prediction model.
Which means whatever apparent logic you're getting out of it comes from text that it has learned: not reasoning embedded within that text, but the actual text itself.
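To make "language prediction model" concrete, here's a toy sketch: a bigram model that only recalls which word tended to follow which in its training text. Real LLMs are vastly more sophisticated (transformers over subword tokens, not word-pair counts), and every name below is made up for illustration, but the core loop of predicting the next token from learned text is the same idea:

    # Toy bigram "language model": pure pattern recall, no reasoning.
    from collections import Counter, defaultdict

    def train(corpus: str) -> dict:
        """Count, for each word, which words follow it in the corpus."""
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts: dict, word: str) -> str:
        """Return the most frequent successor seen in training."""
        followers = counts.get(word)
        return followers.most_common(1)[0][0] if followers else "<unk>"

    model = train("the cat sat on the mat and the cat ran")
    print(predict_next(model, "the"))  # -> "cat": recalled from text, not reasoned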
A language model by definition encodes knowledge about a language...
If you mean "it doesn't also have knowledge of the world, facts, etc.", well, it has been trained on a huge corpus of all kinds of material, so it does.
If you mean "it doesn't understand it, so it's not real knowledge", then that's getting into philosophy/semantics, and it could be argued that it does, or that what humans do is not very different...