An AI Winter is not a remote possibility. A breakthrough, however, seems very remote.
It's 2023, and with each passing day, models are getting more outdated.
How do they know how much becomes outdated each day? And what kind of training data, exactly, would be required as a "patch" to keep the "accuracy" on the higher side?
Seems like no one knows. For now, these are expensive computational toys.
I'm quite sure that the Winter[1] is coming again, especially if this sci-fi-level AGI hype continues. We've seen this many times, and I don't think there's been any fundamental development that would bring about such a "qualitative" change in "machine intelligence".
The improvement in the technical capabilities of neural networks (and RL somewhat too) has been wild, and ANNs have jumped from silly toys to practical applications very quickly. But I think we are still deep in Moravec's paradox[2].
The thing is that we tend to assess intelligence based on how human individuals are thought to differ in intelligence. Anybody can walk or drive, so walking and driving must be easy. Few master chess, painting, or writing, so those must be hard.
But Deep Blue, for example, showed clearly that chess actually isn't that hard; people just suck at it. And conversely, the struggles of self-driving cars showed that driving is actually hard; humans are just very good at it.
I think it's the same with, e.g., LLMs. People assume writing well is hard because the humans who do it tend to have fancy degrees and high salaries. But writing/language is likely closer to chess than it is to walking/driving. We just suck at it.
Before we have machines that can run through a forest and make a sandwich in a random kitchen, I'm not too worried about AGI overlords.
In the end, those are just mathematical models, not intelligence. Humans act based on their experience, and it is really hard to find training data for something like driving in a snowstorm.