Well, one way out is if large language models don't just somehow magically turn into human-level (or better) AGI at some point once enough data has been thrown at them. Then the whole debate will turn out to be pretty moot.
In a hard takeoff scenario (which is what I've mostly seen claimed), we'd expect zero evidence either way until it happened.
There's evidence that LLMs won't scale to AGI (both theoretical limiting arguments and, increasingly, empirical results supporting those arguments), so this point is moot, but still.
At this point there's enough capital and talent being pumped into the industry that debating whether and how we can reach AGI is moot.
Enough or not, LLMs have shown that you can train an extremely advanced facsimile of intelligence just by learning to predict data generated by intelligent beings (us), and with that we may have the single biggest building block done.
> debating whether and how we can reach AGI is moot
To the alignment/regulation debate, it’s essential. If there is AGI potential, OpenAI et al. are privatised Manhattan Projects. That calls for swift action.
If, on the other hand, the risk is less about creating a mecha Napoleon and more about whether building Wipro-analyst analogues that burn as much energy as small nations is economically viable, we have better things to deliberate in Congress.