
Well, one way out is if large language models don't just somehow magically turn into human-level (or better) AGI at some point once enough data has been thrown at them. Then the whole debate will turn out to be pretty moot.



The AI shall govern itself.

Until some smart people read and understand "The Book of Why".

> if large language models don't just somehow magically turn into human level (or better) AGI at some point once enough data has been thrown at it

This was fundraising marketing. There is zero evidence LLMs scale to AGI.


We'd expect zero evidence either way, until it happened, in a hard takeoff scenario (which is what I've mostly seen claimed).

There's evidence that LLMs won't scale to AGI (both theoretical limiting arguments, and now mounting empirical evidence that those arguments are correct), so the point is moot, but still.


link to the limiting arguments you’re referring to?

> We'd expect zero evidence either way, until it happened

Why? The only null we have, the organic evolution of tool-building intelligence, was iterative.


At this point there's enough capital and talent being pumped into the industry that debating about whether and how we can reach AGI is moot.

Enough or not, LLMs have shown that you can train an extremely advanced facsimile of intelligence just by learning to predict data generated by intelligent beings (us), and with that we've got possibly the single biggest building block done.


> debating about whether and how we can reach AGI is moot

To the alignment/regulation debate, it’s essential. If there is AGI potential, OpenAI et al are privatised Manhattan Projects. That calls for swift action.

If, on the other hand, the risk is less about creating mecha Napoleon and more about whether building Wipro analyst analogues that burn as much energy as small nations is economically viable, we have better things to deliberate in Congress.


I think part of the trick is that LLMs actually do a lot of impressive "intelligent" thinking /but it happens during training/.

So if you leave the training time and expense out, it looks like it's a lot cheaper to produce intelligence than it really is.

