
Right, but GPT-3 can be used generally. That's the difference. It scales because you don't need to build an entirely new model for each different use case.

You just change the prelude and use it for something new.
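To make the "prelude" idea concrete, here is a minimal sketch of few-shot prompting: a task description plus a couple of worked examples are prepended to each new input, and the same frozen model handles every use case. The function name and example task are illustrative, not part of any real API.

```python
def build_prompt(prelude, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    parts = [prelude]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final output blank for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("hello", "bonjour")],
    "goodbye",
)
# The assembled string is sent to the model as-is; switching use cases
# means swapping the prelude and examples, with no retraining involved.
```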




It sounds like a big deal. What a tempting idea. And a colleague was mildly annoyed with me for how unimpressed I seemed.

But you have to understand: the use cases you mention are shallow and limited. The heart of GPT, fine-tuning, is gone. And it looks like even OpenAI gave up on letting users fine-tune, because fine-tuning means essentially building an entirely new, expensive model for each use case.

I wanted to make an HN Simulator, the way that https://www.reddit.com/r/SubSimulatorGPT2/ works. But that's far beyond the capabilities of metalearning (the idea that you describe).


I think the onus is on you to prove that the use cases are shallow and limited. I've seen GPT-3 already being used for diverse and interesting ideas that would not have occurred to me personally.

However, even if they are, the point stands: currently, there are teams of people at companies all over the world tuning models for these shallow and limited use-cases. GPT-3 can replace them all, without OpenAI needing to invest another cent in training for a particular customer's use-case. That is in fact game-changing for the ML/DL world and current applications thereof.

Is it AGI? Obviously not. But the vast majority of ML applications don't need to be.


What other proof would you like, other than an example of what I wanted to do and can't?

(https://www.reddit.com/r/SubSimulatorGPT2/ but for HN.)

For a more extensive rebuttal, I wrote one here: https://news.ycombinator.com/item?id=23346972 Though that was more a rebuttal of GPT in general as a path to AGI than of metalearning in particular for generating memes.


GPT-3 not being suitable for your particular use case does not mean that all use cases are shallow and limited?

That being said, I'm not sure I understand why you can't use GPT-3 to make an HN simulator.


What are the diverse and interesting ideas that would not have occurred to you personally?

>However, even if they are, the point stands: currently, there are teams of people at companies all over the world tuning models for these shallow and limited use-cases. GPT-3 can replace them all, without OpenAI needing to invest another cent in training for a particular customer's use-case. That is in fact game-changing for the ML/DL world and current applications thereof.

The counterpoint is that fine-tuning a model for each customer's use case would be both significantly cheaper and better-performing than running GPT-3 at inference time.


Clearly that is not true for the commenter who started this thread.
