> You can't expect candor in a relationship so asymmetric.
> Who should we say thanks to?
> We should say thanks to John Bautista, who will be our CTO, and Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Harlin, Jessica Thanks to Sam Altman, Paul Buchheit, Sarah Har.
Interesting idea (if not a tad over-played by now), but execution seems to be a bit lacking.
Improving the requests being made against the OpenAI API is probably the biggest thing I'd be looking for right now. I am also considering switching over to their "search" API instead of the "Q&A" one currently available on the "questions" tab.
I haven't been active and don't know if they have changed this. But when I was using GPT-3, their semantic search endpoint didn't actually use GPT-3 for ranking. It used some other, "cheaper" algorithm as a pre-filter (tf-idf, I believe), and only the results that this cheaper algorithm ranked on top made it into the evaluation by the actual GPT-3 model you had chosen.
(There was a setting that controlled how much of your data was actually ranked by GPT-3 - top_k or something. But choosing anything even approaching a sizable percentage of a large corpus such as the one you are dealing with would be prohibitively expensive. You'd be looking at several dollars per search.)
Anyway. This is also why switching the model from Ada to Curie to Davinci on the semantic search endpoint has (or had?) very little effect. It still often missed highly relevant snippets from the source material.
When they came out with their "answers endpoint", it really was just a semantic search piped into a completion prompt.
I actually found it better to keep my own implementation of this pipeline, because then you at least get to control how the results of the semantic search are prompted to the generation model.
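The pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual implementation: `score_tfidf` plays the role of the "cheaper" pre-filter, and `build_prompt` keeps only the `top_k` snippets and splices them into a completion prompt (the function names and prompt template are my own invention).

```python
import math
from collections import Counter

def score_tfidf(query, docs):
    """Cheap tf-idf scoring of docs against the query,
    standing in for the inexpensive pre-filter stage."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter()  # document frequency, for idf
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            if term in tf:
                s += (tf[term] / len(toks)) * math.log(n / df[term])
        scores.append(s)
    return scores

def build_prompt(query, docs, top_k=2):
    """Keep only the top_k pre-filtered snippets and splice them
    into a completion prompt; only this prompt would ever reach
    the expensive generation model."""
    scores = score_tfidf(query, docs)
    best = sorted(range(len(docs)), key=lambda i: -scores[i])[:top_k]
    context = "\n".join(docs[i] for i in best)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Owning this stage is what gives you the control the comment mentions: you decide the snippet selection, the ordering, and the prompt template, rather than accepting whatever the answers endpoint does internally.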
Yeah the [Info] button says so. Quoting... Info
About
Ask Paul Graham is a for-fun side project powered by OpenAI's GPT-3 and Paul Graham's essays. OpenAI capped costs at $360 per month which is currently being covered by the project maintainer so usage may get throttled depending on demand.
Questions
The questions feature answers user-provided questions using Graham's essays as training data. Note that these are answers from GPT-3 and do not necessarily reflect Paul Graham's opinions.
Summaries
The summaries feature provides GPT-3-generated summaries of Graham's essays and may not necessarily reflect his summary of the given essay. Not all essays have been included due to length constraints on GPT-3.
One glaring deficiency that is common among chat bots like this is the lack of any sort of learning/memory. I can get pretty convincing answers to some questions, but I can never get an appropriate answer to questions like "what was the question I just asked you?".
Sure. You can just add part of the history to the prompt. Not perfect, but it can work well enough that the user can refer back to something they said in the previous question.
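A minimal sketch of that trick, assuming nothing about the bot's actual code: replay the last few question/answer turns in the prompt so the stateless model can "see" them. The function name and turn format are illustrative.

```python
def build_chat_prompt(history, new_question, max_turns=3):
    """history: list of (question, answer) pairs. Only the last
    max_turns are replayed, so the prompt stays within the model's
    context window instead of growing without bound."""
    lines = []
    for q, a in history[-max_turns:]:
        lines.append(f"User: {q}")
        lines.append(f"AI: {a}")
    lines.append(f"User: {new_question}")
    lines.append("AI:")  # the completion model continues from here
    return "\n".join(lines)
```

With this, "what was the question I just asked you?" becomes answerable, because the previous question is literally in the prompt.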
Yeah lol he's not going to be 100% perfect. There are definitely improvements that could be made to the request parameters being sent to the OpenAI API, but I think the actual training data being uploaded and referenced as part of the response could also be improved; I'm just not sure how. Open to ideas!
Was about to build this myself. There is so much wisdom in pg’s writings. Very valuable!
I also built a GPT-3 chat bot to “talk to” the written record of Elon Musk and Friedrich Nietzsche. Only OpenAI didn’t let me publish it. Also, it was prone to confabulation every now and then (i.e. making stuff up).
I don't know much about the relationship between solo founders and YC, but I enjoyed these responses when I asked:
Does YC love solo founders
AI Paul Graham: YC loves solo founders, but it's not a good idea to tell the YC partners that.
AI Paul Graham: It's true that YC loves solo founders, but it's not true that we do it because we believe that's the best thing for our portfolio companies.
It seems like PG's twitter feed isn't part of the training data. I know nothing about ML or AI, but it seems like if you're trying to make a conversational AI bot, then the back-and-forth of twitter conversations could be even more useful than longform essays. But again, I know nothing about this stuff!
So trying to run pump and dumps on the back of hedge funds? Anyone has any insight into what are teenage hackers doing now?
I asked the AI and it had great insight: Teenage hackers are spending a lot of time making fun of the Internet.
Considering that when GPT-3 fails the output is usually funny, I think there's value in creating an AI that makes fun of the Internet and then monetizing its best memes on a webpage plastered with ads.
Seeing these answers, it's only a matter of time before this bot gives someone a killer startup idea. That lucky person will be a shoo-in for YC, both because the idea will naturally resonate with interviewers, and because they'll have an incredible story to tell about where they got the idea from.
Heh. This reminds me, way back like 11 years ago I started trying to reimplement pg as a bot using AIML. It was really just a joke so I didn't go very far with it, but if anybody's interested...
* Policing should be private. How can a startup ensure private policing? *
> You need to ensure that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are private, and that the police are.
Just to point out the obvious in case some readers are not aware GPT-3 has nothing at all to do with state-of-the-art dialog systems/chat bots.
(It is a so-called pre-trained neural language model that calculates the most likely next word or sentence given some input sequence, very much in the spirit of a statistical n-gram model.)
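For readers unfamiliar with the comparison, here is a toy bigram model showing the "most likely next word given the input" idea in its simplest form. GPT-3 does this with a neural network over far longer contexts, but the interface is the same; the function names here are just for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the single most frequent successor of `word`,
    or None if the word was never seen as a predecessor."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]
```

Greedily repeating `most_likely_next` is also why such models can fall into the repetition loops quoted elsewhere in this thread.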
...
So guys, this one time I posted that ^^^, and I got lots of upvotes.