Annoyingly they moved into shilling stupid AI ventures, which is frustrating, as AI has real, legitimate uses in its current form. Now we have a bunch of shills acting like we've achieved AGI if only you'd invest in their GPT chatbot.
ChatGPT ain't that. GPT-4 isn't that either, or GPT-5, or 6, or 7, or any of them. LLMs are a fun gimmick, but regurgitated text from comment sections and blogs isn't even one tenth of one percent of what's needed for "Godlike AI". Consider the self-driving taxis in San Francisco that are defeated by a humble traffic cone. Sam Altman's hype machine is hurting AI research, not helping it, by leading people toward flights of fancy and easy dollar signs (like his plan of building a "GPT App Store") instead of the real methodological research that's needed to create AGI.
Views. I'm inundated with AI content, but most of it lacks any substance. It ranges from "wow, GPT is really dumb and can't behave like this supergod AGI I just made up" to "wow, GPT will take over all our jobs in 3 years, it's so powerful".
Lately it's a necessary marketing tactic to surface AI projects above the bullshit hype. The hypesters in the AI space who do podcasts and speak at conferences do not care about cringe.
The real deterrent to adding GPT to a project name is a cease-and-desist from OpenAI.
This reminds me of the several times people asked me if I could build them an AI option/crypto trading bot, offering to split the profits 50/50.
I guess saying GPT-3 instead of AI or machine learning gives more marketing hype points. Like calling a landing page a "SaaS". (It's a joke, I hope you aren't offended by it.)
The last thing I want is to talk to an AI bot when calling a company or health provider with questions. Due to where I live and my accent, these voice bots never work for me. So, anything that stops these from being commercialized is fine by me.
But these articles about AI are nuts; some state that AI will destroy all life on Earth. That was a headline I ran across that was supposedly signed by some scientists. I did not read it because it sounded crazy.
Also, these GPT* things are not really AI, but word/sentence parsers and probably some fancy database lookups.
Just watch as employees everywhere start using bots to keep their engagement stats up. Boards will start to fill with GPT-generated fluff that drowns out any residual use those tools had.
The cynic in me says that you have identified why big tech companies spend so much money funding GPT and similar research:
Soon, your appeal will be replied to by an AI, too. That way, most people will have the impression that they were reconsidered by a human and found at fault, which will likely make a large percentage of them give up. We're stonewalling real humans by building fake humans :(
And that means GPT could possibly reduce support costs for Amazon.
I have trouble seeing how to turn this into a viable business. Will people pay to have their ideological viewpoints parroted back at them? Maybe, but certainly not at GPT-4-level pricing. I also can't believe there are enough people who can afford AI queries but would still rather use RightWingGPT over GPT-4 out of some sense that the latter is full of woke bs.
I love talking to GPT, but I've tried using it for some serious stuff (research, biz-dev) and less serious stuff (Dungeons & Dragons). The magic is still there, but it has obvious blind spots that might be solved in the future; we're talking about the present, though.
I am wholly amused at how everyone piled on that dude from Google who claimed the AI was sentient and accused Google of not caring, and now huge names in the tech industry are drooling over this with what I believe to be the same passion.
One was ridiculed, the others will continue to be our 'heroes'.
shillbots gonna shill. this is a blatant one, which means it's low-effort; here to stir the pot. the real agi-prop effort will be a repost bot with a real-looking history.