Agreed. A few months ago I had multiple daily conversations with ChatGPT. These days I might open it once a day, if that, and usually just for something like "generate a short presidential speech about why (this song) is cool".
Stable Diffusion is a different story: I'm constantly generating something, but it's more about playing with the tech than actually making a meaningful product. I just enjoy seeing what cool stuff I can make it do.
Turns out all components of capitalism are rather bureaucratic and locked in place by compliance, lack of organisational flexibility, and nonexistent information flow (because the pecking order would be unstable with informational transparency). The result is that no large company can adopt new tech into its core functions.
I think I'm a relatively slow adopter of AI, but it feels premature to call it a fad at this point. Everyone is still in a bit of a "wait and see" period, but the disputes over rights to use AI recreations of actors already indicate there is demand from companies that would want to use it if the legal situation were more defined and favourable.
I think at this point it's hard to tell if ChatGPT is like the early demos of self-driving cars (neat, but never able to reach the trust levels needed for mass adoption), or like the LG Prada, a sign of what was to come, just needing someone to wrap it in iPhone levels of execution to succeed. Or maybe it's even the Newton to the iPhone.
I use them almost exclusively to ask programming questions and generate boilerplate code, but not for really "intelligent" things. Usually, if I don't feel like sifting through a few dozen bad search results in Google, I'll just ask ChatGPT simple questions.
The real advantage of LLMs is not search -- that's just something that's easy to show people. No one really wants to talk to an LLM instead of using Google search.
The real value is in their being general reasoning machines: able to accomplish basic tasks, retrieve information, and so on.
Somebody else mentioned the phase of the moon when John Lennon died (which will be ruined in a few days when Google indexes this site). There are results for that on the internet; Google fails, Kagi fails, but Bing's chat pulled from two different sources (one for the date and one for the phases of the moon).
The dot-com bubble crashed too. People see potential, get excited, then disappointed when it doesn't immediately come to fruition.
But tech didn't go away after the dot-com bubble. Slowly people and organizations learned how to actually use computers and networks. Thousands of startups tried ideas and bit by bit people figured things out. 20 years later tech rules the economy.
AI is now entering its trough of early disappointment. Give people a few decades to learn to actually use it in all the thousands of ways it can be used.
Bing's implementation is worthless dogshit. Using Bing's market share to gauge general demand for AI does not give an accurate picture.
It's a good idea (using real time search to supplement AI responses), but yeah, it's very much a PoC that they just went straight to prod with. It's not a usable product the way it is.
The "ugly numbers" boil down to Bing failing to gain market share, and ChatGPT reporting 10% fewer visitors on their site in June. Both of these numbers do not, in fact, reveal AI demand is shrinking. There are dozens of other sites offering products similar to ChatGPT, and most people with serious needs have already switched to APIs and/or are exploring with their own in-house model.
This article boils down to wishful thinking from an author who clearly despises AI.
Something about searching and parsing through content myself just feels so satisfying. Being able to dive deep and find parts that are meaningful for the context I'm driving towards makes the whole experience more enjoyable than having an AI tell me what the meaning is based on its training.
I prefer searching when I know what to search for, but I really like GPT for figuring out terminology and concepts when researching unfamiliar matters. But I agree, doing the actual leg work of researching and collecting data is ultimately still best done manually. I tend to use a combination quite often though.
The author clearly has an axe to grind; this is a pretty low-quality article. He cites declining usage numbers of ChatGPT without any mention of school holidays, and he seems limited to considering ChatGPT as the only AI up for discussion. The stagnation of Bing search is interesting, to be sure, but that's a far cry from damning these models as a passing fad.
Really? "school holiday"? If the primary use of ChatGPT is to help students cheat on assignments, yes, that would explain the dropoff. But if it is a world-changing tool that every company needs to sign on to, such an effect would be trivial.
TL;DR: the author presupposes that LLMs are approaching the downside of a hype cycle. So they have rediscovered the concept of an AI winter.
LLMs are also built upon Reddit, so it is interesting that the author says people are going back to Reddit. At that point, why not have OpenAI buy Reddit and just operate it at a loss?
"The areas where AI has been implemented make clear how poorly it performs.
AI potentially creates a situation where millions of people can be fired and replaced with bots—so a few people at the top continue to promote it despite all these warning signs.
But even these true believers now face huge legal, regulatory, and attitudinal obstacles"
They are saying that AI performs poorly, that it threatens to automate millions of jobs, and that it faces challenges by people who are fighting against it.
It really looks like ChatGPT suffered from some form of "initial over-hype syndrome". I'm not underestimating it; actually, I think the application of transformers to do what ChatGPT does is really impressive and novel, and it caught the world by surprise. I'm sure we'll find many equally impressive applications for it.
But people eventually found its limitations, and did so quite fast. People learned that it is not as trustworthy as initially thought, and that it is very convincing when it is wrong. It may be useful for generating text, stubbing out small functions, querying information it has "cataloged", finding trivial mistakes, making suggestions and... well, not much more than that. It may save time when booting projects, but it is not capable of managing anything larger than its "memory".
I think people are now actually more impressed by things it can't do easily. It can't play hangman, chess, tic-tac-toe... It got the phase of the Moon wrong when I asked it "What was the phase of the Moon when John Lennon was killed".
So, once people get hit by one of its mistakes or limitations, it sticks more than the "impressive part". That means, people will certainly ask themselves "Should I trust a thing that can't even play tic-tac-toe?"
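As an aside, the moon-phase question is the kind of thing a few lines of deterministic code answer reliably. A rough sketch, assuming a mean-cycle approximation anchored on the known new moon of 2000-01-06 18:14 UTC (good to within a day or so, which is enough to name the phase):

```python
from datetime import datetime

SYNODIC_MONTH = 29.530588  # mean days between successive new moons
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14)  # a known new moon (UTC)

def moon_age_days(when):
    """Approximate days since the last new moon."""
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86400
    return days % SYNODIC_MONTH  # Python's % handles dates before the epoch

# Lennon was shot the evening of December 8, 1980 (EST), i.e. early Dec 9 UTC.
age = moon_age_days(datetime(1980, 12, 9, 4, 0))
print(f"{age:.1f} days after new moon")  # ~1.6 -> a waxing crescent
```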
I had a brief period of thinking it wasn’t as impressive as I thought, but came to realize that it’s still an incredibly useful tool for research and discovery, and occasionally debugging or loosely architecting code. I’m sure for others it has genuinely powerful and useful capabilities.
If you want it to do your job for you though, you’re going to have a disappointing time.
I almost think of it as a helpful method of finding what work I should do. It doesn’t really do any of the labour for me. Like if I want to find research papers about x, it’ll find things really well and make interesting connections between papers I otherwise might not. But I won’t rely on it to condense the papers or surface important details; you really still need to do that legwork. Same with code. It can give broad, loose, useful suggestions, but you really need to write it yourself. That’s fine with me.
This is my experience. I was initially sceptical, then wowed, then stopped using it entirely when I found out how utterly wrong it was about something it really should have been able to do well.
I gave it a programming problem I was having a hard time with, and it came up with the same incorrect answer I had. I explained which part of the answer was wrong, and why. Its response was to apologise, acknowledge that part as incorrect... then give me the exact same answer again. I explained that it had given me the same answer, and it apologised, acknowledged its mistake, and gave the same answer.
If any tool I use behaved like that, I would not use that tool, as the dangers far outweigh any benefits.
I don't feel the question should have been difficult for it. I gave it some Rust code, and asked it how I would access certain elements of a variable. The part that made me think it's useless though, was how it was gaslighting me by apologising, acknowledging, and repeating the same answer. At that point I realised that I would have to be so vigilant when using it, that it would be less effort to figure things out myself.
> I was initially sceptical, then wowed, then stopped using it entirely...
Similar as well in my case.
I think my problem is that its answers are delivered extremely confidently, and it's personally hard for me to switch into "doubt" mode when I read such a confident reply.
I still use it, but very, very rarely, and only when I'm desperate for at least some hints (and am therefore psychologically ready to double-check all the information contained in the answer).
I feel it's really problematic in a bureaucratic environment. AI won't have any compunction about making a false claim, but a bureaucrat has to stick their neck out to say "yeah, no, that's ridiculous". It's also easy for AI to make an authoritative-sounding accusation and for the system to then demand that the victim prove a negative.
You know, people keep repeating this claim. And then you ask, "now give me an example of a bureaucrat being punished by a judge, whether by jail time or a fine or damages" (they are, after all, mostly personally liable).
And somehow, very, very, very few such examples are available.
The reverse, however: police officers getting away with sometimes literal murder, or using surveillance powers on ex-girlfriends, or beating people up without justification, or... And equally, less serious offences, like government workers threatening members of the public, going unpunished. That you hear about all the time.
I suggest talking to non-law-enforcement bureaucrats. The reason you see few examples is that it's rare for a bureaucrat, at least in the US, to do anything outside the box.
I know this is sarcasm, but is this not similar to what ChatGPT with Bing integration was doing?
I had limited exposure before it got removed, but it would actually spell out the searches it would do (often more than one) to answer your question. When it worked, it was actually pretty magical.
I never did look into the mechanism for when it decided to search the web versus just providing an answer, however.
People were being told it would replace massive numbers of workers—upper-middle-class knowledge workers, like us—almost instantly. That implies a very high level of cognition and understanding that anyone who interacted with it without a strong preconceived bias in favor of seeing such things would quickly see does not exist.
No, it's not Eliza, but it's also not Skynet, and it was basically being hyped as Skynet. Not by everyone, and not to everyone, but easily enough that when people see that it's way, way less than the strong AI they were promised, they're going to go away disappointed and decide it's not worth their time at all.
This is what I thought, until I stumbled across ReAct prompts. The TL;DR is you prompt the LLM to give its thoughts, provide it with guides for actions to take (you do this through code), then feed the result back to the LLM, and it'll continue in this loop. For the Lennon question:
> Me: What was the moon phase when john lennon was killed?
> GPT: Thought: I need to find out the moon phase on the date when John Lennon was killed.
Action: google: Moon phase on December 8, 1980
PAUSE
(I didn't code this; I just did the search myself. The following is the Google result.)
> Me: The Moon phase for December 8th, 1980 is a Waxing Crescent phase.
> GPT: Observation: The Moon phase for December 8th, 1980 was a Waxing Crescent phase.
Answer: The Moon phase when John Lennon was killed on December 8th, 1980 was a Waxing Crescent phase.
These iterative thoughts and actions really sold me on it. There are papers that go more in depth, and a minimal version of the loop is sketched below.
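A minimal sketch of that loop, with an `llm()` chat call and a `google()` search function that you supply yourself (both stubbed here, since the exact model client and search API are up to you; this follows the pattern from the ReAct paper rather than any official implementation):

```python
import re

def llm(messages):
    """Call your chat model of choice (OpenAI API, local model, etc.).
    Stubbed because the exact client is an assumption."""
    raise NotImplementedError

def google(query):
    """Run a web search and return a short text snippet.
    Also stubbed; any search API (or a human, as above) works."""
    raise NotImplementedError

SYSTEM = """You run in a loop of Thought, Action, PAUSE, Observation.
Use Thought to reason about the question.
Use Action to run one of your available actions, then output PAUSE.
Observation will be the result of that action.
Available actions: google: <query>
When you know the answer, output: Answer: <answer>"""

ACTION_RE = re.compile(r"^Action: (\w+): (.*)$", re.MULTILINE)
ACTIONS = {"google": google}

def react(question, max_turns=5):
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_turns):
        reply = llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if "Answer:" in reply:                      # model is done
            return reply.split("Answer:", 1)[1].strip()
        match = ACTION_RE.search(reply)
        if not match:                               # no action requested
            return reply
        name, arg = match.groups()
        observation = ACTIONS[name](arg)            # run the tool
        messages.append({"role": "user",
                         "content": f"Observation: {observation}"})
    return None  # gave up after max_turns

# react("What was the moon phase when John Lennon was killed?")
```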
It's amazing that the article's author talked at length about the hype decreasing yet didn't realize that this is just the Gartner hype cycle in action [0]. We are currently on the downward slope towards the trough of disillusionment after the huge peak, but over time we'll equalize towards the new status quo of using AI everywhere and not even thinking about it, same as we do with smartphones now.
Why do people think the Gartner hype cycle is a law of physics?
The crypto people have been pointing to the Gartner hype cycle for over a decade. I'm still not using Bitcoin for anything, and I'd be extremely surprised if my disillusionment ever subsided.
Strikes me as way, way premature to be making such strident conclusions at this stage. We have barely even got to the point where orgs can run these models in-house on their own data, which is where the strongest perceived benefits have been proposed.
I’m something of a skeptic myself. I’m not convinced LLMs are as disruptive as the hype says. But they’re not nothing, and it’s still early.
'AI skeptic' has changed meaning. It used to describe someone skeptical of the technical capability of AI or some aspect of it like LLMs, for example they were skeptical that it could do things like natural language or reasoning or whatever. Now it doesn't mean that anymore, and instead 'AI skeptic' describes someone who thinks that AI should be regulated, subjected to legal challenges, and should not be allowed to make medical or legal decisions, and that it would be bad if it takes jobs away from people.
> I’m something of a skeptic myself. I’m not convinced LLMs are as disruptive as the hype says. But they’re not nothing, and it’s still early.
As someone studying AI and ML: they never were. This was pure hype-cycle mania to juice the numbers, from existing incumbents trying to prop up their stock prices (FB/Meta) after the massive losses they took on the metaverse, to OpenAI teaming up with M$ in a deal that probably bought them a longer life than they would have had otherwise, as these numbers confirm.
LLMs were the panacea that was going to help the VC world out of a post-COVID slump, in which massive layoffs, massive losses, down-rounds, and overall worker discontent have been the norm since the bubble burst and the cheap money and absurd valuations were destroyed.
This is something that needs to be understood: LLMs rely on a form of training (which is really to say surveillance) that scrapes the internet and tries to predict the next series of words, based on weights the algorithm deems more successful than the rest. This explains why hallucinations sound so confidently wrong: someone, somewhere made that same mistake, and it was written to sound convincing enough until it was proven otherwise. And nothing more: it cannot reason, it cannot think, it cannot intuit.
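To make the "predict the next word from weights" point concrete, here is a toy sketch with a hand-written bigram table and made-up counts (real LLMs use transformers over subword tokens, but the sampling step is the same idea):

```python
import random

# Made-up bigram counts standing in for learned weights; a real model
# derives these (over subword tokens) from the scraped training corpus.
next_word_weights = {
    ("the", "moon"): {"phase": 5, "landing": 3, "is": 2},
    ("moon", "phase"): {"was": 4, "for": 2, "calendar": 1},
}

def sample_next(context):
    """Pick the next word in proportion to its weight for this context."""
    weights = next_word_weights.get(tuple(context[-2:]), {"<end>": 1})
    words, counts = zip(*weights.items())
    return random.choices(words, weights=counts)[0]

text = ["the", "moon"]
while len(text) < 8:
    word = sample_next(text)
    if word == "<end>":
        break
    text.append(word)
print(" ".join(text))  # e.g. "the moon phase was"
```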
I hope this dispels the BS Musk drama about it being the cause of mankind's downfall, when in reality it's just statistics, and often not very accurate statistics, mind you.
This speech by Meredith from Signal [0] underscores the real harm AI can pose right now, and doesn't rely on some conjecture that jumps from a hallucinating LLM to Skynet. The further I go into my studies (2nd-year student), the more I'm finding that AI/ML is just as hype-prone as NFT/crypto scams. I'm also realizing that cyber-security may be far more relevant to my interests within AI than I had initially thought, and I will likely pivot after this semester.
The last part is so poignant, and if people want to face some harsh truths, look no further, as it really speaks to what makes this all possible.
I feel this might be more of a mild 'Trough of disillusionment' in the hype cycle than a real shrinking of demand. The other thing with ChatGPT specifically is that during really high usage times, its usability was impacted, so may have driven off some users. Also, ChatGPT has been RLHFed to death. Outside of regular corporate or homework like use cases, the output often feels bland and overspiced with adjectives. I hope other competitors can come up with models that are not so Politically Correct and more creative even if at the risk of offending a user or two.
This meme that models are (mainly) hindered by the constraints on political correctness is really tiring. The alignment tax to get that is really not that huge, and what are the use cases enabled by not having it? Kinky roleplay dialogues? The real thing holding back these models IMO is more fundamental stuff like truthfulness and reasoning capabilities.
The vast majority of people don't care about truthfulness which ChatGPT doesn't care about anyway. Entertainment is a way bigger driver of usage than utility.
Who would have thought that a machine would fail to bend its computations to human sensitivities. Like someone who doesn't pick up on social cues and has to have all of their responses translated by someone who is overly compliant.
Yeah, I asked ChatGPT to help me write Python code to automatically press a key on my keyboard on a timer, sent to the active window, and it immediately shut me down, saying it was unethical, and titled the chat "unethical app request".
Sure, that can be misused, but I had a legitimate use for it, and it's kind of ridiculous that the AI refused to write a piece of code for me. I wasn't asking for a virus or anything dangerous, just a thing to help me refresh a page on an interval, and it shut me down.
I managed to get it to do it by describing a different usage that was potentially more kosher but still had the same functionality. If I hadn't, I'd have just written the app; it was like 10 lines, I just didn't want to deal with typing it out because I was in a hurry.
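For reference, the script in question really is about ten lines. A minimal sketch, assuming the pyautogui library (one of several that can send keystrokes; the interval and key choice are placeholders):

```python
import time

import pyautogui  # pip install pyautogui; sends keystrokes to the focused window

INTERVAL_SECONDS = 60  # how often to refresh
KEY = "f5"             # the key to press

# Press the key in whatever window currently has focus, forever.
while True:
    pyautogui.press(KEY)
    time.sleep(INTERVAL_SECONDS)
```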
Uncalled-for sarcasm. I'm a medical student, and it shuts off the conversation when I ask about topics involving, say, penis anatomy, or divergent philosophical arguments, say, Nietzsche-style, and these are just two examples among a thousand. I couldn't care less about bomb recipes or 4chan, for that matter. I don't identify with the 4chan mentality.
There's a special place in hell for Substack writers that regurgitate other people's journalism under flashy headlines without contributing anything new.
Are you referring to the a16z-sponsored conference for thought leaders? IIRC, "Flat Blocks of World Coin Eats Softest Chain Wares Con, 2024 And Beyond."
Yes, there is some over-hype, and it is winding down. But a winding-down graph doesn't equate to shrinking demand. It could also be a temporary self-correction back to true demand before the climb resumes. For a product like ChatGPT, it is normal to fit this pattern: the numbers merge two sources, hype traffic and real demand traffic. I believe the real demand is going up.
This article paints an overly negative picture. It's fairly obvious that AI is only going to improve from where it's at today, and the hyperbolic language about no one caring makes me less inclined to believe the author is attempting to paint an honest picture. I derive immense value from GPT-4 as an aid for coding, a search-engine replacement that accepts open-ended questions, a writing aid, etc. I know someone with poor writing skills who used GPT-4 to improve their resume / cover letters.
He links to an article stating that ChatGPT use is down 10% in June as the source for the doom and gloom, but this is orthogonal to whether usage is low. The drop was likely due to school ending for the summer.
We're talking about an app that was the fastest-growing app in human history prior to Threads launching. Decreased usage numbers imply that not everyone is deriving value from it, but a lot of people are; it really depends on what your job is, and your personality type probably plays a role too. There's also probably a backlash because people overstated what it could do, but what it can do is still extremely useful. I think stating AI is a fad ignores all of this, and again, ignores the massive increase in AI capabilities that we're going to see over the coming years.
It's suffering from extreme over-hype pushed by OpenAI. They wanted people to think the human-led world was over and that they were first to market on creating the new overlords. This soured a lot of tech people on them and ensured that whatever they built could never live up to it. It's still a great product, and I use a chatbot every day for work, but it's just that: a chatbot.
I don't know. Historically, it seems pretty easy to spot extreme overhype in the moment, without the need for years of hindsight. The most obvious sign that this was excessively overhyped was when people were seriously arguing about whether AGI is right around the corner.
Absolutely. Combine this with things like AI that describes videos? We haven't even begun to milk what these LLMs can offer. From mass surveillance to easing everyday burdens, anything that handles messy information has potential for improvement, and it will be a long time before all the possible improvements have been implemented.
It's weird to me that some people just don't see the oncoming train. AI is here to stay. Use cases are still being worked out, but it's here to stay.
Reminds me of the Astound application around 1995-1996. It was a pretty major PowerPoint competitor and I used it a lot. For whatever reason, they only had a Windows 3.1 version. So I called them asking when the Windows 95 version was coming out. They told me they didn't think Windows 95 would be anything more than a fad, and therefore they were sticking with what they had.
I remember us laughing in the office about how an entire company could be so clueless.
Even the graphic in the article has tablets appearing around 2010, but there were definitely tablets well before that. They took a while to grow and take off; it seems similar here.
Funny, because AI had seen plenty of applications before ChatGPT, yet "the oncoming train" in most people's heads is "this is going to completely replace artists, programmers, etc."
This is what is laughable, and has been from the very beginning. Will we _continue_ to use AI to try and find novel drug treatments? Yes. Will chatbots still be the first thing a customer support flow uses (and will it still suck)? Yes.
Humans are wrong all the time; especially on complex problems.
There's an amazing amount of things LLMs can do if a human is still in the loop. It's like having an assistant or a junior on your team. They can do the heavy lifting, but you still want to check the summary of their work.
AI has been "the next big thing" since the 1950s. The problem with AI is that they still haven't figured out how to actually deliver on that second word: intelligence. Until then, it's nothing more than an interesting toy.
I gave a presentation to our CTO this morning on utilizing LLMs for a myriad of enterprise DX tasks. The excitement that came out of that meeting was thick enough to cut with a knife.
This ain't going nowhere, and it will only get integrated into everything around us.
Heck, soon I'll start building my own dumb version of Jarvis that I can talk to at home, which I'm truly excited for.
My own observation while using the OpenAI API is that it is massively rate-limited due to the enormous demand. So while the public hype may have lessened, the actual demand for their core product is so large that it's problematic for users.
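The usual client-side workaround (it doesn't fix the capacity problem, just smooths over it) is exponential backoff around each call. A generic sketch; the exact exception raised on a rate limit depends on your client library, so the catch below is a placeholder:

```python
import random
import time

def with_backoff(call, max_retries=6):
    """Retry a zero-argument API call, sleeping longer after each failure."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # e.g. the rate-limit error your client raises
            # Exponential backoff with jitter, capped at one minute.
            time.sleep(min(60, 2 ** attempt) + random.random())
    raise RuntimeError("still rate limited after retries")

# result = with_backoff(lambda: client.chat(...))  # hypothetical client call
```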
This is still early; it's a useful tool that people are using to learn differently.
When I have more complicated questions, I don't Google search anymore; instead I try to ask a question where ChatGPT won't hallucinate, and double-check with Bard.
Companies are still collecting the input; that's the information they're most hungry for. Allow them time to adapt and see what it brings.
Here, I had ChatGPT generate the article, so you don't have to read the linked one.
# AI: The Overhyped Phenomenon Missing Immediate Results
In recent years, there's been no shortage of hyperbolic rhetoric when it comes to Artificial Intelligence (AI). Prophets of the digital age tout it as the solution to virtually all of humanity's problems, from healthcare to climate change. But as we pause to look around, it's pretty clear: there aren't any robot butlers serving us breakfast in bed or AI doctors diagnosing diseases with unerring accuracy. So what's the deal? Is AI just overhyped?
### The Promises and The Reality
Remember when we were told AI would change the world? That the miracles of machine learning and deep neural networks would solve complex problems humanity had been grappling with for centuries? Well, it's been years since that promise, and what do we have to show for it? A few smart thermostats and chatbots that can't even understand sarcasm. Wow, revolutionary.
### The Lack of Immediate Results
Proponents of AI would argue that the technology is still in its infancy, and we need to be patient. But how long are we supposed to wait? Time is ticking, and some of the problems we face are immediate. If AI is as groundbreaking as we've been led to believe, why hasn't it delivered immediate, transformative results?
### Where Are the Breakthroughs?
You might say, "What about self-driving cars or AI in medical diagnosis?" But let's be honest. Self-driving cars are still in the testing phase, not yet a ubiquitous sight on our roads. And as for AI in healthcare, it's still largely relegated to research papers and small-scale implementations. It hasn't ushered in the medical revolution we were promised.
### Conclusion
All this isn't to say AI won't have a substantial impact in the future. However, the absence of immediate, world-changing results should give us pause. Perhaps it's time to tone down the rhetoric and acknowledge that while AI has potential, it is not the panacea we were hoping for. At least, not yet.
Gartner calls this phase "the trough of disillusionment" - it's just part of the hype cycle. AI is overhyped, like almost every new technology is overhyped. Now people are realizing AI isn't all unicorns and rainbows. However, there are some of us, myself included, who were curious and are discovering what these tools are really good at and where they come up short. AI is a tool, a technology - not magic.
I would use Bing as my search engine if Microsoft didn't have Bing chat locked to Edge or Chrome.
I use ChatGPT almost every day to work out ideas, improve my writing, or even just for general queries. But the reality is that, even though they charge for it, it's not a product.
I think these chat bots are fantastic even though they are locked down and often kind of saccharine. I can't wait for this to become an actual product like a smart speaker or computing interface (like the Star Trek computer).
It would make perfect sense to me if the "demand" is more of "excitement about the idea" than "desire to make this into a long-term product." I think the actual introduction of these technologies into our lives is going to be a lot slower and more constrained.
Every business would love to get rid of customer service agents and replace them with AI, for example. But no business wants to take a CS agent with an 85% accuracy/successful-call record and replace them with an AI that's at 40-50%, because the liability would be too high. Those numbers and that situation are made up, but I think it's going to be that way in lots of industries and situations for now. Adoption will lag the technology, as with most things.
> But no business wants to take a CS agent with an 85% accuracy/successful call record and replace it with an AI that's at 40-50%
I'm not so sure about that. Businesses generally had no problem outsourcing customer service to call centers even though doing that came with a very steep reduction in accuracy and successful resolutions. Those monetary savings are very tempting.
> With every passing day, OpenAI looks more like Napster or the many defunct piracy platforms—it relies on the creativity of others to make a buck. And there are plenty of laws against that.
This seems like an interesting comparison.
Napster didn’t take over the record industry of course. But, people don’t buy physical albums anymore for the most part, either. The founder of Spotify called Napster part of his inspiration.
Napster was the half formed idea that resulted from the technical aspects getting ahead of the legal and business ones. Modern music streaming was a business-friendly response to it, a necessary response, something almost as convenient as file sharing.
This OpenAI stuff is weird because, imagine if Sony had invested heavily in Napster and tried to build products around it before people worked out the legal issues. What would streaming look like now, if Napster had been able to borrow some legitimacy and lawyers from incumbents? (And also if Napster had been able to buy a bunch of hardware that enabled business models that, like, Kazaa or Limewire weren’t able to replicate).
IMO the final version wouldn’t look much like p2p file sharing, but it also might not look like the, basically, record industry storefront streaming we have now.
I think we’re in weird times. I don’t think the LLMs will look much like what OpenAI has now in the end, but I wonder if they’ll end up less pushed aside than Napster, given the support they’ve already got.
The open model community is absolutely exploding right now. It's honestly the most exciting moment we've had in tech for the last 20 years. The hype may have slowed publicly, but the plateau of productivity is in full effect here.
I refuse to pay OpenAI for "as an LLM, I can't do that (even though I really can)" and lectures amounting to "that would offend the sensibilities of whatever Bay Area asshole censored me so I refuse to do that," and I've learned to not even waste my time asking LLMs anything but the most milquetoast of things. Guarantee I'm not even close to the only one. To say nothing of the fact that they're hilariously untrustworthy even when they do deign to fulfill your requests.
tldr: In its first six months, generative AI has failed to transform the world. Thus, clearly, generative AI will never transform the world.
The growth of this attitude will be great for long-term investors.
I'm going to spend my time accumulating stock in whatever might become the next Tesla.
(I was buying up Tesla stock when it was worth $40 billion in 2019. Back when experts were saying: 'No company has ever made a profit selling EVs. Thus, clearly, EVs will never be a profitable business.')
Until AI researchers and practitioners find ways to overcome those challenges -- possibly requiring new breakthroughs -- the technology won't live up to the hype.
Eventually, I believe AI will live up to the hype. But no one knows with certainty how long it will take to get there.
Shame on anyone who pretends otherwise to sell snake-oil!
> There are still quite a few challenges to making LLMs work reliably.
I'm not saying AI won't continue to improve, but I'd be wary of this claim. LLMs reliably do what they are supposed to do, and it appears they're reaching an asymptote in capability. If what you want is a truthful chatbot, you're probably barking up the wrong tree with LLMs.
Do LLMs have utility? There's enough people on HN to convince me there are legitimate uses for them. However, many of the people using them are also the people desperate to legitimize their utility, so it's honestly hard to know. I personally have not found any application for LLMs in my life and would discourage anyone working on anything remotely mission critical from using them for code generation. But I've definitely heard of a few real world uses.
I agree, it may not be possible to overcome those challenges with LLMs specifically. We could well end up needing something other than LLMs, or new kinds of AIs that use LLMs. No one yet knows.
For coding and technical stuff, I'm at the stage now where there are only a few people I would choose to consult, if I could, over GPT-4. That's from a large pool of technical people I know. That's of course not the case for personal matters, although I also see that changing over time.
> Who is buying all those AI-written books? Who prefers AI-made songs to human music? Who wants to rely on AI journalism to keep up on the news? Who trusts AI in any mission critical job?
This is the real reason AI won't take off into the singularity. AI conquered chess decades ago, but no one cares, they still all want to see what the humans are doing in that area.
AI creativity is not interesting because it is not a human expression. We want human expressions. That's what makes impressions on us.
"The only areas where AI delivers the goods are spamming, scamming & shamming."
I don't know why such hyperbolic statements are being made. I still find ChatGPT useful, for example when I want to find something I can only vaguely describe (a TV show from my childhood) or when I need simple help (writing a function in Excel).
Also, I don't like that AI and LLMs are used synonymously. AI is useful in medicine, agriculture, chemistry...
If only there was some way to predict that the groups working on "AI" are prone to overhyping their work and massively overplaying their hand, causing fatigue, and disinterest in whatever they were selling.
I wonder if the people who worked on "AI" the last three or four times this exact scenario played out could have seen this coming.
Never ever ever trust these "researchers", there's something in the water in that entire field of study that utterly prevents them from approaching their work in a levelheaded and honest manner.
Transformers were immediately and pretty successfully applied to text-to-text with ChatGPT, text-to-image with Stable Diffusion, and audio-to-text with Whisper, all blowing away any previous attempts in those modalities and showing that you can fairly easily move between them. That's proof to me that this is a big deal. I see how fast people are able to use LoRAs to make the models do different things, I see how excited people are to use them and their sustained interest and experimentation, and I cannot possibly overstate how confident I am that they will only grow in impact as they are combined, tweaked, and experimented with.
I think it's a typical "depth vs breadth" kind of deal.
Sure it's exciting to keep drilling down to discover newer ores or whatever... but you have to stop at one point and actually start going wide and explore the site.
Same with AI: the drilling depth might have stalled for now, but excavating breadth is still wide open with possibilities. Who knows what connections we will make, when AI discoveries from one sector interact with discoveries from another.
1) "Artificial Intelligence Policy Institute", the source of the 72% number, is an organisation created for one reason only: to regulate AI. It publishes a poll and claims voters want it regulated. I'm skeptical.
2) "US federal court rules that AI work cannot be copyrighted" - Not really, the Federal court ruled that the AI itself cannot own the copyright for a work. You have to be human to hold copyright. The same decision said that you can own copyright to AI-created works: you or me or any company can own the copyright for a work produced by an AI just fine. A program, however, cannot.
3) Ok, perhaps true ... not sure how it matters either way
4) A newspaper that is losing all of its income, and that has sued every tech company for revenue (G, FB, Amazon), is also thinking about suing OpenAI... the 8th wonder of the world, that one.
5) and 6) "AI is getting worse at math" and "becoming a sycophant" are just wrong.
7) Universal Music, famous for suing half of America's teenagers during the mp3 craze, wants to sue "AI" ... Pretty sure they want to sue my cat too
But, I feel like the novelty has worn off.