
The problem is the humans - the users. They keep clicking on outrage-bait, and re-posting it. AIs are not going to fix the humans. They might turn the gain down a bit, they might cut off some of the worst stuff, but they're not going to fix the problem. The problem is us.



You just gotta get the AIs to do the upvoting, then cut the humans out of the loop altogether and only have AIs read the AI-generated text, and then everything will be fine. Just an endless death spiral of AI generation, AI filtering, and AI consumption, forever and ever.

Presumably at some point computers will become (already are for all I know?) the largest consumers of content on the internet as well as its producers.


It's true... the quality of content on the Internet has a bunch of problems, and AI is just one of them. The economic incentives to trick people into staying on your page and scrolling as much as possible are a fundamental part of the problem. Politically-motivated ragebait and lies are a separate huge problem. AI-generated slop is also a problem for content quality and UX, but I'm far more concerned about the impact of AI on the value of human labor and intellectual property than I am about the UX of my search result pages.

Until tech improves to fix this too. Detecting AI-generated content is a fool's errand, since that is literally how the generators get better.
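To make that dynamic concrete, here's a minimal toy sketch (everything in it - the tell words, the detector, the generator - is invented for illustration, not a real detector or model API): any published detector can be bolted onto the generator as a rejection filter, which is exactly the feedback loop that makes the output harder to detect.

```python
import random

# Toy "AI tells" a hypothetical detector looks for.
TELLS = {"delve", "tapestry", "moreover"}

def detector(text: str) -> bool:
    """Toy detector: flags text containing any known tell word."""
    return bool(TELLS & set(text.split()))

def naive_generate(rng: random.Random) -> str:
    vocab = ["we", "must", "delve", "into", "the", "rich", "tapestry", "of", "content"]
    return " ".join(rng.choices(vocab, k=8))

def evasive_generate(rng: random.Random) -> str:
    # The generator simply resamples until the detector passes it,
    # so publishing the detector is what trains the evasion.
    while True:
        text = naive_generate(rng)
        if not detector(text):
            return text

rng = random.Random(1)
print(detector(naive_generate(rng)))    # often True: naive output gets flagged
print(detector(evasive_generate(rng)))  # always False: the detector was routed around
```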

When AI submits content you don't appreciate, who will you complain to?

That's not a solution, that's just increasing their workload. It doesn't remove the root cause, nor change the fact that people still have to assess a lot of content - which apparently the super-smart AIs can't identify themselves.

100% agree that all the fluff is for the machines and not humans. The web has been broken this way since 2014. The feedback loop is real.

I totally hear you.

The interesting thing about this moment in AI, it seems, is not how good the AIs are, but how bad the lower end of human work and ability is relative to the AI.

In this case, how much SEO-farm rubbish content has been put out by salary-slave humans, and is it really better than being tricked into reading AI content?

Two evils, I know. But maybe there are deeper issues than the mere existence of decent non-general AIs …?


As long as humans continue to filter out the bad content generated by AI, it should be fine.

>Sites may have to find a way to restrict the damage

The best you can do is create AI that detects AI, but that's hardly effective. Twitter can't even get rid of "PUSSY IN BIO".

>search engines may have to prioritize results from sites that do

You mean the same search engines that prioritize Quora, Pinterest and Medium, which have all also become progressively worse?

> but the traffic will go to the sites that best solve it. Nobody wants to read an infinite amount of AI spew.

We have already witnessed the sites with the most aggressive recommendation algorithms and quickest dopamine generators, like TikTok, win - not the sites with high-quality content.


I have a sinking feeling that the 'solution' will be to use a "good AI" to combat the bad AI churning out ad-ridden clickbait listicles in perpetuity.

The real problem isn't AI content generation, it's a lack of content filtering. There is already a mountain of human garbage, pushed out by people trying to make money, that is too big to navigate; AI will make the mountain much larger, but the problem won't change.

We need AI discriminators that analyze content and filter out stuff that is poorly written, clichéd, trite, and derivative.
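As a toy sketch of what such a discriminator could look like (the phrase list, scoring rule, and threshold are all invented for illustration; a real system would presumably use a trained classifier rather than a word list):

```python
import re

# Hypothetical cliché list; a real filter would learn these, not hardcode them.
CLICHES = [
    "in today's fast-paced world",
    "game changer",
    "at the end of the day",
    "unlock the power",
    "take it to the next level",
]

def quality_score(text: str) -> float:
    """Crude score in [0, 1]: penalize clichés and low word variety."""
    lower = text.lower()
    cliche_hits = sum(lower.count(c) for c in CLICHES)
    words = re.findall(r"[a-z']+", lower)
    if not words:
        return 0.0
    variety = len(set(words)) / len(words)  # 1.0 = no repeated words
    return max(0.0, variety - 0.2 * cliche_hits)

def keep(posts: list[str], threshold: float = 0.6) -> list[str]:
    """Filter out anything scoring below the threshold."""
    return [p for p in posts if quality_score(p) >= threshold]

posts = [
    "In today's fast-paced world, this game changer will unlock the power of synergy.",
    "We measured tail latency before and after the cache change; p99 dropped 40%.",
]
print(keep(posts))  # the listicle slop scores low and is filtered out
```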


I see 2 out of 30 posts about AI on the front page and counted 7 out of 30 on new. Doesn’t seem like a crisis to me. On the other hand, I’m the kind of person who thinks the solution to “too many irrelevant feed items” is to make an AI that filters them.

Yeah, this seems like a justification of the 'stupid consumer' advertising-driven model of human behaviour. The human clicks on ragebait, so it must like ragebait. Until it starts avoiding platforms that send it ragebait - then the human is clearly depressed. Because why else would it avoid its source of ragebait, which is clearly the only thing it cares about?

I've found in general that as platforms like YouTube and Facebook got more optimised for immediate feedback that's supposedly all about my preferences, they became less pleasant overall user experiences. Is it too much to ask for an AI that at least tries to help humans move towards self-actualization? I'm not saying I'd expect it to work out of the box, but some evidence that our long-term interests are actually aligned would be nice.


I've been complaining about this with AI generated content in general as well, especially Twitter and blog posts. I worry that we're in a sort of downward spiral, creating a feedback loop of bad content. Eventually models will get trained on this badly generated content, and it will reduce the overall vocabulary of the Internet. Take this to the extreme, and we'll keep going until everything is just regurgitated nonsense. Essentially sucking the soul out of humanity (not that tweets and blog posts are high art or anything). I know that sounds a little drastic but I really think there's a lurking evil that we don't have our eye on here, in terms of humanity and AI. We've already seen glimpses of it even with basic ad targeting and various social media "algorithms".
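The vocabulary-shrinking part of that spiral is easy to see in a toy simulation (purely illustrative - real model collapse is subtler, but the direction of the effect is the same): each "generation" trains only on text sampled from the previous generation's output, and rarely-sampled words disappear for good.

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: a "human" corpus with a long tail of rare words.
vocab = [f"word{i}" for i in range(1000)]
weights = [1 / (i + 1) for i in range(1000)]
corpus = random.choices(vocab, weights=weights, k=5000)

for gen in range(10):
    counts = Counter(corpus)
    print(f"generation {gen}: {len(counts)} distinct words")
    # The next "model" only ever emits words, at the frequencies,
    # it saw in training - so the long tail erodes every round.
    words, freqs = zip(*counts.items())
    corpus = random.choices(words, weights=freqs, k=5000)
```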

We're already drowning in human generated garbage on the web. Sure, AI makes it worse, but we can't blame anything but our human compatriots and ourselves.

I see where the strikers are coming from, but isn't this an intractable problem? There's no way to tell whether content is AI-generated.

This is what happens when you remove all humans from the loop. The decisions made by software can be manipulated once you have a reasonable estimation of what the software is doing.

This time it's Russia. Next time it will be an American political party (whichever one you don't like). YouTube saved some money on human moderation, and all it cost was selling their platform to the first group willing to abuse the system.

And do you know how they'll respond? By trying to improve the automation. No no, no need to add costs by having humans in the loop, we'll just make better software. Because that's what worked so well for SEO, right?


But here we're letting AI handle the content.

The issue is agency - like the God of the margins, our agency being relegated to the margins at the very least raises some questions for me!


There's a fine line when it comes to AI. On one hand it's not great that OpenAI/MS/MJ get to decide what is okay and what isn't okay. On the other hand without filters, the possibility for misinformation, spam and inciting violence/drama/frustration is high since the cost to create tons of garbage is so low.

I'm more worried about AI-supercharged clickbait content than a GPT-5 singularity.

