Taking over subreddit with a small GPT-3 army turns out to be surprisingly easy (old.reddit.com)
49 points by EGreg | 2022-12-19 16:54:58 | 24 comments




Scary stuff, and it makes me think that the days of anonymous online forums are coming to a close. Even HN, unfortunately. I'm not sure how you stop this from happening without increasingly invasive KYC-type measures.

I wonder if it's already somewhat pervasive on HN.

I see posts that don't quite make sense. Well, we have people here whose first language is not English, so you can't expect perfection.

I see posts that have a definite agenda, or spin on reality, or bias. But humans have those things, too.

But I see the volume of this going up in the last couple of years. I wonder...


There’s a certain topic I see those on, but I think they’re a mix of state actors and rabid partisans wrt the issue in question.

Similar from 4chan: https://www.youtube.com/watch?v=efPrtcLdcdM

And they only caught 1 bot out of 9!!


Related, ish: Twitter bots potentially posting ChatGPT-generated content https://twitter.com/levelsio/status/1604841600416624642

For malicious actors, life is getting easier. We may find social networks or online circles where being properly introduced becomes a thing again.

Isn’t that already a thing? In my world all social networking is via group chats and text threads.

That may be true of Facebook, but is the same true when reading Reddit, Twitter, Stackoverflow, etc.?

How would you tell how much to trust what you read?


Checking sources, looking for tells, assuming the other person isn't your friend until some basic trust has been established?

I've been online since I was 13 or so. I saw the tail end of Web 1.0 and old-school forums, and back then nobody knew who you were, so the default assumption was that you were full of shit until proven otherwise - and that was before 4chan got big.

Social networks were never really social. Now, with the pressure of avoiding impostor bots, I think there might be niches for actual web-of-trust social networks again.


Or, to have meaningful conversation, you'll have to register with some identification and then be under scrutiny for trollish behavior. That at least minimizes bad actors; you'll never get rid of all of them.

Or a web of trust, as done in PGP, to vouch for users you don't directly know.
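
Conceptually that's just a reachability check over a graph of signatures: you trust anyone you can reach through a short chain of vouches. A minimal sketch of the idea in Python (toy data and made-up names, not real PGP key handling):

    from collections import deque

    # Toy vouch graph: each user maps to the set of users whose keys
    # they have signed (i.e., vouched for).
    vouches = {
        "alice": {"bob"},
        "bob": {"carol"},
        "carol": set(),
    }

    def is_trusted(me, target, max_hops=2):
        # Breadth-first search: trust anyone reachable from you
        # through a chain of at most max_hops vouches.
        seen, queue = {me}, deque([(me, 0)])
        while queue:
            user, hops = queue.popleft()
            if user == target:
                return True
            if hops < max_hops:
                for other in vouches.get(user, ()):
                    if other not in seen:
                        seen.add(other)
                        queue.append((other, hops + 1))
        return False

    print(is_trusted("alice", "carol"))    # True: alice -> bob -> carol
    print(is_trusted("alice", "mallory"))  # False: nobody vouches for mallory

Real PGP layers key signatures, trust levels, and revocation on top, but the core is this graph walk.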

Won't be long until the Government puts itself in charge of this, making sure you only communicate with approved sources. For the children, of course. And terrorists, or something.

I read a great comment somewhere but forget the details. It basically said that human algorithms are generally predictable and will be easy for AI to simulate in the long run; the real concern is unpredictable AI that isn't human-based. Something like that.

tl;dr, but not surprising at all. Forums (including HN) haven't evolved to deal with motivated saboteurs yet. Basically, any content with correct grammar and spelling has already cleared the most useful moderation barrier, which historically kept out the inarticulate lunatics and primitive bots. Probably the only way to deal with this is networks based on invitation (like lobste.rs) and a twitchy ban hammer.

Doubt that would work to keep bots out.

Anyone can invite bots or hook a bot up to their own accounts. Then what?


Then anyone who is higher up the invite ladder can ban them, and everyone they invited. Shouldn't take long to weed out the few miscreants.
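
Mechanically that's just pruning a subtree of the invite graph. A rough sketch in Python, with hypothetical structures rather than any real forum's code:

    # Toy invite tree: parent -> list of users they invited. Banning a
    # user also bans everyone downstream of them. All names made up.
    invites = {}
    banned = set()

    def invite(parent, child):
        invites.setdefault(parent, []).append(child)

    def ban(user):
        # Ban the user, then recursively ban everyone they invited.
        banned.add(user)
        for child in invites.get(user, []):
            ban(child)

    invite("root", "spammer")
    invite("spammer", "bot1")
    invite("spammer", "bot2")
    ban("spammer")
    print(sorted(banned))  # ['bot1', 'bot2', 'spammer']

Banning the whole subtree makes every invite a liability, which is what deters handing invites to bots in the first place.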

How would they ever find out in the first place?

The same way people find out now - users not contributing meaningfully, or sabotaging communication by being obtuse.

Making a popular public space invitation-only or super ban-happy means the attackers won. In that case they have successfully made the space inaccessible to the public, or made it so that normal users are now getting banned.

Nah, the public can still read everything on there. Just not participate, unless someone already in the community deems them trustworthy.

If I owned a social network, I would start trying to differentiate on the quality of generated content instead of demonizing automated user agents. I think it's definitely a can't-beat-'em-join-'em type of scenario.

If some bots are better, then they are all better, and they'll dwarf human content.

Then when they bring capital too, why have humans at all? What would the humans add to a typical discussion?


I have always thought that AI could generate authentic-looking comments, and concluded that Reddit would mostly become a gigantic playing field for astroturfers.
