Scary stuff, and it makes me think that the days of anonymous online forums are coming to a close. Even HN, unfortunately. I'm not sure how you stop this from happening without increasingly invasive KYC-type measures.
Checking sources, looking for tells, assuming the other person isn't your friend until some basic trust has been established?
I've been online since I was 13 or so. I saw the tail end of Web 1.0 and old-school forums, and back then nobody knew who you were, so the default assumption was that you were full of shit until proven otherwise, and that was before 4chan got big.
Social networks were never really social. Now, with the pressure of avoiding impostor bots, I think there might be a niche for actual web-of-trust social networks again.
Or, to have a meaningful conversation, you'll have to register with identification and then be under scrutiny for trollish behavior. That would at least minimize bad actors; you'll never get rid of all of them.
Won't be long until the Government puts themselves in charge of this, making sure you only communicate with approved sources. For the children, of course. And terrorists, or something.
I read a great comment once but have mostly forgotten it. It basically said that human behavior is generally predictable and will be easy for AI to simulate in the long run. The real concern is unpredictable AI that isn't human-based. Something like that.
tl;dr, but not surprising at all. Forums (including HN) haven't evolved to deal with motivated saboteurs yet. Basically, any content with correct grammar and spelling has already cleared the most useful moderation barrier, which historically kept out the inarticulate lunatics and primitive bots. Probably the only way to deal with this is networks based on invitation (like lobste.rs) and a twitchy ban hammer.
Making a popular public space invitation only or super ban happy means the attackers won. In that case they have successfully made the space inaccessible to the public or made it so that normal users are now getting banned.
If I owned a social network, I would start trying to differentiate on the quality of generated content instead of demonizing automated user agents. I think it's definitely a can't-beat-'em-join-'em type of scenario.
I had always thought that AI could generate authentic-looking comments, and concluded that Reddit would mostly become a gigantic playing field for astroturfers.