
The real underlying issue is that high profile accounts are targeted by groups of users who "report abuse" simply because they don't like that sports team/politician/etc...

High profile accounts cannot work under identical rules or they'd simply all be suspended all the time.




You have to remember that high profile accounts get 10,000x as many abuse reports as a normal account - nearly all of them bogus. The normal automated moderation functions simply do not work at that volume.

Many users will simply mash the "report abuse" button if they see a politician they don't like, or a sports player for an opposing team.

If the normal rules applied identically to everyone, all high profile accounts would simply be inactive in perpetuity.

Maybe a better system would penalize reporters who are found to have reported content that does NOT violate content policies?
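
A minimal sketch of that reporter-credibility idea, in Python. Everything here (the class, the Laplace smoothing, the squared weighting) is my own invention for illustration, not anything a platform has documented:

    class Reporter:
        def __init__(self) -> None:
            self.upheld = 0    # reports moderators confirmed as violations
            self.rejected = 0  # reports found to be bogus

        @property
        def credibility(self) -> float:
            # Laplace smoothing: brand-new reporters start near 0.5.
            return (self.upheld + 1) / (self.upheld + self.rejected + 2)

    def report_weight(reporter: Reporter) -> float:
        """Weight a new report by the reporter's track record, so habitual
        false reporters converge toward zero influence."""
        return reporter.credibility ** 2  # squaring penalizes low accuracy sharply

With that weighting, mashing "report abuse" on every post from a politician you dislike quickly makes your reports count for nothing, while accurate reporters keep their influence.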


Then again, if all high profile accounts were exempt from being auto banned then there would be even less chance of problems being brought to light.

This is totally reasonable and arguably necessary for a platform. High profile users are going to be targeted for reporting much more, and the consequences of a false suspension are more visible for them. Think of it like whitelisting users you trust from auto-moderation or rate limits on Reddit or some other site, because the auto mod makes mistakes or you want to give them more options. Obviously FB and Twitter's moderation sucks, but that isn't directly related to this.

Is it really that bad that they apply slightly different sets of rules to accounts with more notoriety?

For example, do we (as Facebook consumers) want a newly created account with an @hotmail email treated the same as a new account with an @doj.gov address, or the same as a celebrity with a million followers?

Do we want the same set of rules for a suspected Russian troll account to be applied to a major politician? (well..some here might, but I don't).

I think as your account's age, status and popularity grow, you should be given *some* flexibility under the rules. Imagine a points system behind the scenes, where bad things earn you points and other things remove points. Past a certain point threshold you are banned, suspended, etc.
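
To make that concrete, here is a toy version in Python. Every weight, threshold, and the standing formula are made up for illustration, not a claim about how any real platform scores accounts:

    import math
    from dataclasses import dataclass

    @dataclass
    class Account:
        age_days: int
        followers: int
        points: float = 0.0  # accumulated violation points

    def suspension_threshold(account: Account) -> float:
        """Established accounts get a higher threshold: *some* flexibility,
        not immunity."""
        base = 10.0
        age_bonus = min(account.age_days / 365, 5.0)     # up to +5 for account age
        reach_bonus = math.log10(account.followers + 1)  # grows slowly with popularity
        return base + age_bonus + reach_bonus

    def record_violation(account: Account, severity: float) -> str:
        """Add points for a violation; suspend once the threshold is crossed."""
        account.points += severity
        return "suspend" if account.points >= suspension_threshold(account) else "ok"

The key design choice is that the threshold grows logarithmically with reach, so a celebrity gets a somewhat longer leash, not an infinite one.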


My guess is that when such high profile cancellations happen, it's more a matter of perceived zugzwang.

Some platform does a high profile ban, and then the PR inboxes of the other platforms are set ablaze with journalists asking for comment on how your particular platform wants to handle that person - high profile bans make nice clickbait articles, after all - and the twitters and reddits and youtubers of this world talk about what those other platforms are doing, are not doing, or should be doing. At that point the other platforms have to do something: either issue a ban as well, or issue some kind of statement on why they won't be issuing a ban at this time. Usually just issuing a ban as well is a lot easier and PR-wise "safer" than trying to come up with a "defensive" statement of why such a ban is not (yet) appropriate.


No, that's exactly the problem. Nobody knows why they ban accounts. And once an account is dead, it stays dead, no matter how much bad press they get (see the guy who sent a photo to his doctor). I've seen it well described in another comment: it may as well be a guy throwing darts at a list of usernames.

It's useful to consider how many accounts that are not high profile were deleted. If these high profile accounts were a casualty of a much more aggressive attempt to silence a segment of voices, then the next question is: what criteria are being used?

My understanding is that part of the reason these accounts were on a special list is that they were getting reported a lot. For nothing.

Like, Doug The Pug might get reported a thousand times per post for animal abuse.


It makes sense they would moderate certain accounts differently. Controversial accounts will get reported because people don't like what's posted, even if the content is perfectly legitimate. I'm sure "Doug the Pug" is constantly reported as animal abuse, for instance [0].

The real issue is that these accounts should be scrutinized even more than "regular" users, not less.

[0] https://en.wikipedia.org/wiki/Doug_the_Pug


I always wonder why they don't just rely on the same user-reporting and automation systems to downrank, gray out, hide from indexing, or mark content as controversial, instead of banning entire user accounts.

Then repeat offenders can be banned with good reason. It seems absurdly heavy-handed to remove legitimate accounts, irreversibly and without recourse, for a single post or comment; this should not be legal.
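
Something like this graduated ladder is what I have in mind; the score cutoffs and action names are hypothetical:

    def enforcement_action(abuse_score: float, prior_strikes: int) -> str:
        """Map a per-post abuse score (0..1, from reports plus automation)
        to a graduated response; only repeat offenders face account-level
        action."""
        if abuse_score < 0.3:
            return "no_action"
        if abuse_score < 0.6:
            return "downrank"              # show lower in feeds and search
        if abuse_score < 0.9:
            return "hide_pending_review"   # grayed out, removed from indexing
        # Clear violations: remove the post, and only ban repeat offenders.
        return "ban_account" if prior_strikes >= 3 else "remove_post_and_strike"

A single borderline post costs the post some visibility; only a pattern of clear violations costs the account.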


I find the account suspensions for innocuous things pretty hard to believe, since I'd suspect any action like that came from your friends reporting your content, not automated culling of users.

I wonder if the solution involves partitioning the social graph to allow accounts to coexist?

Instead of trying to censor accounts: I'm going to assume accounts aren't used purely for offensive content (that's the easy case), but rather that the account is generating "mixed" content.

Bans are a primitive form of isolating a part of the graph. Particularly if they extend to commenting/replying to that account’s posts.

False abuse reports should similarly carry an extremely high cost to the submitter. If an abuse report is flagged as bogus, maybe that account is never trusted to report a post again. Maybe submitting an abuse report should actually carry some real cost (like hashcash).
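
A toy version of that hashcash-style stamp, using SHA-256 with an arbitrary difficulty; this is just the general proof-of-work idea, not hashcash's actual format:

    import hashlib
    import itertools

    DIFFICULTY = 20  # required leading zero bits; tune cost to taste

    def mint_stamp(report_id: str) -> str:
        """Find a nonce whose hash with the report id has DIFFICULTY leading
        zero bits. Cheap for one honest report, expensive for mass flagging."""
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{report_id}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return str(nonce)

    def verify_stamp(report_id: str, nonce: str) -> bool:
        """Verification is a single hash, so the platform's cost is trivial."""
        digest = hashlib.sha256(f"{report_id}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

One honest report costs a second or so of CPU; a thousand spite reports per post stops being free.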


Yes, it bothers me very much. If they want to cripple someone's account, why not just admit it? They are the superuser moderators; who's going to go against them?

It’s fundamental that some members of the public are problems. By prohibiting the content but keeping the people around, they are curating a culture where predators are allowed to roam on the platform as long as they don’t cross a specific line.

They do this because acknowledging and dealing with bad users would also mean impacting fake users that make them money with ad fraud, growth metrics and ginning up engagement.

At the end of the day, if you operate a social platform and you have people uploading child pornography and animal torture videos, you should do everything in your power to make those users go away, for good.


That's beside the point. If your classification system isn't good enough to use on celebrities, it's not good enough to use on regular people either - bans are just as annoying for them, even if they have less voice to complain.

Ultimately, politics aren't the issue. The problem is the lack of clear, consistent, enforced rules, and the absence of consequences for breaking them.

People aren't encouraged to think twice before they post because there aren't going to be any significant consequences for breaking the rules.

Even if you somehow manage to get permanently banned from a social network, it's very easy to come back; it doesn't cost anything besides spending some time creating a new account.

From a business perspective it makes sense - why would you ban an abusive user that makes you money? Just give them a slap on the wrist to pretend that you want to discourage bad behavior and keep collecting their money.

Proper enforcement of the rules with significant consequences when broken (losing the account, and new accounts cost $$$ to register) would discourage a lot of bad behavior to begin with.

You could then introduce a karma/reputation system to 1) attach even more value to accounts (you wouldn't want to lose an account it took you years to level up and gain exclusive privileges on), and 2) allow "trusted" users beyond a certain reputation level to participate in moderation. Reports from those people would be prioritized, and content they report would be automatically hidden pending human review (with appropriate sanctions if the report was made in bad faith), so offensive content gets taken down quickly.
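
A rough sketch of what that prioritization could look like; the reputation cutoff and queue mechanics are hypothetical:

    import heapq

    TRUSTED_REPUTATION = 1000  # hypothetical threshold for trusted status

    class ReportQueue:
        """Priority queue where reports from high-reputation users are
        reviewed first; their reports also auto-hide the content pending
        human review."""

        def __init__(self) -> None:
            self._heap = []
            self._counter = 0  # tie-breaker keeps heap ordering stable

        def submit(self, reporter_reputation: int, post_id: str) -> bool:
            # Negated reputation -> higher priority (heapq is a min-heap).
            heapq.heappush(self._heap, (-reporter_reputation, self._counter, post_id))
            self._counter += 1
            # Caller hides the post immediately if the reporter is trusted.
            return reporter_reputation >= TRUSTED_REPUTATION

        def next_for_review(self) -> str | None:
            return heapq.heappop(self._heap)[2] if self._heap else None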


It is absolutely bizarre how many people I know think social media companies should be able to permanently ban or suspend users for literally any reason, with no possible recourse, while simultaneously thinking these companies shouldn't be able to reclaim usernames without financial compensation.

The problem is not that users aren't following the rules. The problem is that enforcement is, shall we say, very selective. To give but one example, Mitch McConnell's account just got banned for posting a video of a protest outside his house.

I mean, it's still pretty bad, and these accounts (in the examples) should be banned and their owners prosecuted if applicable.
