They spend a lot on moderation because their platforms largely allow anybody to spam or harass anybody else with no barriers. I suspect the lack of compartmentalization, of the kind Reddit or Mastodon has, comes down to money: fewer barriers mean greater engagement, but also more potential for abuse.
It's honestly not clear to me that many of these companies can afford proper moderation. Twitter's revenue is about $1.20 per user per month. Facebook's is about twice that. Proper moderation is expensive: each incident requires significant time from one or more smart people with native fluency and cultural understanding, deep familiarity with the platform rules, and knowledge of all the tricks bad actors will try in order to get moderators to do the wrong thing.
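To make the arithmetic above concrete, here's a back-of-envelope sketch. The revenue figures come from the comment; the moderator hourly cost and minutes-per-incident are purely hypothetical assumptions for illustration:

```python
# Back-of-envelope: how much human moderation can per-user revenue buy?
# Revenue figures are from the comment above; the moderator cost and
# time-per-incident are illustrative assumptions, not real data.

TWITTER_REVENUE_PER_USER_MONTH = 1.20   # USD, from the comment
FACEBOOK_REVENUE_PER_USER_MONTH = 2.40  # "about twice that"

ASSUMED_MODERATOR_COST_PER_HOUR = 30.0  # fully loaded, hypothetical
ASSUMED_MINUTES_PER_INCIDENT = 10       # hypothetical

cost_per_incident = ASSUMED_MODERATOR_COST_PER_HOUR * ASSUMED_MINUTES_PER_INCIDENT / 60

# How many human-reviewed incidents per user per month before moderation
# alone consumes all of that user's revenue?
for name, revenue in [("Twitter", TWITTER_REVENUE_PER_USER_MONTH),
                      ("Facebook", FACEBOOK_REVENUE_PER_USER_MONTH)]:
    incidents = revenue / cost_per_incident
    print(f"{name}: {incidents:.2f} incidents/user/month breaks even")
```

Under these (made-up) assumptions, a single ten-minute human review per user every few months would wipe out the entire per-user revenue, which is the point of the comment.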
And that community voluntarily performs most of the moderation for free. I believe they would have to start paying moderators if the cost of using the platform rose substantially: it would no longer feel like a community, but like working for free for a big corporation. Twitter was barely profitable when Musk bought it, partly because of the cost of thousands of paid moderators.
They do, but that's kind of my point. Libraries full of articles have been written about the negative aspects of these services. Many perfectly accurate, no disagreement here.
But think about it: we have networks on which we can connect to any other connected human being, and post near-unlimited amounts of content that is reliably preserved. We have well-functioning apps, reliable services, and relatively clean content. All for the price of $0 and ads.
What I'm getting at is that all this stuff we take for granted isn't a given, or free, once you scale Mastodon. It costs real money to run instances, which hampers growth and sustainability, and there's a steep human cost to moderation.
Mastodon doesn't lack moderation. In fact, its architecture tends to sort people into instances by interest, making moderation easy as long as you keep your instance small: most of the really bad stuff ends up on a handful of instances that allow it, so you can just block those and only deal with the minor problems on your own instance. Mastodon also tends to have one or more moderators per instance, so it probably has far more people with moderation privileges than a big centralized company, which can only afford to hire so many people in the call center to handle moderation.
Moderation absolutely can scale, platforms just don't want to pay for it. For two reasons:
- Moderation is a 'cost center', which is MBA-speak for "thing that doesn't provide immediate returns disproportionate to investment". For context, so is engineering (us). So instead of paying a reasonable amount to hire moderators, Facebook and other platforms spend as little as possible and barely do anything. This mentality takes hold early, in the growth phase, when users are being added far faster than you can afford to add moderators, but it persists even after sustainable revenues have been found and there is plenty of money to hire people.
- Certain types of against-the-rules posts benefit the platform hosting them. Copyright infringement is an obvious example, but it carries liability, so platforms will at least pretend to care. More subtle are things like outrage bait and political misinformation. You can hook people for life with that shit. Why would you pay money to hire people to punish your best posters?
That last one dovetails with certain calls for "free speech" online. The thing is, while all the content people want removed is harmful to users, some of it is actually beneficial to the platform. Any institutional support for freedom of speech by social media companies is motivated not by a high-minded support for liberal values, but by the fact that it's an excuse to cut moderation budgets and publish more lurid garbage.
I said in another comment on this article that being a mod lets you astroturf heavily and control the narrative. From there, having loads of bots lets you manufacture consent or dissent, and the moderation power lets you remove whatever you'd rather not address.
It's sheer power. It's not about the money, per se... But those with power get money, and those with money seek power.
It has nothing to do with good, in most cases.
(And yes, I'm a moderator of small groups. I just remove spam and malware.)
Good moderation is labor (and likely always will be), and Mastodon by its very nature has a much higher moderator-to-user ratio than Twitter ever had, even for messages originating on and never leaving Mastodon's biggest instances. (Just as, when evaluating school districts, you want a higher teacher-to-student ratio.)
They don't have the resources to take on that much moderation at scale. Maybe they could, given two years and a big tech budget, or six months and a big moderator budget.
The problem is that the costs of content moderation are not linear. You are not dealing with a few thousand trolls; you're dealing with bot farms impersonating possibly over a million accounts: huge networks operated by just a few dozen people.
Automating that away is the only path to being on equal footing. If you introduce any human element, not only will it be a bottleneck, but the cost could be large enough to bankrupt even the largest companies.
Decentralized networks lend themselves to far better moderation than centralized ones. Private forums have always had better moderation than something like Reddit. Well, not all, but you can ignore those ;)
Part of the point, as I understand it, is that moderation scales more-or-less linearly because each instance is ideally human-scale (that is, maybe having hundreds of members as opposed to millions) and responsible for its own moderation, unlike sites like Twitter or Facebook which have to pay a relatively small team of moderators to moderate millions of people's communications.
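The linear-scaling argument above can be sketched numerically. All figures here are hypothetical (instance size, moderators per instance), except the 15,000-strong Facebook moderation team mentioned elsewhere in the thread:

```python
# Hypothetical illustration of the scaling argument: a federated network
# of small instances, each with its own moderators, keeps a constant
# moderator-to-user ratio as it grows; a centralized platform's
# fixed-size team gets diluted as the user base grows.

def federated_ratio(n_users, users_per_instance=500, mods_per_instance=2):
    """Moderators per user across a federation of small instances."""
    instances = n_users / users_per_instance
    return (instances * mods_per_instance) / n_users  # constant in n_users

def centralized_ratio(n_users, total_mods=15_000):
    """Moderators per user on one platform with a fixed team."""
    return total_mods / n_users  # shrinks as the platform grows

for users in (1_000_000, 100_000_000):
    print(f"{users:>11,} users: federated {federated_ratio(users):.5f}, "
          f"centralized {centralized_ratio(users):.5f}")
```

Under these assumptions the federated ratio stays at 1 moderator per 250 users no matter how large the network grows, while the centralized ratio falls from 1-per-67 to 1-per-6,667 as users grow 100x.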
There's nothing saying any particular instance will moderate, but if they don't and it becomes a problem they'll quickly find themselves cut off from federating with other instances.
It's also worth remembering that Facebook has about 15,000 paid moderators. Because they print money, they can just solve the moderation problem with brute force. It's pretty funny when you consider that Facebook employs twice as many people just for moderation as Twitter employed in total at its peak. Obviously the moderators are cheap, but still funny.
Surely the Mastodon team are very concerned with moderation? Given that everyone agrees it is vital for the success of the project (or any social network), why would they consider it outside their purview?
On the contrary, this is one of the few areas where decentralized systems are much better than centralized ones.
The problem is that moderation doesn't scale. Small communities and small instances will nearly always be better at moderation than a large platform, because they can rely on a small number of individuals to get the job done - individuals who are more likely to be consistent, more likely to pay attention to community reactions to posts, and more likely to care in general about what they're doing.
It's difficult for me to think of any large platform (Facebook, GitHub, Twitter, Reddit, Steam, Amazon, etc.) where moderation and general content quality are better than on their smaller counterparts (Mastodon, Itch.io, GitLab, self-hosted forums, etc.).
Large platforms also become large attack surfaces. The same benefits of centralization for users (content discovery, single sources of truth, and so on) make it very cost-effective and efficient to spam and harass users. With a decentralized system, spammers are less likely to care about whatever small community you're hosting. It's not impossible for them to crawl around the internet spamming everyone, but the cost of doing so is far higher than targeting a single platform.
So centralization inevitably leads to large platforms, because the market for these platforms is winner-take-all, and it is almost a kind of pseudo-physical law that large platforms cannot be moderated well. I don't want to be absolutist about it, but I'm really having a hard time thinking of even a single exception to that rule.
Maybe Wikipedia, but I'm even kind of doubtful about that - and most of Wikipedia's quality moderation comes from a group of people who are completely obsessed with the project. That mindset also doesn't scale to Twitter/Facebook sizes.
If they weren't adversarial to the moderators, they would see them as their golden goose. They've done what other social media platforms haven't: solved scaling moderation without having to pay the moderators.
It's probably simpler than that. Their earnings are proportional to click-baitiness, so they optimize for that. Moderation is probably more akin to a bilge pump on a ship: its purpose is to keep it afloat. Free speech is probably just a passing regulatory nuisance that moderation has to account for.
I'd like to invoke Occam's Razor, but then, since when is the most selfish interpretation of an organization's motivations also the simplest one? Maybe it seems simple merely because it aligns with my beliefs. Something to ponder.