
It's not a problem that there's a different technical solution for high-profile users. There's nothing wrong with FB hiring more (or more skilled) moderators for higher-profile users.

The problem is when the rules are not applied evenly, especially when high-profile users with a greater audience can abuse them.




I agree that leveraged manual moderation is the future, and there is no way around it.

The problem for Facebook is that the requirement for more moderators grows roughly linearly with the userbase, but the attractiveness to abusers grows exponentially.
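
A back-of-envelope sketch of the linear part of that claim (every number below is a made-up assumption for illustration, not a Facebook figure):

```python
# Illustrative only: if each user generates a roughly fixed volume of reports
# per day, then moderator headcount scales linearly with the user base.
# All constants here are assumptions, not real Facebook numbers.

def moderators_needed(users, reports_per_user_per_day=0.01,
                      reviews_per_moderator_per_day=400):
    daily_reports = users * reports_per_user_per_day
    return daily_reports / reviews_per_moderator_per_day

for users in (100_000_000, 1_000_000_000, 3_000_000_000):
    print(f"{users:>13,} users -> ~{moderators_needed(users):,.0f} moderators")
```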


Facebook also obviously thinks of itself as a tech company first and foremost, so the question of moderation, and of scaling moderation, is considered almost solely as a technical problem.

The problem is a social and human one. Newspapers have traditionally handled it with editors and people and human policies. Facebook isn't inclined to explore editors and people and human policies, not because they don't scale or because we don't know how to scale them (Wikipedia has scaled relatively okay, for what that is worth as an example; newspapers have used editors and researchers for a long time), but because it's not an interesting technical problem to scale people.

Facebook even had people hired into some of these moderation roles for a while, but it was sexier for them to use essentially black-box algorithms, which turned out to be easy to game by malevolent entities. Now they only seem to consider "fixing" the black boxes rather than finding solutions that use smart or trained people.


Unpopular opinion? Facebook moderators are not the "heart of the business". Now, they should get better treatment, but let's be realistic here.

The issue brought up by this moderator seems to match what we've heard and read from others, which puts Facebook, and sites like it, in an unfortunate position: Content from a site like Facebook cannot be moderated without causing harm to human moderators, and A.I. is not yet able to do the job.

That's a massive issue as lawmakers are pushing for more moderation. It's my belief that Facebook, and others, have sold our politicians on the idea that moderation can be done, at scale, with AI. Those of us who have followed along know that the human moderators have complained that at least Facebook's A.I. simply isn't up to the task.

I don't think Facebook and similar sites can exist, not at that scale, without massive human cost.


It's also worth remembering that Facebook has about 15,000 paid moderators. Because they print money, they can just solve the moderation problem with brute force. It's pretty funny when you consider that Facebook employs twice as many people just for moderation as Twitter employed in total at its peak. Obviously the moderators are cheap, but still funny.

Same way you enforce any other rule. Same way they already enforce the rules they have?

I hope you weren't hoping for an argument about slippery slopes. And yes, of course that means they may have to hire more moderators and of course that means more expenses. Who ever said Facebook should get away with no moderation just because they're successful?


The real solution is probably hybrid (i.e. AI-assisted human moderation).

I've assisted with moderation for a bunch of communities over the years, and IMO the problem isn't that there aren't enough humans — for every bad actor in a community there are many good ones — but that Facebook doesn't provide good actors and moderators with anything beyond the most basic tools imaginable. Had Facebook invested even 1% of their resources into this, I don't believe that Facebook would be the raging culture infection that it currently is.
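
As a sketch of what "AI-assisted human moderation" might look like mechanically: a model scores content, the obvious cases are handled automatically, and only the uncertain middle reaches people, highest-risk first. The classifier, thresholds, and queue here are hypothetical, not a description of anything Facebook actually runs:

```python
# Hypothetical triage sketch: auto-handle the clear-cut cases,
# route the ambiguous middle to human moderators, highest-risk first.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    text: str

def model_score(post: Post) -> float:
    """Stand-in for a real classifier: probability the post violates policy."""
    return 0.9 if "spam" in post.text.lower() else 0.1

def triage(posts, auto_remove_at=0.95, auto_allow_below=0.05):
    human_queue = []
    for post in posts:
        score = model_score(post)
        if score >= auto_remove_at:
            continue                        # auto-remove (logged for appeal/audit)
        if score < auto_allow_below:
            continue                        # auto-allow
        human_queue.append((score, post))   # uncertain: route to a human
    return sorted(human_queue, key=lambda t: t[0], reverse=True)

posts = [Post(1, "buy spam now"), Post(2, "family photos"), Post(3, "borderline rant")]
print(triage(posts))
```

The useful part for moderators would be what sits around the queue — priority, context, an audit trail — which is exactly the tooling the comment above says is missing.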


Social network moderators are almost always low-paid, low-power, low-profile employees. It's not a great job. It's a high-volume job, like an assembly line. The highly compensated Twitter and Facebook software engineers are not doing the content moderation. They don't have the time, and they would run away screaming if they had to do it for an hour. It's likely that a lot of this work is even outsourced.

It's a problem with scale and incentives. At their scale, hiring people like Dang is not going to be particularly profitable, since you'd need thousands or tens of thousands of them to moderate the platform (and even then, it gets difficult since you really don't want too much moderation for private messages among family members and friends, companies, etc).

Add this to the difficulty of finding people that good in the first place (let alone people willing to work for a large company and moderate at scale), and you've got a bit of a problematic situation:

- If you try to hire 'quality' moderators, then you likely spend millions of dollars a year on moderation, without any real guarantee that the moderation will actually be good (since different communities and groups disagree on it, cultural differences come into play, etc).
- If you try to outsource the moderation to some low-cost region, then you get poor-quality moderation due to cultural differences and a lack of understanding of the subject matter and community.
- And if you try to automate it, you arguably get an even worse situation, since false positives and negatives are everywhere and often there's no way to appeal when they get things wrong.

Then things get even worse when you realise that different communities interact and have different standards...


Facebook’s 30,000 moderators are a problem in themselves: it’s a pit of human misery, with fresh souls basically sent down the mine as canaries to detect hurtful content.

Even setting the human aspect aside for a second, it means these 30,000 are not stable employees but contractors who need to be replaced very frequently, burning through the pool of potential workers. And people are not even happy with the current number and clamour for way more moderation on Facebook, so the 30,000 figure is already the result of cheaping out.

Then YouTube needs people both for moderation and for copyright/monetization support, so multiply by two; for Facebook and YouTube together we’d need a roster of close to 100,000 people rotating every year or less, burning through a million people in a decade.

All of that just for two platforms on the internet.

I can’t stop myself from wondering if it’s a good use of people.


I'm not suggesting that moderation is easy to automate. I'm saying Facebook is an advertising company that cares about content moderation exactly as far as it impacts their advertising business and no more.

They have no concern for the moderators or the content, they just want to solve this problem (as it relates to profit) as cheaply as possible. At this point in time they believe the best way to do this is by utilizing low-cost labor.

Facebook could easily devote more resources to caring for these moderators but they don't because doing so has no further positive impact on the bottom line.


Facebook already employs 15,000 content moderators, both directly and via contractors. By many accounts, they're overworked and don't get enough time to deal emotionally with what they have to see [1] -- and they only review a fraction of the content on Facebook. 125 people would clearly be many orders of magnitude too small to review everything.

The "hundreds of thousands, and perhaps millions, of false positives" quote is referring to the rate after human moderation (and is also just speculation by the author).

I think people tend to underestimate the scale that would be required to solve this problem because they tend to underestimate just how terrible people are.

[1] https://www.theverge.com/2019/6/19/18681845/facebook-moderat...


In my opinion, this is a strong-AI problem, and only people have strong AI. Therefore, large social sites are always going to need human moderators to maintain a good signal-to-noise ratio.

Moderation is labor, and you get what you pay for. Given how notoriously mental health threatening Facebook moderation has turned out to be, adding unpaid moderators with no health care benefits and especially no mental health protections seems the exact wrong direction for Facebook.

That said, I absolutely agree that the answer is likely one of checks/balances/as much transparency to the adjudication process as possible, and yes Facebook has a lot to learn there from existing governmental technology.


I think the point is that, if I as an activist am aiming to maximize the number of content moderators treated well, the highest leverage way to push for that is getting people mad at Facebook. There may be a preexisting system of unpaid content moderators who are supported even less, but that's much harder for anyone to address.

I read your comment a bit like a defense of Facebook, saying "you know, when you scale to millions of people, it's really hard and expensive! Cut them a bit of slack".

I don't disagree that these problems are hard when you scale to millions of users. Just pointing out that Facebook were the ones who chose to scale to millions of users before their moderation systems were ready.

For what it's worth, thank you for doing your part in helping to fix the system.


Manual moderation is not the solution. The issue is that FB (Insta) is the one doing the moderating. Give moderation powers to the users; create a market for moderation. Some people will want whitelists, others AI screening models, others “manual moderation”. Let users decide what they want.
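
A purely hypothetical sketch of what such a market could look like: each user picks their own filter (whitelist, AI screening, or none) and their feed is screened by that choice rather than by a single platform-wide policy. The filters and feed format below are invented for illustration:

```python
# Hypothetical sketch: per-user, pluggable moderation policies.
from typing import Callable, Iterable, List

Filter = Callable[[str], bool]   # returns True if the item should be shown

def whitelist_filter(allowed_authors: set) -> Filter:
    """Only show items from authors the user explicitly trusts."""
    return lambda item: item.split(":", 1)[0] in allowed_authors

def ai_filter(threshold: float = 0.5) -> Filter:
    """Hide items a (stand-in) model scores above the user's chosen threshold."""
    def score(item: str) -> float:
        return 0.9 if "spam" in item.lower() else 0.1
    return lambda item: score(item) < threshold

def apply_policy(feed: Iterable[str], chosen: Filter) -> List[str]:
    return [item for item in feed if chosen(item)]

feed = ["alice: cat photos", "bot123: spam spam spam", "bob: hiking trip"]
print(apply_policy(feed, whitelist_filter({"alice", "bob"})))   # whitelist user
print(apply_policy(feed, ai_filter(threshold=0.5)))             # AI-screening user
```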

Isn't it well documented that the human moderators who do exist at places like Facebook and YouTube end up having their mental health screwed up after watching so much disturbing content?

Just throwing human moderators at the problem probably has more cost than just the salaries involved.


We can look to China to see what effective moderation looks like, and then conclude that FB would need 2-3 orders of magnitude more moderators to make it ‘safe’.
