Catastrophic effects of working as a Facebook moderator (www.theguardian.com)
113 points by seek3r00 | 2019-09-17 | 101 comments




If you think about it, you have a large group in society who spend 8 hours a day watching content so controversial it won't even reach the rest of us. Just like any other company, people eventually quit, and new people join. Now if you were the bad guy wanting to nudge a % of society in a specific political direction, wouldn't this moderator group be a perfect target? Just bombard [insert social media platform] with propaganda content and you _will_ reach this group of people even if the content never appears on the platform. What a messed up situation.

Not really feasible; a group of a couple thousand converts is useless if it isn't targeted at a specific region, and with Facebook's love of outsourcing you can't even be sure they are in the same country.

The situation might not be much better on other services of that scale (YouTube, image hosts, ...), but what really makes the difference with FB for me is that it reminds me of a tweet from a FB employee who claimed that the amount of good that Facebook does (might have been 'can do'?) is unlimited. Unlimited, ffs? I can only wonder how self-perception and reality can diverge so enormously.

(My eternal gratitude to the one who can dig up the tweet. All my efforts were futile so far.)


You're doing the same thing by claiming your opinion is "reality".

I think this is a false equivalence. His claim is the stronger one in my opinion.

I think his opinion is far stronger. Facebook is a far better company than Apple. At least they don't hate poor people.

How crazy I sound to you is how crazy you all sound to me.


What have I missed about Apple abusing the poor?

(I assume you're not just claiming that selling expensive products = hating poor people.)


I see Apple as: 1. The biggest hypocrite in all of technology with regards to monopolistic tendencies. Unlike Google and Facebook, Apple is loved by the media, so no one calls them out on their shit, bringing me to my second point:

2. While other firms like Google and Facebook have served to bridge the gap between the rich and the poor, between the first and third world, Apple has done exactly the opposite.

I, and large portions of the non-Western world, see Apple as a representation of American power and Western influence. That has had a lot of benefits for Apple, but it also made me hate their guts, so yeah.


A company considering its users its product can't get much lower IMO. Remember that time they experimented on users' emotional states without any consent or notification? [1] This is when they started to show their true cards. To me FB is on its own level, far below even oil companies and big banks, probably around the same ranking as PMCs. You'd be hard pressed to find me something worse.

Apple is no saint but FB's total disregard to its responsibility and position in the world is so much worse.

1: https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...


Not really, when that service is offered free of charge. I am pretty sure most people in India or Africa don't give a shit that Facebook uses their data to serve them ads. They, and the world, get an invaluable service in return.

I certainly don't care.


Facebook uses their data today to target ads.

What the data gets used for later by Facebook, the government or organized crime, when it inevitably gets stolen, is anybody's guess.


Apple uses their data today to create various products.

What the data gets used for later by Apple, the government or organized crime, when it inevitably gets stolen, is anybody's guess.


The irony of the parent comment and these three nested replies is that all four are "just like your opinion, man"

In Portuguese, we have a word for this, "achismo", which is a play on the word machismo and the verb achar, which means "to think" as in "I think ...".

It's all opinions all the way down.


Never heard that word, and Portuguese is my native language...

One idea: To cope with all this, reduce the flagged posts presented by the algorithm to maybe 2 - 3 hours a day. For the remaining 5 hours, the algorithm could show them chats, pictures and videos that are harmless and uplifting. This might create a counterbalance, showing that there is still something good in social media.

I think that's how real life works in most countries. You are not running through the streets getting slapped with crime and the like all the time.

Of course, this also means you need more people to deal with the same amount of content as you do today.
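
A minimal sketch of how that interleaving idea could look in a review-queue scheduler, assuming a simple in-memory queue; the names, ratio, and Item shape are all hypothetical illustrations, not anything Facebook actually uses.

    from dataclasses import dataclass
    from typing import Iterator, List

    @dataclass
    class Item:
        id: str
        flagged: bool  # True if reported/suspected harmful

    def build_shift_queue(flagged: List[Item],
                          benign: List[Item],
                          max_flagged_ratio: float = 0.4) -> Iterator[Item]:
        """Yield review items so roughly max_flagged_ratio of them are flagged."""
        # e.g. 0.4 -> about two benign items after every flagged one
        benign_per_flagged = max(1, round((1 - max_flagged_ratio) / max_flagged_ratio))
        benign_iter = iter(benign)
        for item in flagged:
            yield item
            for _ in range(benign_per_flagged):
                nxt = next(benign_iter, None)
                if nxt is None:
                    break
                yield nxt

The point of the sketch is just that the ratio is a tunable knob: lowering it spreads the same flagged content over more moderators, which is the staffing trade-off mentioned above.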


I think you might be on the right path; at the same time, mandated "happy picture time" seems like it might be its own weird world to live in.

Moderator1: "Hey man how is it going today?"

Moderator2: "Pretty good, it's kitten day."


That sounds pretty surreal, I agree.

I appreciate the idea (and I've personally administered a dose of /r/eyebleach on lesser occasions), but institutionalizing that sounds to me like something out of a dystopian future story, not a decent solution.

Maybe a better solution starts with reducing the scale of the problem, and includes measures like not tolerating so many fake accounts, being willing to lose real users permanently when you ban them, and cultivating better culture.

And at least pay the moderators better for their trauma, rather than exploit people who have no better options. New-grad programmers get six figures, to entice them to work for a 'social media' company they know is a bit sketchy, but some of the people who bear some of the darkest side of the business get paid peanuts.

Alternatively, one could change the business and infrastructure models, to distributed and interoperating common carriers (away from trying to snoop on, and manipulate, things people say, hear, and think). ISPs and protocols and programs, like we briefly had. But that requires no one becoming a billionaire by grabbing power over people.


So you're paying them for 2-3 hours of real work a day, and then another 5 hours of fake work? Why not just pay them only for the real work and give them the rest of the time off?

Because you would leave them alone with the impressions of the day - just because you only show them bad content for 3 hours, doesn't mean that they will automatically create a counter-balance in the free 5 hours.

If I force them to watch the bad side of humanity, I should also "force" them to watch the good side of it.

It should be subtle, of course.


I wonder what the solution is here.

To expose people to this stuff continuously seems wrong.

Then again, so does exposing everyone to it, and it would probably kill the service if it wasn't dealt with by someone.

Another concern is finding people who can handle these situations in a healthy way; they might be few and far between, and generally the folks exposed to it are hired into low-pay, outsourced, warm-bodies-in-chairs kinds of situations.


Keep things small and community-moderated. Keeping social networks self-moderated in a small community, say a city, would quickly ostracise people sharing this sort of thing.

Community moderation is a pretty amorphous concept... not really sure how that would work out.

Reddit is a pretty good example of it both working, and very much not working.


I mean physically small. In my view much of the mess the internet is seeing right now is because of the lack of local, real-world consequences. Links between people are now on a global spider-web scale, so to speak, whereas they used to be on a local neighbourhood-to-city scale. This gives way to a separate space decoupled from your local environment. If a community is (relatively) tight-knit on a physical scale, sharing repulsive content or blatant propaganda could have real-world consequences and definitely wouldn't get the reach it has right now.

It's not the pipe dream of what the internet was supposed to be back in the 90s, but I think it's safe to say that experiment has failed. Humanity has shown it can't behave itself well enough for this.

I'm sorry if this sounds abstract or convoluted or anything, I've had a long day and am a bit brain-fried but I have thought pretty extensively about this. I'm having a bit of a hard time putting it down in a HN comment. Maybe it's worth the effort writing this up.


I've done a lot of community moderation in my life. It's been my experience that making things smaller and replacing professional moderation with volunteer moderation is rarely as effective as might be hoped for. Further, once you get past Dunbar's number (about 150), you get the semi-anonymous effect that enables abuses of social networks, much like real cities. There is also a potential wrinkle where people can easily migrate from one forum to another.

Taken together, unless you keep people from migrating from one petty dictatorship to another to find one they like, the outcome will tend towards a small number of big megalopolis-style communities and a middling number of mid-size ones.

It's not a bad idea by any means. It's essentially a mirror of an idealized pre-industrial agrarian society. There might just be some wrinkles added by basic human behavior patterns that could be worth considering carefully.


What if the answer is to kill social-media-as-we-know-it?

There are some sick and "extremist" people in this world. If their stage were relegated to email threads I imagine we would not be having this discussion.

But I think what bothers me the most, and this is alluded to in the article, is not that some extremist is posting crap (no surprise there), but that seemingly once-normal relatives of mine are consuming this crap and then reposting it, becoming extremists themselves.


> What if the answer is to kill social-media-as-we-know-it?

The answer obviously is that.

But the question is, what if we can't?

(Spoiler alert: we can't.)


> What if the answer is to kill social-media-as-we-know-it?

This is the real answer. If running a service requires paying people to look at awful, damaging material—stop running the service. It shouldn't exist. It doesn't need to. Absent things like Twitter and Facebook we'd find other ways to keep in touch with people who matter. Special Email clients that do most of the friends & family stuff that FB does but over email or whatever. Some new protocol. Progress on that kind of stuff only stalled because we've got the spyvertising- or VC-funded, moderator-harming services to compete with. Take those away and the void will be filled, with hardly a hiccup.

"Well gee whiz we just can't run our service without doing this to our moderators, so I guess we have to"—no, you don't, period, at all. You can stop running your service, or live with hurting people and stop acting like you care.


I would suck as a CEO. The first time someone live-streamed an attack/rape/murder on my platform I would pull the live feature.

"What if the answer is to kill social-media-as-we-know-it?"

I know someone who was fine until he had to help select/edit news footage (unfiltered audio/video footage of wars etc) for the BBC. After a few months he needed counseling for intrusive images, lack of sleep, depression. So, no news either?


In the long term, the solution would have to be real consequences for those posting it.

I recently met a guy in Berlin who is employed by a company that is an FB subcontractor for content screening. When I met him, he had been on sick leave for 2 weeks because of the awful working conditions and the stuff he had to go through daily. He was obviously looking for a new position and, I can attest, he looked "damaged".

Maybe you need people who are just numb to it? It seems like the kind of gig that would attract the kinds of people who like seeing it. I know that's morbid to think about, but look at subreddits like /r/watchpeopledie (RIP) and /r/enoughinternet; /r/morbidreality is still around, luckily. You get people who don't mind the gore and terrible things, pay them to sift through it, and see what the outcome is there?

I'm fairly sure the content FB mods have to sift through makes those subreddits seem like a walk in the park. Even I have seen some horrible things back in the earlier days of the internet that makes those look tame.

There is no upper bound in repulsiveness for the things FB mods watch. At least with Reddit there's the sitewide rules.


Maybe for gore. For child abuse images, not so much.

I can only imagine the aghast looks one would get from the PR and HR people, just for asking about forming a team of people who liked looking at child abuse images!


The conversation might go better if you framed it as paying the members of that team less than they'd get in comparable jobs with less child abuse.

Not necessarily being numb to it, but surely you would need something to ground yourself with, should the need arise. Certainly there is always something that tends to stick, even if you pretend that it doesn't faze you.

I think realization of the fact that the internet connects everyone and no matter what you espouse, someone will call you names for it, can indeed help with emotional responses to content.


Even people who frequent these subreddits don't use them from 9-5 without pause and they still have moderation going on.

Imagine seeing the worst of these subs for 8 hours with only a short lunch break.


>They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day.

There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles to the point mere exposure is enough to change political belief.

Spend a week on a far-right community and you'll be shown more stats that point to a far-right conclusion than you can critically evaluate. In any internet discussion of police racism, for instance, FBI crime stats will be mentioned in a heartbeat, but I don't think I've seen a mainstream journo bring them up once. Social media and mainstream media fundamentally follow different schemas of information, simply because even bringing up certain data can cause a mainstream journo reputational damage.

This is also causing an inverse filter bubble where hateful ideas which actually have refutations don't get refuted because people refuse to discuss the ideas on principle. Much of the data cited is crap and much of the interpretations are crap but they're not meaningfully contested.


I think it's more that there are personality types that really want a community to feel they belong to and are thus easily persuadable. Social media was a huge factor in radicalizing people to Islamic terrorism, and that ended when the networks started censoring that stuff.

> There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles to the point mere exposure is enough to change political belief.

A different conclusion to draw from this is that far-right interests are responsible for the majority of the objectionable content on social media. One might further suggest that said content is deliberate propaganda, designed to push people to the right, and that this is a central pillar of their strategy that isn't shared to the same degree or extreme by other political factions.

This isn't "mere exposure". I haven't read the article so please correct me if I'm wrong but this is a job, a place they go to sit every day to be bombarded with this crap. To some extent they have to sit and let it wash over them—I don't imagine the people doing these jobs have much career mobility. IMO it's not realistic to suggest that if they were just better-informed, they wouldn't suffer these effects. The mind is not an inviolable fortress—no matter how strong you think your defenses are, they can be worn down.


>One might further suggest that said content is deliberate propaganda, designed to push people to the right, and that this is a central pillar of their strategy that isn't shared to the same degree or extreme by other political factions.

This is certainly part of it. I've literally watched Russian propaganda around the Syrian war featuring a "Canadian independent journalist". Yet overall this explanation strikes me as unsatisfying. From what I've heard from researchers, propaganda techniques have been less about promoting the right wing specifically and more about spreading social discord generally. The Russians had efforts to promote Jill Stein that were pushing left-wing rather than right-wing propaganda. I've never really bought that propaganda efforts weren't getting overwhelmed by the influence of regular users generally. It's possible, but I haven't seen serious attempts to prove it.

>I don't imagine the people doing these jobs have much career mobility.

This is actually a more interesting criticism to me. Perhaps internet moderators are more drawn to far-right thought than others because of their life circumstances? This is a population that's lower income and tech savvy.


I'm right-leaning and I see extreme amounts of propaganda backed by false information on the left side of American democracy while the right merely questions beliefs with statistics, facts, and law. I've never once seen any kind of propaganda that was right-leaning use false information. In fact, the right typically debunks left-leaning news reporting by finding more detailed information on the news at hand, while the left usually cuts away bits & pieces of information from their stories that would otherwise disprove their narrative. Also, I've noticed that left-wing news uses psychology to manipulate their audience by victimizing obviously crooked politicians within their stories to make them look like good people when they've been stealing from the public for decades.

Liberals DESTROYED by FACTS and LOGIC, eh?

You won't catch me defending anything so broad as "the left" but your statement that "the right merely questions beliefs with statistics, facts, and law" is laughable. Literally I am laughing at the absurdity of it, that anybody would say such a thing. You needn't look any further than the American right wing's top man, who e.g. routinely uses public social media to broadcast trivially falsifiable statements, dubious and unsourced "statistics", and other total nonsense.

You should seek to expand the boundaries of your model of reality.


There is quite a big difference in magnitude between "mere exposure" and "eight hours a day, working nights and weekends".

I honestly would be surprised if any propaganda (no matter the subject matter) left no impression when applied with that intensity.


I took it more as an extreme application of: https://en.wikipedia.org/wiki/Illusory_truth_effect

> In a 2015 study, researchers discovered that familiarity can overpower rationality and that repetitively hearing that a certain fact is wrong can affect the hearer's beliefs.


Yeah but this phenomenon isn't unique to social media. Mainstream media sources and alternative media sources beat the same arguments into the ground enough times that you end up assuming validity.

I can make the inverse argument that being told an argument is so offensive it's not worth examining creates this bias in kind. In fact I would argue the point of opposing "hate speech" is not to rationally confront it but to denormalize it so it doesn't affect people's beliefs. Same goes for anti-heresy beliefs, "conservative-only" posting requirements, and many other forms of censorship. I don't think far right communities are unique in the amount of social conformity demanded from members.


I see what you're saying: when mainstream outlets consistently hide swaths of current events, statistics, and long-form analysis from their front pages, or even the back pages, out of fear of giving airtime to "Republican talking points", the result is that independents and liberals who are ignorant of that information are susceptible to the spin of the first person who presents it, whether that person is a moderate or an extremist.

I quite like the modern trend of "fact check" articles, which is to say editorials with references that give lip service to non-partisanship. It's incredible, really, journalism's continued reliance on reputation and disdain for references.

I think if this trend continued it would be effective against the weak arguments pushed by far-right boosters.


There are many aspects to this, but it seems that the most lasting and strong effects are due to visual content (especially raw content, violence...) rather than text (even though text can be violent).

Which is why automated image/video moderation solutions (such as Google Cloud Vision, Amazon Rekognition, Sightengine, Hive) will continue to grow. Not only because they are cheaper/faster, but because they become a necessity, or at least a first filter to weed out the "worst" content.
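
A rough sketch of that first-filter idea, with score_image standing in for whichever vendor moderation model is used; the function name and thresholds here are assumptions for illustration, not any provider's real interface.

    def score_image(image_bytes: bytes) -> float:
        """Return a 0..1 'unsafe' score from some moderation model (stubbed here)."""
        raise NotImplementedError  # in practice, call the chosen vendor API

    def triage(image_bytes: bytes,
               auto_remove_at: float = 0.98,
               human_review_at: float = 0.30) -> str:
        """Route obvious cases automatically so humans only see the ambiguous middle."""
        score = score_image(image_bytes)
        if score >= auto_remove_at:
            return "auto-remove"   # the worst content never reaches a person
        if score >= human_review_at:
            return "human-review"  # uncertain cases still need a moderator
        return "publish"

The design choice is that the automated model only has to be trusted at the extremes; everything in between still goes to a human, which is exactly where the context problem discussed below bites.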


We just saw Tumblr try that and discover that trying to automate it can destroy your platform.

The problem is that context is even more important in visual content than in textual content, and we still don’t have any algorithms that can parse context as successfully as humans can.


It seems like you would essentially need a general AI to detect stuff like CP. There are (non-digitized) photos of me taking a bath as a small child that were taken by family. If I stumble across one while sifting through photos at my house, it's an innocent document of my childhood. If they're being passed around some CP forum, it changes the nature of the images quite a bit. We're a long way from having an algorithm that understands why it matters who is holding a photo.

At a $previousJob I had some tangential contact with professionals who track child pornography, trying to identify and free the kids (people involved in catching https://en.wikipedia.org/wiki/Christopher_Paul_Neil). They felt that automation was of little help for what they were doing, and that every image had to be looked at by at least one human (most of the images by more than 1). They had a few tricks (apparently looking at the image in B/W helped lessen the trauma) but they did not find value in the automated tools we tried to build to help them.

Now, they felt much more empowered than Facebook's moderators: they kept going because the goal was to stick cuffs on the wrists of the guys who were doing this and get those kids away from them, and they could put up with all of the rest for that goal. They were treated as rockstars by the rest of the people they interacted with, because they were the ones who got kids away from the predators. They had frequent opportunities to take breaks and could set their own schedule, with only the guilt that the longer they delayed, the more time the kids spent in the predators' hands, to drive them.

Ultimately, feeling empowered to make a difference in the world is key, and if Facebook treated screening as an important job and gave their moderators more power to set their own working conditions I suspect that it would improve their mental health by quite a bit.


Good point about empowerment.

I hope they are investing in an army of shrinks/psychologists/sociologists to study, improve, and supervise these centers, because this stuff is not going away by just deleting content.


Hm - honestly, I'd wonder about the motivations of the sort of person who signs up for that type of job. Especially the ones who are saying "no, no, no need for automation, we have to look at each image individually..."

That seems like a bad faith accusation with no evidence or proof for an organization focused on catching child predators.

I used to take down pedophile rings online with a few associates. This was strictly a black hat endeavor and I never had to look at anything. My motivation was that it's fun to use these skills to ruin someone's day (or life, in this case), but it's only really moral to do this sort of thing against people like that.

I once found a confirmed child porn possessing target with a smart home I was able to access. That guy must have thought he was in a Black Mirror episode.


They told us that robots would save humans from doing dangerous work in hostile environments. Who knew the danger and hostility would be entirely psychological!?

Yet another reason to stop trying to have a central authority, in this case Facebook moderators, police speech. It can't be done effectively or without major side effects. Let people filter on their own, we all do every day in the real world and it works just fine.

Is your argument that any content platform that allows posting by the general public (vs say, employees) should be obligated to carry all the content that is posted?

Because that's where you end up if you don't want any filtering.

The next step after that is that no one allows posting by the general public anymore.


You are a lot less likely to run into violence, gore, exploited people, etc. in person. The answer to, "human moderators get hurt by the constant stream of truly awful stuff." is not, "let the stream of truly awful stuff just go straight to everyone".

Society will offer better moderation than the network, because a segment of the moderators will seek out the content, hunt the users publishing the indecent content, and destroy them by exposing them to their immediate network for the evil that they are. Let the battle play out naturally and nature will provide the resources.

The arrogance of Facebook to think they can hire employees to drink from the fire hose of content to solve moderation is glaring.


This will not really work for the use case of private chats, such as:

> We have rich white men from Europe, from the US, writing to children from the Philippines … they try to get sexual photos in exchange for $10 or $20.


If they want to target illegal activities such as that, I say go for it. Otherwise for things we simply don't like, provide users some filtering tools and leave it at that.

It is way cheaper to flood your internet content sources with beheading videos or videos of live kittens being tossed off highway overpasses than it is to do the same “in the real world”. The internet would quickly devolve to 8chan...

Be careful what you wish for.


They could use some image processing to make the videos look cartoonish, at least for the first analysis.

That might soften the blow to the screeners.
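
A quick sketch of what that first-pass softening could look like with OpenCV's non-photorealistic stylization filter; the parameters are illustrative, and whether this actually reduces the impact on screeners is an open question, not something the article establishes.

    import cv2

    def soften_frame(frame):
        """Return a lower-detail, painterly version of a video frame for first review."""
        small = cv2.resize(frame, None, fx=0.5, fy=0.5)        # drop fine detail first
        return cv2.stylization(small, sigma_s=60, sigma_r=0.45)  # edge-preserving smoothing

This would be applied per frame before a flagged video is shown to a screener, with the original kept available if a closer look is needed.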


There seems to be a fundamental hypocrisy in a legal system that considers some material so fundamentally corrupting that it must be illegal for people to possess or see it.

If this were truly the case, it should also be illegal to compel someone to look at this same material as part of their employment.


I wonder how you look at vice-squad members?

And from what I've seen they are quite happy to receive tips about abusers from the public and businesses that care enough to review their content. For society to work some of us have to step forward and agree to do the dirty work.

That some people are doing this for a living without proper counseling and guidance is another matter, you can lay that at Facebook's door. The few times that I've been exposed to that crap was enough to make me shut down some services.

In fact, I think that any business that deals in user generated content should assume the cost of business that goes with that and have a system of flagging and escalation in place.


> I wonder how you look at vice-squad members?

It was a poorly formed argument, but you've hit on the absurdity that I was trying to point out. Obviously somebody has to look at this stuff.

But having low paid, poorly trained people do it all day, every day? I don't think it's crazy to suggest that shouldn't be legal.

And if Facebook can't exist in its current form without these teams, then I'm totally fine with that.


Just to be clear: There are no statistics in this article, just some anecdotes selected to push a particular narrative.

I don't claim to know whether working as a FB moderator is 'catastrophic' or not, but this article makes an emotional case for it rather than a rational case for it. It reminds me of reporting around the Foxconn suicides - sure, each is a tragedy, but it turns out the Foxconn suicide rate matches the national rate.


Yes it is an emotional argument. I used to work somewhere with content moderation. From my conversations with people on that team, the effects in this article are spot on, and I would argue are damaging. Maybe not catastrophic but journalists gotta make bank too.

Think of the disgust or shock you might feel if you saw, say an adult flirting with a kid, or if that’s not graphic enough, having sex. Now imagine you need to see that every hour or so for your job, 5 days a week, forever. There’s no end, there’s no stopping people.

At best you lose some of your humanity and innocence, and nothing really shocks you any more. At worst it causes some form of PTSD, or you even come to sympathize with what you are seeing. I can see how that could be less painful in the short term than continuing to be shocked and depressed by things you don't agree with.


From other articles on Facebook moderators, some have been exposed to things that are much worse (in my opinion) than your examples.

Worse than an adult having sex with a child? What would that even be?

There is a video floating around of a couple of guys from a US backed Syrian rebel group cutting the head off a 12 year old boy.

... glad I asked

"this is just pushing a narrative" is the perpetual bleating cry of people who want to ignore something that challenges their views

A particular narrative? Isn't it common sense that watching extreme graphic violence all day every day will lead to a lot of psychological issues for a significant percentage of people? This article about FB moderators in the U.S. paints a more detailed picture: https://www.theverge.com/2019/6/19/18681845/facebook-moderat...

"There are no statistics in this article, just some anecdotes selected to push a particular narrative."

Please don't reduce people to data like this. Please don't automatically assume bad faith on the part of the authors of the article. Consider that there are actual people involved here, reporting their experiences. Don't make it into some kind of social-justice battleground or science experiment; just consider their perspective without instant judgment.


You could say exactly the same about anti-vaxxer nonsense. The lack of statistics makes it something where you can only wring your hands and say "oh, how awful" and not "yes, but..." and consider meaningful alternatives.

There is a stark difference in that anti-vaxxers are directly advocating for dangerous policy changes. I don't see anything of the sort here.

The problem is that to someone on the fence about vaccination, they take legitimacy from articles that appeal to emotions and don't back up facts with data. Anti-vaxxers write the same way.

This article cites clear examples of psychological harm and trauma being inflicted upon Facebook moderators. That itself is sufficient data.

There was no assumption of bad faith, just pointing out a lack of logic and supporting evidence, and that we should take things with a grain of salt.

Appeals to emotion and innumeracy are serious problems in society right now and it's totally reasonable to identify when people pushing an argument are either appealing to emotion or being innumerate, so that others can plainly see that an argument lacks more substance than it appears to have.


" just pointing out a lack of logic and supporting evidence, and that we should take things with a grain of salt."

Given that I didn't see any specific plan of action or policy advocated in this article, I just don't see the need to jump out and immediately attack and discount it. People are so used to arguing in black and white that we suddenly have to start fighting over an article that reports on people's experiences doing a job that we would all agree really, really sucks. Appeal to emotion, maybe. But criminy, have some empathy.


These appeal-to-emotion articles written by people with an ax to grind make me think of this article I read recently:

https://theotherlifenow.com/depressive-capitalist-realism/


What statistics would you gather to prove this? Not everything in life is cheap and easy to quantify.

Some things are just stupefyingly obvious. For example, constantly looking at videos of puppies getting microwaved or people getting beheaded in front of their kids for 8 hours a day, 5 days a week just might really fuck with you.

Like, what data do you want to back that up?


The very fact that Facebook outsources its censorship to contractors underlines the precarity of a workforce that is considered both disposable in the short term and irrelevant in the long term.

Ultimately, Facebook wishes to replace this workforce with AI automation where possible, and then heteromate the remaining human work by enticing users of the platform to inform on each other over posts that don't abide by the platform's implicit social norms or opaque moderation rules.


I spent most of last Friday keeping an eye on 4chan's /pol/ after finding out that morning that users were planning an attack on my job.

Even just looking for one day, it took a serious emotional toll on me. I've definitely seen some awful things on the Internet but the constant bombardment of hate speech, racism, anti-Semitism, and all sorts of disturbing images and text over the course of 6 hours made me feel physically sick a number of times, and I had to take extra care to rest the next day.

This is anecdotal of course, but I can't imagine what the Facebook moderators go through having to process at least one ticket a minute.


Or the 4chan janitors, for that matter.

Even worse than Facebook moderators, since they do it for free

OTOH: my understanding is that most (if not all?) 4chan janitors are recruited from 4chan itself, so they have a pretty good idea of what they're getting into.

The solution here may be for all of us to flag posts that we know are benign, like puppies or birthday greetings. This would reduce the percentage of disturbing content moderators are forced to see, and costs us nothing at all.

That's not a solution, that's just increasing their workload. It doesn't remove the root cause, nor the fact that people still have to assess a lot of content - which apparently the super smart AIs can't identify themselves.

What strikes me about this issue as a whole is what it says about the true state of "AI". This is a perfect job for such technologies. I mean how is it that we're already making 'deep fake' videos and audio but can't feed a video stream which is just a stream of images to an algorithm which can determine if it's inappropriate. I recognize that some such tech is being utilized on the front end in this case, and that the problem is non trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'. Why must humans be subjected to this torture? What happened to "move fast and break things"? Why not put the algorithms out front and let them have the final say, and let them learn and improve quickly? I suppose just because meat is cheaper than chips.

> I recognize that some such tech is being utilized on the front end in this case, and that the problem is non trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'.

This seems like a weird take to me. Why would this be your conclusion rather than that the technology isn't good enough yet?


Because the move to hire so many moderators so fast was a big, expensive move that seemed more like an implementation for PR purposes, considering some of the troubles the company was experiencing. When your motto is 'move fast and break things', and you implement features like live video streaming without much thought to the full ramifications of doing so, it would seem, even if only to me, that the company might not be afraid to take a leap on tech that's 'not quite ready'. Again, I know such a hasty implementation could hurt quality, but given what's at stake (human health), they might get credit for doing the right thing. Obviously, though, in our society, where they wouldn't get such credit and would only get flamed for a bad user experience when the wrong videos get taken down, the mental health of a few humans who willingly signed up is a small sacrifice for profitability.

Back in 1999 when I had AOL 4.0 at the age of 16, I would frequent www.goregallery.com and other various gore-related websites that showcased real crime & accident scenes from around the world. Still to this day, I am fascinated by that kind of content. I don't really seek it out like I did as a teenager, but it excites me nonetheless.
