
> A lot of the issues FB deals with are social problems at scale. It's insane that any one company is expected to successfully moderate three billion people, they're doing the best they can with impossible constraints.

This is a problem that they've created at scale. In the US, our closest thing to social-issue moderation is probably the police (though please leave any efficacy discussion at the door), and there are 17 officers per 10k people in the US who make $67.6k annually. Amortized per capita, we spend about $114.92 a year on law enforcement in the real world. Facebook doesn't need a similar per-capita expenditure to police its platform, but if you think it ends up spending more than 5 cents per user in amortized moderation costs, I've got a bridge in Brooklyn to sell you.
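
For anyone who wants to check the arithmetic, here's a rough sketch (the officer count, salary, 5-cent figure, and the three-billion-user figure from the quote above are the inputs; everything else is back-of-the-envelope):

    # Per-capita US policing cost vs. a guessed Facebook per-user moderation spend
    officers_per_10k = 17
    officer_salary = 67_600                        # USD per officer per year
    per_capita_police = officers_per_10k / 10_000 * officer_salary
    print(round(per_capita_police, 2))             # ~114.92 USD per person per year

    fb_users = 3_000_000_000                       # ~three billion users, per the quote above
    per_user_moderation_guess = 0.05               # the 5-cents-per-user figure above (an assumption)
    print(fb_users * per_user_moderation_guess)    # 150 million USD per year, if that guess held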

This social issue at scale didn't exist before Facebook - it might have naturally existed over time through the popularity of platforms like Disqus and disjointed forums. But Facebook owns the problem and nurtured it into the beast we know today. The social problems are an externality of Facebook's business model which they want nothing to do with fixing - it should be treated no differently than a paper mill polluting a river.




>> Facebook could employ thousands more to moderate and supervise - without destroying their ability to make profit

You're underestimating the scale of this problem.

Facebook users create billions of new posts a day, and tens of millions of those posts get reported for moderation every day.

"Thousands of more" employees isn't going to solve the problem. Assuming each posts takes 10 minutes of labor to review, you would need an army of 200K individuals, and this amount of labor would cost many billions of dollars per year.

>> Pretending the problems don't exist, as all the platforms have done so far

You must be joking. Facebook has already made massive investments into moderation. It's their top priority for 2019. In many ways, they are doing the opposite of pretending this problem doesn't exist.

>> They - or any other social media company - don't have a right to exist.

Nobody is arguing that FB has an inherent right to exist. The only point GP made was that (as made evident by your comment) many people are underestimating the costs associated with manual moderation of content.


> Is it reasonable to expect Facebook to do all those things, compared to other choices that are consistent, neutral, and inevitably result in various type I and type II errors with respect to the nature and purposes of posts related to violence?

I’ll grant that we’re in uncharted territory as far as the scale and depth of what a company like Facebook has to deal with on a regular basis regarding the amount and variance in the content that they see and manage. That said, Facebook constantly tells us how they want to change the world for the better.

If that’s not just marketing bullshit and they want us to take them at their word, then they need to recognize that the position they’ve put themselves in demands a level of accountability and responsibility that they’re clearly not comfortable with. Ultimately, citizens should be able to define what’s reasonable to expect out of Facebook, not the difficulty of a particular content problem. Becoming one of the largest tech companies in existence doesn’t grant one immunity from real problems involving their content.

If Facebook can’t manage their content at scale to address these kinds of issues, then perhaps they shouldn’t be managing content at this kind of scale.


> The truth is some people just don't care

I don’t buy that. Either they are ignorant (which is fine, right up until they become aware of it, and I really doubt many people at FB are ignorant of the social issues it raises), or they are aware and have rationalized why it is or is not an issue.

I’m not saying it is FB’s absolute responsibility. I’m saying it is the responsibility of each individual who works there. There’s a difference. Corporations are not people; the people in them are what make possible the goods or evils that corporations use to make money and thrive.

The pen is mightier than the sword. Social media is mightier than the atom bomb. People need to act accordingly. Morals are the impetus of action so I’d say they are highly relevant.


> As a corporation, Facebook truly seems to be trying to improve its behavior for the benefit of society at large.

No, it's not. Their social network is engineered entirely, and unsurprisingly, in support of their bottom line. A social network does not need to be a centralized free-for-all like Facebook is, but Facebook is that way because that is what works best for their ad revenue. The rapid proliferation of disinformation and hate speech is a consequence of this broken system, but the company has always treated those very real problems as a necessary evil, a nuisance to be patched up with as little effort and cost as possible to keep the ball rolling. This does not benefit anybody but them.


> Facebook is not 100% at fault here (in fact there are others in this space). The fault is that we don't have regulation against something that is clearly detrimental to society. Change that, and Facebook will become better (or be replaced by something better).

Treating Facebook employees and executives like some kind of force of nature that isn't responsible for its actions is really a bit weird. The people working at Facebook very much make these decisions and work with intent, which means they are at fault. They can be 100% at fault and we can still criticize the fact that we don't have enough regulations in place to protect society from companies like Facebook. Those two aspects are in no way mutually exclusive.


> The real issue is human nature.

So are theft, murder, and the intentional infliction of pain for amusement. You can't dodge responsibility that way.

You're also ignoring several aspects that make FB different in kind from a lot of what came before it. As but one example, they are the world's largest blackmail machine. Maybe you trust Zuckerberg & pals now, but what happens when they're gone? When FB goes into decline? When an internal threat hacks them? When Singapore or Russia or worse approaches FB systems folks with carrots and sticks? Has that happened already? How would we know?

I'm personally of the opinion that it was a terrible idea to build it in the first place.


>Are we supposed to stop complaining about Facebook's negative impact on the world because their business model makes it impossible for them to resolve those problems?

You can always complain, but I find it akin to complaining about telephone companies because people use telephone services to plan and perpetrate crimes. Or knife manufacturers because people kill others with knives.

Facebook is a tool, and turned out to be a pretty effective tool. As with many effective tools, there are bad uses for it.


> I don't blame Facebook (or Twitter, or any particular company) for this.

Perhaps I would agree with you that I don't blame Facebook for the root problem here. After all, it's been around a lot longer than Facebook.

However, Facebook is actively providing the scale you're talking about. They are an accelerant. Not only that, they're making absolutely wild amounts of money while they do so. So, they have an opportunity and the resources to offset the damage done by the scale they profit from. But they have zero interest in doing so. I most definitely blame them for that.


>It's insane that any one company is expected to successfully moderate three billion people, they're doing the best they can with impossible constraints.

"moderate"? as far as i understand they stoke the fire. Whatever cultural wars are raging on FB, Coka vs. Pepsi, liberals vs. conservatives, FB gets engagement, ie. money, from the both sides. They are like fight cage owner - they get paid either way. And their "moderation" - just like the fight cage owner they may prevent the fighters from say using knives as it would adversely affect the quality of fight - ends too soon, etc. - and thus revenue.


> I feel that moderating a platform as massive as Facebook is essentially impossible.

Not impossible. A few years ago, one report found that Facebook would need to double the number of moderators, expand fact-checking, and take a few other actions. Facebook won't do it because they'd have to divert a small portion of their $39 billion in yearly net income toward that goal.

https://static1.squarespace.com/static/5b6df958f8370af3217d4...


> the problem with social is us.

No. Facebook proudly runs psychological experiments on its users. Facebook intentionally decides that it will make some users sad or depressed, just to see if it can. And it can. And it is proud of it. Facebook is proud that it knows how to make you depressed, so much so that it published a paper describing how it made hundreds of thousands of its users fall into a depression.

Facebook is a fucking poison. The problem is not with us. The problem is with large scale propaganda networks and centrally-controlled social networks with misaligned incentives. The problem is Facebook and the core problem is the structure and incentives of corporations in our current not-well-enough-regulated capitalism.


> 2) they are wayyyyy too big to police content manually

Oh hell no, they aren't too big; they just don't want to reduce their profits from the current US$18.5 billion (per https://www.statista.com/statistics/277229/facebooks-annual-...).

If Facebook wanted, they could massively ramp up staffing, hire moderators directly, and pay them a livable wage, instead of outsourcing the costs that their business imposes on society (which has to pick up the slack from conspiracy myths, fascism, and other shit), on the users who are mistakenly banned and lose their primary contact with, e.g., overseas family, and on the moderation staff that is hired via third parties and exploited.


> Facebook wouldn’t have these problems if they weren’t obsessed with acquiring as many users as possible.

Even small sites have people upload child porn and other such material to them, and some IT guy has to go handle it.

The only thing that can really reduce the scale of this stuff is for the people doing it to suffer a consequence like jail time. That's clearly not happening in most cases, despite Facebook and others sharing information with the authorities.

The same is true for other issues like people being hounded by death/rape threats and the like online. The police mostly ignore it as an issue, so moderators have to read a bunch of it, and the people doing it just get a suspension or ban from the one platform.


>At Facebook's scale, there is not a good way to do it. Even if they were able to scale it, it would not be good enough anyway.

Then maybe we need to place limits on their scale if they can't handle the problems their scale has created.


> All of that recent fight against FB is driven by them “not censoring enough”

This is a gross distortion of the problem. Facebook has been caught actively promoting conspiracy theories and societal discord because it increases engagement (in my case it drove me from the site but the plural of anecdote is not data). Most recently it was reported that “angry” reactions were weighted five times more than “likes”.

These aren’t just backseat-driving concerns; Facebook is also a major driver of our culture and societal values, whether we like it or not. Nor is this Monday-morning quarterbacking: these concerns were raised by Facebook’s own employees. Letting them abuse our social harmony for ad revenue is short-sighted.


> 1) they don't feel they can NOT police content

True.

But we should be explicit here: FB isn't some neat, nerdy social graph software that just happened to get really popular, like git for regular people or something. They have a business model of shaking up their social graph to maximize engagement, and the implementation achieves this by keeping everyone maximally mad as hell at each other. It spreads conspiracy theories efficiently. It and its users are easily exploited, and they continue to be exploited by nation-states, nefarious actors, or even just some dude looking to make a buck spreading fake news. That all feeds into the same destructive behaviors for the users as the system desperately tries to keep them engaging more and more. Facebook's own software and UI exacerbate these problems.

With that starting point in mind, I agree with your 1, 2, 3, and 4 above. But this is less like email having to deal with new sets of problems as it scaled to the entire world. It's more like a drug cartel trying to figure out the maximum amount of their fundamentally destructive product they can sell before the government finally decides to come in and break up their enterprise.


> First, it’s $7.5 billion freaking dollars. Of OpEx.

So what? If that cost is necessary to run a platform that is safe for those who are on it (= safe from spammers, catfishers, threats, pedophiles, or right-wing extremists) and that does not serve as a breeding ground for threats to society (= "freemen"/"Reichsbürger" groups, militias, neo-Nazis, antisemites, QAnon, ...), then this cost should be paid by the ones making the profit (= Facebook), not by society.

> Second, it’s 250k human moderators who will feel obligated to, you know, moderate stuff.

So what? They'd be the digital equivalent of police(wo)men on patrol. Just because it's The Internet doesn't mean it should be a lawless free-for-all.

> How anyone could think 250,000 people running around with mod powers on people’s Facebook walls is not deeply, deeply problematic

The US alone has nearly 700k police officers for 300M people, who run around with guns and regularly kill people, and yet society accepts that.
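
For scale, here's the same ratio math applied to the numbers in this thread (700k officers / 300M people from this comment, 250k moderators / 3B users from upthread; purely a back-of-the-envelope comparison):

    # Staffing ratios: US police vs. the proposed Facebook moderator corps
    us_police = 700_000
    us_population = 300_000_000
    print(us_police / us_population * 10_000)     # ~23 officers per 10k residents

    fb_moderators = 250_000
    fb_users = 3_000_000_000
    print(fb_moderators / fb_users * 10_000)      # ~0.8 moderators per 10k users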


> The fault is with the people, not the tool they use.

I agree with that. Humanity ain’t good enough for Facebook’s community standards. Which do you think we should abolish: humanity, or Facebook?

> I don't see an answer to this problem

One simple solution is to ban advertisements on the web. To survive, Facebook, Google, and similar companies would have to charge users for their services, thus treating them as customers, not the product.


> this is a really hard problem to solve, and maybe there won't be any solution that satisfies the public

I'm not a fan of this argument. It reminds me of people saying that the big US banks were too big to fail during the subprime mortgage crisis a decade ago. If the banks are too big to fail, then we had better fix the regulation of those banks so they are less likely to fail, or break them up.

If Facebook is too big to effectively moderate what it publishes, then reduce how much it publishes (i.e. your agency is going to be rate limited), or break up Facebook into pieces that can moderate their traffic.
