The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather, it works on a case-by-case basis where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."

It easily ends up being a case of "I'm a moderator at Facebook, and I like this person, so I put them on XCheck". Terrible, of course.

The larger problem at hand is that companies like Facebook are given such gigantic power over discourse and politics because of their gatekeeping. We often laugh at policies in China that ban people from talking about Tiananmen Square, while seeing more or less the same happen in the west with our own controversial issues.

[sarcasm not directed at you] But it's ok. In the west it is companies doing this, and companies are allowed to do business with whoever they want, therefore it is not censorship. [/sarcasm not directed at you]




> The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather, it works on a case-by-case basis where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."

This was my takeaway as well. I 100% agree rules cannot be applied evenly across every user. A person sharing posts with their 300 "friends" and someone blasting messages at their millions of "followers" are frankly engaging in completely different experiences. The regular person might expect none of their comments to ever get reported, so any report could plausibly signal something actually bad. A popular politician, on the other hand, might see every single thing they post reported a ton, every single time.

And rather than applying rules based on, say, reach (which Facebook knows) or any other metric, it seems they just chucked people onto the special-people list and that was that. The article stated there are millions on that list, seemingly a catch-all for the people having the greatest impact. The fact that the list had considerations for potential blowback to FB makes it even worse. I get that, in percentage terms of 2.8 billion users, a multimillion-person list is outlier territory by most measures, but that group is also wildly influential and thus shouldn't be in the "too weird" category.

I'm not even opposed to a general whitelist; some people (like a President of the US) truly are gonna be really weird to apply any broader ruleset to. But a free-for-all, catch-all bucket for anyone of "notoriety" is really bad. It should be a very special remedy that is not granted lightly. The article made it seem like the policy for this particular remedy was non-existent.

Part of me thinks the solution is just to cap it. If the central conceit is "connecting people" then no person realistically knows more than say 10,000 people and shouldn't need the microphone scaled to global proportions. That'd never happen, but it seems like a root answer.
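Purely as an illustration of the "scale the rules with reach" idea (every function name and threshold below is invented; this is not how XCheck works), tiering enforcement by audience size would at least make the special treatment mechanical and auditable instead of a hand-curated list:

    # Hypothetical sketch: pick an enforcement policy from audience size
    # rather than from a hand-maintained whitelist. Thresholds are made up.
    def review_policy(follower_count: int) -> dict:
        if follower_count < 10_000:        # ordinary accounts: automation is fine
            return {"auto_enforce": True, "human_review": False}
        if follower_count < 1_000_000:     # large accounts: a human checks first
            return {"auto_enforce": False, "human_review": True}
        # very large accounts: senior review plus a logged, auditable decision
        return {"auto_enforce": False, "human_review": True, "audit_log": True}

    print(review_policy(300))        # {'auto_enforce': True, 'human_review': False}
    print(review_policy(5_000_000))  # escalated review with an audit trail

The point isn't the exact numbers, it's that the tiers would be derived from a metric Facebook already has rather than from who someone at FB happens to like.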


Facebook's decision to whitelist was entirely 'inner circle' and strategic.

It was not an open, rules-based scenario.

For example, there's no indication that 'Vine' did anything to breach the 'good actor' criteria - FB banned them for competitive reasons.

'Bumble' got special access because FB is an investor. (Did OkCupid? Tinder?)

Moreover, it seems that users were not properly notified of the 'good actor' use of their data.


Is Facebook actually moderating in good faith though?

Consider that divisive, offensive and false content is guaranteed to generate engagement and thus contribute to their bottom line, while content without these traits is less likely to do so. So they're already starting off on the wrong foot here, since their profits directly correlate with their negative impact on society.

Consider that there is plenty of bad content that violates their community standards on Facebook and such content doesn't even try to hide itself and is thus trivially detectable with automation: https://krebsonsecurity.com/2019/04/a-year-later-cybercrime-...

Consider that Instagram doesn't remove accounts with openly racist & anti-Semitic usernames even when reported: https://old.reddit.com/r/facepalm/comments/kz10nw/i_mean_if_...

Is Facebook truly moderating in good faith, or are they only moderating when the potential PR backlash from the bad content getting media attention is greater than the revenue from the engagement around said content? I strongly suspect the latter.

Keep in mind that moderating a public forum is mostly a solved problem; people have done so (often benevolently) for decades. The social media companies' pleas that moderation is impossible at scale are bullshit - it's only impossible because they're trying to have their cake and eat it too. When the incentives are aligned, moderation is a solved problem.


> Facebook does (in a very limited way) this by upranking posts it thinks will boost engagement.

Facebook (and Twitter, and Patreon, and countless other firms) will block you from posting if you post something they regard as violating their policy. And "what is against policy" is a fuzzy line; "don't use Winnie the Pooh pictures to make fun of the effin' President of the country" is a lot clearer than that. You may agree with the policy or not, you may think it's misguided, but WeChat is a private firm, and private firms aren't restricted by anything like the First Amendment. They provide services to you at their pleasure.


This seems to suggest that the content moderation policy is rather arbitrary, which makes sense if you recall that Facebook uses underpaid contractors in struggling countries to do a lot of this work.

Also important to note that it's not as simple as having a whitelist of exempt domains, because at Facebook scale that immediately becomes an avenue for accusations of bias (see all of the noise around the Twitter Files).

Good luck with that, since everyone is pushing for censorship these days.

I remember this idea gained traction when there was the refugee crisis in Europe (I think 2014 or so) and European politicians (led by Merkel) were pushing for more Facebook censorship. The idea of more censorship was also popular among the citizens. However, it was mostly a European thing, although Facebook did comply to an extent. Then Donald fucking Trump happened. Now Americans were the ones who started pushing for censorship, even more strongly.

This is way beyond Facebook banning someone in bad faith. If Facebook has to censor a wide range of things, it can only be done through automated systems due to the massive scale, and these systems will inevitably fuck up more and more as we add more things they are supposed to censor. It is literally an unavoidable outcome even without bad faith. The kind of censorship people expect today just cannot work at Facebook scale unless we accept that people will get banned unfairly all the time.
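To put rough numbers on why this is unavoidable (both figures below are assumptions for illustration, not Facebook's real numbers): even a 99%-accurate classifier produces a staggering absolute number of wrong calls at that volume.

    # Back-of-the-envelope only; post volume and accuracy are assumed.
    posts_per_day = 1_000_000_000    # assumed daily volume of moderated posts
    error_rate = 0.01                # assume the classifier is 99% accurate

    wrong_calls_per_day = posts_per_day * error_rate
    print(f"{wrong_calls_per_day:,.0f} wrong moderation decisions per day")
    # -> 10,000,000 wrong decisions per day, even at 99% accuracy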


Yes, but I'm a classical tech libertarian so my definition of fine is "it made lots of users happy". If you think Facebook is filled with bad content you could just not go there, obviously.

Facebook's problem is that it has given in to the activist class that feels simply not going there is insufficient. Instead they feel a deep need to try and control people through communication and content platforms. This was historically understood to be a bad idea, exactly because the demands of such people are limitless. "Virally spreading falsehoods" is no different to the Chinese government's ban on "rumours", it's a category so vague that Facebook could spend every last cent they have on human moderators and you'd still consider the site to be filled with it. Especially because it's impossible to decide what is and isn't false: that's the root cause of this WSJ story! Their moderators keep disagreeing with each other and making embarrassing decisions, which is why they have a two-tier system where they try and protect famous people who could make a big noise from the worst of their random "falsehood detecting machine".


I think it's because Facebook is both more visible and personal than other platforms. Horrible, racist, violent, and outright evil content has existed on the internet since its inception. The difference is that journalists don't browse 4Chan and I imagine when people see content on anonymous forums they're better able to rationalize it because they can come up with a multitude of excuses for the poster. It's a little more surreal to see "real" people openly and publicly calling for the extermination of non-Aryan people.

I think that's why Facebook has been getting more criticism and attention in the media than Reddit or Twitter, despite wielding comparable social influence. On the flipside, I also think that's why people are more outraged about the "censorship" issue. If you ban a Reddit user you're just banning some username. If you ban someone on Facebook, you're banishing a person and stifling their voice.

The slippery slope arguments are sensationalist. Facebook had many motivations for implementing this policy, some of them undoubtedly motivated by money, but ultimately I would argue they implemented it because American and western society pushed them to. To the extent that western society has unequivocally decided that racism is very bad and not good, this policy change will also not push us over the edge into total censorship because that's not what we want (and I think it would be difficult to take the stance that that's the direction we're heading in.)


You can think that it is bad to expel people from social interactions with others in that way. That it is bad for a minority to decide which news a majority should have easy access to.

And therefore, you could decide as a society that you want to forbid Facebook to do that.

This assumes you are in a democracy. China obviously is not. But if you're lucky enough to live in one, then you don't have to accept what Facebook does, at least in your country, just because they are a private company.


Pretty sure the point is the double standard. Plenty of people are fine with Facebook doing it, but where are those same people standing up for WeChat's right to do whatever it wants in its private walled garden?

One problem is that the power to vet/censor particular things may be beneficial in a certain pair of hands but disastrous in another, especially when you're dealing with a platform that has extremely powerful network effects and controls so much of the information flow in society. To reiterate the point of the previous comment, would you feel comfortable if FB came under control of the Saudis while it was engaged in extensive vetting? I would not. Naval says that one test of a good system is whether you can hand the keys over to your adversary and things don't go wrong, which applies well in this case.

Another problem is that vetting (/censorship) is often supported when it's "your side" doing the vetting, but I don't trust that any politically tilted group of individuals will engage in unbiased, non-partisan vetting. FB's staff is of a political demographic that differs substantially from the country at large, and this will likely sway decision making away from a fair and balanced outcome.

If FB or Twitter engages in vetting I would like to see it done in a satellite office set up explicitly for that purpose where great effort is made to select individuals without extreme political leanings.


In this age, even among democracies, it isn't just governments that are blocking content -- it's companies that play gatekeeper in communications between people. People have stopped using traditional forms of interpersonal communication, so my only way to get messages out to friends is through platforms like Facebook.

Unfortunately, Facebook themselves like to play gatekeeper on what my friends see and don't see, which I personally think is unethical. For example, if I post something about the election, its visibility may be restricted among my friends, so I need to do all kinds of Unicode upside-down tricks to prevent Facebook from even knowing my post is about the election. If I post a Youtube link instead of a Facebook video, Facebook downranks it because Youtube is a competitor, and downranks it to the point that friends don't actually see it. Sometimes they downrank text only posts to the point that I need to include e.g. a cat picture to get Facebook to give it the proper visibility.

I once posted to my feed about donating to a particular disaster relief NGO, only to have it censored by Facebook, because they mostly censor external links (only a small fraction of friends actually see them). No friends saw it in their feed in the first hour; I asked several and they hadn't seen it. Fishy, eh? I deleted the post and re-posted the same thing saying "Google for XYZ to find the donation website" instead of an actual link, and BAM, 40+ likes in the first hour and several friends donated.

Personally I think this corporate censorship is unethical. It's okay for Facebook to play the ranking game with business-sponsored media but definitely NOT okay with them doing that between friends who have mutually agreed to follow each other. I'm okay with a "popular posts" section on top of everything else, but the "everything else" really must include every single post of all of the people I have chosen to be friends with unless I specifically tell them to mute a particular person's content.

Ironically, WeChat doesn't engage in this type of censorship. They obey government censorship, but besides that, I can guarantee on WeChat that a post that I make that passes the government test will be seen by every single contact if they happen to be looking at the feed. I much prefer that model, rather than Facebook's sporadic random non-transparent censoring.


Because FB is probably one of the few places actually trying to grapple with the scope of this problem in a manner that makes business/practical sense.

Not at FB.

Content moderation is developing so fast, and with problems so thorny, that the gap between the intuitions of normal people and those of trust and safety teams is whiplash-inducing.

Current moderation practices are perhaps analogous to the age of proto-police forces. Since there is also no separation of powers, moderation is a cost center, and comprehension of moderation principles is nascent and disregarded outside of T&S teams, you have people from sales, strategy, or personal connections overturning rules and policy.

The speaker in this video (https://www.youtube.com/watch?v=J19Xa3-SN1M) has a book on this topic that discusses some of the many problems this field throws up.

FB is trying to resolve this by offloading the problematic thought-policing to the state in some form - the white paper submitted in the UK is a case in point.

This is likely the way forward for major firms. It seems unfortunate, but unless someone reframes the incentives and problems, government-sanctioned censorship is the likely future.


I agree with the top post in the linked thread; in these situations I tend to assume incompetence rather than malice. Furthermore, this type of censorship would actually be the best way to boost Mastodon, if only thanks to the Streisand effect.

I could imagine in particular that alternative social networks would often be advertised by people who specifically tend to post content not allowed on Facebook, which may cause a poorly calibrated filter to see the correlation and decide to mark "mastodon" as a blacklisted keyword.
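As a toy illustration of how that miscalibration could happen (entirely hypothetical, a naive co-occurrence heuristic, not a claim about Facebook's actual filter): a filter that blacklists any keyword appearing disproportionately in removed posts will happily pick up "mastodon" if the people mentioning it also tend to post removable content.

    # Hypothetical: blacklist keywords that co-occur with removed posts,
    # regardless of whether the keyword itself caused any removal.
    from collections import Counter

    def spurious_blacklist(posts, threshold=0.6, min_count=3):
        total, removed = Counter(), Counter()
        for text, was_removed in posts:
            for word in set(text.lower().split()):
                total[word] += 1
                if was_removed:
                    removed[word] += 1
        return {w for w, n in total.items()
                if n >= min_count and removed[w] / n >= threshold}

    posts = [
        ("join me on mastodon", True),      # removed for unrelated rule-breaking
        ("mastodon invite link here", True),
        ("mastodon is nice", True),
        ("cat pictures", False),
        ("holiday photos", False),
    ]
    print(spurious_blacklist(posts))  # {'mastodon'} -- guilt by correlation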


Give me a break. Guidelines that are selectively enforced are just power-tripping censorship. This is a website that allows people to post advertisements for cigarettes while stamping down on anything that rocks the boat when it comes to China or labor rights in the USA. I don't see anyone using your logic to defend FB, although it would be just as applicable.

And to preempt people:

https://news.ycombinator.com/item?id=24364535


I mean, isn't this exactly what the Oversight Board is supposedly for?

> The purpose of the board is to promote free expression by making principled, independent decisions regarding content on Facebook and Instagram and by issuing recommendations on the relevant Facebook Company Content Policy.

I wonder if we will ever actually see something appear on https://oversightboard.com/decision/. It took almost a year to put together and has apparently achieved very little in the few months it has been running, apart from lots of bureaucracy.

Interesting to note that it aims to have 40 people on the board - the cynic in me feels that this is to ensure that decisive action is unlikely.


Spam filters don't only apply to email, and Facebook already has one, of course, to stop fake account signups and the posting of commercial spam. Nonetheless, spam filters are a form of content classification. Yes, I'm saying that Facebook should just "give up" and stop trying to do political moderation on top of their pre-existing spam classification. This is not radical. For years they didn't do things like fact-check posts or try to auto-classify racism, and the site was fine. It was during this period that Facebook grew like crazy and took over the world, so clearly real users were happy with the level and type of content classification during this time.

You ask: isn't moderation the problem they're trying to solve? This gets to the heart of the matter. What is the actual problem Facebook is facing here?

I read the WSJ article and the quotes from the internal documents. To me, the articles and documents are describing this problem: their content moderation is very unreliable (a 10% error rate, according to Zuck), so they created a massive whitelist of famous people who would otherwise get random takedowns that would expose the arbitrary and flaky nature of the system. By their own telling this is unfair and unequal, runs counter to the site's founding principles, and has led to bad things like lying to their own oversight board.
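That 10% figure is enough on its own to explain the whitelist incentive. A quick sketch of the arithmetic (the error rate is the figure reported in the article; the number of enforcement decisions per account is an assumption for illustration): an account whose posts attract even a few dozen moderation decisions a year is all but guaranteed to eat at least one wrongful, highly visible takedown.

    # The ~10% per-decision error rate is the figure cited in the article;
    # the number of decisions per year is an assumption for illustration.
    error_rate = 0.10
    decisions_per_year = 50   # assume 50 of the account's posts get reviewed

    p_at_least_one_bad_call = 1 - (1 - error_rate) ** decisions_per_year
    print(f"{p_at_least_one_bad_call:.1%}")   # ~99.5% chance of at least one mistake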

It's clear from this thread that some HN posters are reading it and seeing a different problem: content moderation isn't aggressive enough and stupid decisions, like labelling a discussion of paint as racist, should just apply to everyone without exception.

I think the actual problem is better interpreted the first way. Facebook created XCheck because their content moderation is horrifically unreliable. This is not inherent to the nature of automated decision making, as Gmail spam filtering shows - it works fine, is uncontroversial and makes users happy regardless of their politics. Rather, it's inherent to the extremely vague "rules" they're trying to enforce, which aren't really rules at all but rather an attempt to guess what might inflame political activists of various kinds, mostly on the left. But most people aren't activists. If they just rolled back their content policies to what they were seven or eight years ago, the NYT set would go berserk, but most other people wouldn't care much or would actively approve. After all, they didn't care before and Facebook's own documents show that their own userbase is making a sport out of mocking their terrible moderation.

Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments". There could be nothing more on-topic, as you must know from reading the article. The XCheck system exists to stop celebrities and politicians from being randomly whacked by an out of control auto-moderator system. A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents. You can read the article for free if you sign in - that's how I did it.


This might work if Facebook were just a dumb host of content. But the algorithm promotes content and sends notifications in an attempt to generate ad revenue, eyeballs and 'engagement'. This is less about censorship: Facebook is not neutral. Since they make this choice to spread content, they should be responsible for its effects. In economics, we call these externalities. Facebook is not paying for the spread of misinformation, manipulation or genocide it helps enable (see Myanmar). Asking them to safeguard the platform is just asking them to take responsibility for what their algorithm recommends.