Somewhat ironically, changing the domain to fullwsj.com will redirect you to wsj.com stories via Facebook's redirector and that's currently not paywalled.
Perhaps it's just me, but Reddit's front page is much more toxic and filled with misinformation than most Facebook groups.
(Niche subs are really good though).
The biggest one in terms of users is Facebook. Everyone has given Facebook so many chances to change over the years, and yet they are incapable of changing.
Not even the fines scare them; the fines are so tiny that Facebook laughs them off while raking in billions in revenue.
They will never change and it will only get worse. They are the problem.
If what FB does makes tons of money and doesn't break the law, why would you expect them to change? And sure, you can say: well, change the law. But the law that lets them do it is the First Amendment.
Facebook is the most explicitly duplicitous, sociopathic company in the tech sector. Many companies are sociopathic, especially as you get into pure finance companies like PE firms, but few are as duplicitous as Facebook.
That's why they're on my "wouldn't work for them if they were the last tech company in the entire world" list. That list isn't long, but FB is near the top of it.
Agreed. I have a similar list, it's not super long, but most of the FAAMG are on it, and Facebook is #1 on the "will never" list. Everybody has a price, but some things are non-negotiable.
So there's (for want of a more precisely nuanced way to put it) the default/knee-jerk outrage response to this, aka "how dare there be people above me in society that get the final word on what I can/cannot do without lawful recourse" etc etc.
But then... "more equal than others", taken in strict isolation, kind of goes off on an interesting tangent about how certain personality types are intrinsically socially compatible with each other; less jarring/grating, and more... resonant.
Stream-of-consciousness question: at a fundamental level, is there an exact point at which this resonance, which is arguably benign at face value, can end up enabling social (in)equality?
Not quite talking about individual scenarios of powerful person A taking an effectively entropically arbitrary liking to random person B and elevating them, ideally without [requirement for] compromise.
I mean more in the generalized sense, looking more from the perspective of network/emergent effects.
> Those included in the XCheck program, according to Facebook documents, include, in top row: Neymar, Donald Trump, Donald Trump, Jr. and Mark Zuckerberg, and in bottom row, Elizabeth Warren, Dan Scavino, Candace Owens and Doug the Pug.
Unfortunately, Facebook has no choice. If you do not make your platform appealing to popular people, you will have a hard time attracting their followers there.
How unfortunate it is that such a thing is seen as a "rock and a hard place". Making less money might just be the worst thing possible for a corporation like Facebook.
This is a very good question. What if I'm a Hollywood superstar or an NBA player or a billionaire and I want to chat and share stuff privately with my "colleagues"?
Do they really use FB or Instagram like we all do? I always see "official pages" for people like Bill Gates, but where/what do they share in their day-to-day lives?
Ask yourself what you would use if you didn't want the general public to see it, while allowing your friends to see it. You would use anonymous accounts, or you would use the privacy tools these platforms offer, or you would just use direct messaging apps.
Mitt Romney, the senator from Utah, former Republican presidential candidate and former Massachusetts governor, is also, apparently, the man behind a Twitter account that uses the moniker “Pierre Delecto.”
I liked that, you FINALLY got a taste of the Real Romney.
I had dinner with him and his family. I wasn't a fan before, during, or afterwards. To sum it up quickly, I have never in my life been in the company of people so removed from the everyday working man while being waited on by them. Truly an amazing experience that I look back on with a pre- and post-understanding of "the elite".
My guess is that pretty much everyone has misconceptions about the experiences of everyday working folk, in that the working non-elite are populous and diverse. The key observation is whether their mental models are helpful, and for whom.
I was with well-off people who were still human. The Romneys I was with, I'm really not sure.
If there was a conservative thought between any of them, I didn't hear it. Nor did I hear anything I would have described as a "good take" on anything: if he suggested something on topic X, neither proponents nor opponents of X would have considered the idea valid. In two hours, I don't imagine I heard a single honest, heartfelt conviction.
Without giving up enough to ID me to anyone at any time in the future, I can say this: the world would burn if the Romneys were in charge.
Elite in the title might make you think they mean a politician or celebrity. It's really Facebook's own "XCheck" program, which covers 5.8 million members who somehow avoid Facebook's "moderation".
I don't understand the way their enforcement works. I've reported videos of people literally setting live animals on fire and been told there was no violation, but my wife called someone a "loser" and got a week-long ban.
I had the option to have the post re-reviewed, which took two days. I mean it could just be theatre, but I assumed on the second round a human reviewed it.
From the support response:
> The post was reviewed, and though it doesn't go against one of our specific Community Standards, you did the right thing by letting us know about it.
Setting squirrels on fire and watching the poor things scurry around I guess is cool with Facebook's Community Standards.
> Setting squirrels on fire and watching the poor things scurry around I guess is cool with Facebook's Community Standards.
Unfortunately, there's nothing else you can do about it, either. Who do you even report this to? There's no LEO agency that would spend resources on that, even though this is a well-known precursor to homicide.
Shit like that reminds me how badly society has failed: people can literally torture animals and face little to no repercussions, and get tons of clout and maybe even some money (ad revenue or whatever) in the process.
The problem with animal cruelty is that modern industrial animal farming is torture, and torture of the worst kind. So it's difficult to draw the line without angering powerful groups and rich advertisers.
A squirrel that lives freely in nature and is set on fire once, which it will likely survive (and even if not), has a better life than a sow in an indoor cage where it can't move, is constantly pregnant, and crushes its own babies because there is no space.
Most crimes, even if reported, are not meaningfully investigated. I am not sure that is really society failing. Society is still better on that than ever before.
Yes. Precisely. I think it is because the community of squirrels have no ability to retaliate.
If we observe the trend, the crackdowns are proportionate to the strength of the retaliators. Being against, say, LGBT people isn't the same as being anti-Christian. Hit a particular group of people or ideology, and the bans are well automated at this point.
They can't retaliate against social media platforms. And as to your garden, they aren't retaliating; they are simply foraging the commons to continue living, and in doing so they contribute to a well-balanced ecosystem for life as a whole to continue.
You are within your rights to scare them off, or somehow fence your goods to keep them at bay. Torturing those animals, or any animal for that matter, does nothing to control the damage to the fruits of your labor. Keeping them away is the best investment of time and effort if you don't see their value.
Content policy is just like airport security: a theater. You cannot take a bottle of water on board a plane, but you can take a laptop with enormous batteries. In my experience it's much easier to set lithium batteries on fire than water. But what do I know.
Anyone want to bet that when some major news publication does a story about how these types of videos spread on Facebook, the company will announce that they go against its community standards and that it had no idea this was going on?
Some of the actions are automated based on some NN algorithm score, and then the appeals are human-powered. They have large third party content review offices that are operated like call centers in which humans review these things. I understand they're real meat grinders to work in.
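For illustration, the flow as I understand it would look something like the minimal sketch below; the names, scores, and threshold are my own invention, not anything Facebook has published:

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Report:
        post_id: str
        model_score: float  # hypothetical NN score in [0, 1]; higher = more likely violating

    AUTO_ACTION_THRESHOLD = 0.9   # invented cutoff for automated action
    human_review_queue = deque()  # worked through by the third-party review offices

    def triage(report: Report) -> str:
        """Stage 1: act automatically on high-confidence scores; queue the rest for humans."""
        if report.model_score >= AUTO_ACTION_THRESHOLD:
            return "auto-removed"  # an appeal would route the post back to a human reviewer
        human_review_queue.append(report)
        return "queued for human review"

    print(triage(Report("post-1", 0.95)))  # auto-removed
    print(triage(Report("post-2", 0.40)))  # queued for human review

The point of the two stages is that the cheap model handles volume while humans only see the ambiguous middle, plus appeals; which is also why bad model scores and overworked reviewers compound each other.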
I've reported clearly racist, harassing content before and had the reviewer declare it conforming to their standards. I know people who were banned for bullying for wishing someone a happy birthday. As much as I suspected a bunch of reviewers were just quickly mashing random buttons to pump up their scores, I read that they're evaluated based on the success and failure of appeals against their judgments, so I can't imagine they would be. There are clearly deep-seated problems with this process.
This actually discourages non-brigades from reporting. I reported obviously spam accounts and got the same feedback after a few weeks. Now I don't bother.
Brigades on the other hand have the motivation to play the numbers.
I once made fun of Justin Bieber (said he acts like a baby) on IG and got a warning. Some guy threatened to hunt me and my family, kill us, and do bad things to our bodies, and IG said it didn't violate any rules when I reported it. My account now can't even post the word "chump" without warnings. Talk about backwards.
It’s very safe to say there is no adult in the room at FB/IG when it comes to rule enforcement. I simply cannot wait until they get the whip from some governments.
Build an open messaging protocol and decent clients. Oh wait, some did exactly that.
The problem sadly isn't tech related. It's education. Corporations will market to the masses, who are mostly tech ignorant, if not politically and economically ignorant too. So even with alternatives, the power of commercial communication is greater than what nonprofits are able to afford.
We will have to wait for people to suffer further and further; more will open their eyes to it, until radical resistance settles into people's limbic systems. We are getting close, I think, I hope.
Honest question: have you tried writing first? I communicate with a few people via email every few months. It might start with a forwarded email, or just a quick how-do-you-do, then it deepens into long multi-paragraph replies over the course of days. Being able to sit down to write and rewrite what's been going on without someone watching the little typing bubble means I can get more in-depth with how I've been feeling. I would give it a shot. Try sending people a quick email. If they never reply, no biggie. If they do, you may be surprised at the result.
I would feel kinda silly asking for someone's email address over a chat messenger, but you are correct, there's been no attempt on my side either. "Nobody would go for it" is an assumption on my part.
Let us know how it goes. Our limbic system is more accurate than our perception abilities.
The issue with email is partly a slight inconvenience. Unless you're composing something long, the UX time overhead is significant enough to prefer IM. Email was not designed for frequent back-and-forth on the same thread, even occasional back-and-forth. That's where you lose most people. Email hasn't changed much since the days when we barely had Internet connections, when the burden of finding a connected client was enough to forgive the UX issues.
Enter email address, autocomplete, type text, click send: that's enough of a deterrent for most of us to favor a chat window, where you type and press enter, and scroll up to see what was communicated before. Got an image or audio? Drop it in, press enter. You can even record a quick audio message and boom, sent. Email clients and the protocol simply don't support that. At best you get add-ons, which aren't necessarily supported on the other side.
Openness is the solution, but the email protocol doesn't have what current communication needs require. The corporations building these tools now have the network effect, keeping the crowd in their walled gardens.
I have no overlap between people I know IRL and people I know on SM. They are wholly separate worlds. If I know them IRL, it's text and email - never ever SM.
For me, the point of SM is burnable bridges. It's a place to take risks and later apply the lessons to meatspace. In spite of that, I've cultivated many lasting online relationships but I can't see myself ever meeting any of them in person.
Might be an unpopular opinion, but if you have a connection that exists only via social media, and not also via phone calls or some other more personal form of communication, it's not much of a real connection.
I wish I had time to text and call every one of my friends, but sadly I just don't. Facebook is perfect because I can post there and all my friends see it without me having to call or text every one of them the same story.
Then when we do get a chance to meet up, we can skip right to the discussion of the thing instead of me first having to tell the story.
Especially since some of my friends live very far away and have very busy jobs so we only see each other every few years, but this way I can still keep up on their lives and they on mine.
Fair and valid question. I got rid of Facebook over a year ago but still use IG, but I cringe when I do.
I'm happy I left Facebook, but I will sadly admit I've missed a lot of news and events in friends' personal lives. A good friend's mom, whom I was very fond of, passed away, and I learned about it months later. Another friend had a fast-growing tumor, and I missed that news and never got a last phone call with him. Both of which I regret missing out on.
I’ll still maintain I’m happy with my departure, but it has its drawbacks unfortunately.
I've been a social media hermit for basically my entire life so far, but I folded and installed IG a couple weeks ago once I entered university. It's sadly just the norm. Telling someone my age to "just not use social media" seems like a boomer's shriek, and almost every club or association does all its event coordination and such over IG.
It's extremely hard to get by, keep up with people, or even make friends without it.
The same could be said for Discord, which I've seen over the past 4-5 years grow from a gamer-exclusive chat platform to what is probably the #1 choice for students nowadays for group interactions.
As far as we've come, though, I think these kinds of things are still in their infancy when it comes to their impact on us as a society, so I guess the best thing to do is just wait and see what happens.
Because they employ blitzkrieg tactics that nobody should be able to get away with, and by the time we notice, it's too late to change our consumption habits.
Yeah, I gave up reporting. I've reported several people being extremely racist in comments; no action was taken in any case. It's either moderated by racist people, some poor AI, or "rand()%2==0".
Same, I've reported a ton of death threats only to be told they're not in violation. Only for my mum to cop an autoban for calling someone a spring chicken.
My wife (an American) also got flagged for saying "Americans are selfish". She then made a post about our RV (camper) asking about sewer "hook ups" at a campground and was flagged for posting what looked like a sex ad.
We (the kids and I) now lovingly call her "hate speech Mom".
Especially after Jan 6, there are a couple of things you can say in an ordinary spirited political debate that will cop you a ban on FB. One is several flavors of "Americans are X"; another is variants on "Kill the filibuster" (which I assume is pattern-matching to "[violence-word] the [congress-word]", which they probably up-sampled in the threat modeling for, uh, obvious reasons).
The worst part is that in her eagerness to close the "prompt" on her phone, she "agreed" that she had posted this content (instead of appealing), which probably put some sort of permanent mark on her record. One can only hope she gets kicked off for good one of these days!
wtf?? I haven't been on FB for about 10 years now and every now and then a comment like this comes along which makes me realize just how out of touch with the global bureaucracy I've become
So Facebook now has an independent review board to determine whether their decisions to ban somebody follow their own policies. You can flag a decision to suspend your account for review by that board, but most decisions so flagged will not be reviewed.
I see it, and I note that (a) that's editorializing by the WSJ based on their interpretation of comments from law professor Kate Klonick and (b) the underlying facts are that Facebook claimed XCheck is used in "a small number of decisions" and the evidence in that article doesn't contradict that claim.
Nothing in the article gives hard numbers, so (unless WSJ has those numbers and forgot to report them), we have to extrapolate. XCheck-flagged accounts grew to 5.8 million users, but Facebook has 1.9 billion daily actives. If we assume about equal numbers of issues from the XCheck and non-XCheck accounts, XCheck accounts would make up less than 0.5% of all incidents. That's "a small number" if you're thinking in ratios. If you're thinking in absolute numbers, well, we don't have enough data to know what the absolute count looks like. Could be that a lot of XCheck'd accounts have zero incidents. Insufficient data.
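For what it's worth, the back-of-envelope ratio is easy to check; this sketch just restates the assumption above that incidents scale with account counts:

    # Figures from the article; the proportionality assumption is mine, not WSJ's.
    xcheck_accounts = 5.8e6   # accounts whitelisted into XCheck
    daily_actives = 1.9e9     # Facebook daily active users

    share = xcheck_accounts / daily_actives
    print(f"XCheck share of accounts: {share:.3%}")  # ~0.305%, under the 0.5% figure above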
I got warned for hate speech on FB for saying in a comment that Americans have the memory of a goldfish. I appealed it, the appeal was declined and my hate speech warning remains on my permanent FB record as being against community standards.
Pretty comical, considering it was accurate in context, and while you'd think American 1st amendment free speech rights would count, they don't, because FB is the private property of Zuckerberg.
No need for someone to point out that it's a publicly traded company. Zuckerberg controls 57.9% of the voting shares of FB. It is his personal property that he allows others to have an inconsequential piece of and everything that is wrong on the platform is because of him.
I got a 48h ban for calling the Japanese military "the japs" in the context of the Rape of Nanking. Wouldn't want to offend the group that raped and murdered millions now, would we?
A friend got a 48h ban for calling herself a "rital," a term for an Italian immigrant in France that used to be derogatory a century ago.
This is also the same company that allowed a terrorist to livestream a killing spree for 17 minutes despite it being reported over and over again. To add insult to injury they allowed copies of the same footage to proliferate across their platform for weeks.
Facebook spends a lot on PR talking up its AI capabilities and how they're being applied to moderation. Would be nice if it actually worked.
> In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”
Would be great if us plebs could get the privilege of accurately enforced policies.
This kind of makes sense. For every high-profile person posting an ML-flagged "Napalm Girl" in the context of a discussion of the Vietnam War, there are thousands of instances of real child porn.
It cuts both ways. For every high-profile person posting real porn (e.g. Neymar's revenge porn in the article), there are thousands of regular users with ML-flagged "napalm girls".
Well of course. Why wouldn't they? Money and friends aside, they have to do it for people who wield power like China's upper echelon or the NSA. If they say no, they will be shut down in those markets.
Then to put it more simply, large companies must bow to influential requests because that influence determines the flow of money. Government, private, or individual: it doesn't matter.
And Facebook is not actually shut down in China. It's partially blocked and working on a censorship project to reinstate itself there.
>“We are not actually doing what we say we do publicly,” said the confidential review.
Why do we even have the collective fiction where corporate messaging around a sore topic is treated as trustworthy? Especially with companies that we know lie all the fucking time?
Every FAANG company compulsively lies to its customers, and it's a shame because it only encourages up-and-comers to "imitate the best". Making matters worse, American politicians are utterly ill-equipped to handle this kind of deception. Not only do they likely profit off the success of Facebook, there are numerous domestic interests in preserving their control. On top of that, nobody can pull the plug because it's wrapped up with the CIA, FBI, FCC, and FTC.
Why, out of the largest tech companies (Amazon, Google, Netflix, Microsoft, Facebook, Apple, and Twitter, which isn't as large but is similar to FB as a social network), does only one have a reputation for constantly lying and misleading people on purpose in its self-interest? Is it because we pay more attention to FB, or is it because FB is different in some way?
I don't actually see a reason to believe that _any_ of them would be telling the truth _ever_. There's no incentive to be honest, and there's plenty of incentive not to be.
I put all of the companies you name in pretty much the same basket, and avoid them as much as I can. I don't shop on Amazon, and I don't use Netflix, Microsoft, Facebook, or Twitter. I don't use Apple products. I do use Android and Google Docs/Drive/Mail, but my next phone will probably be an open one, and I could drop the GSuite stuff without too much pain; it's mostly laziness that I haven't already.
I'm in the same boat. Trying my best to de-FAANG my life. Self hosting as much as possible, with as little management overhead as possible, but I'm still stuck on Android and I don't know how to break free.
I'm self hosting Ghost for blogging, Home Assistant for smart home controller, and in the middle of setting up Vaultwarden for passwords. I also run a lot of stuff off my Synology - Synology Drive instead of Dropbox or Google Drive, Synology Photos instead of Google Photos. I don't have a great solution for email or phone - emails is paid hosting through Zoho and use Android for phone. I'd like to get off those. It's all a long drawn out process.
In a few years this will be considered the only sane approach to digital life. There is still some road to travel though in terms of making it easy for the majority of people.
In retrospect, the "big tech" era will be seen as a sad, dark, insidiously toxic period. So much hypocrisy, so much in-your-face failure to honor basic social contracts, so much misallocated talent...
In this particular case, it's only Facebook and Twitter that are significant social networks and they are responsible for the biggest spread of misinformation. Google is a close third, if you hold Google responsible for its own search results (and not the websites they link to which their algorithm considers most important).
I mean I think they're all overgrown capitalist machines that thirst for their users' data, mindshare and money, and all of them have a heap of dirty secrets that either have or will leak out sooner or later. And none of their dirty secrets - like this 'revelation' that Facebook has a database of favorites - will be surprising.
Twitter has it too: Trump got away with stuff most people would be instantly banned for. They cited that he is a person of high importance, but the real reason is that Trump, and the ripple effect each of his tweets had, were responsible for a big chunk of their annual revenue.
Remember a few years ago when Twitter was struggling financially and stagnating in terms of activity and users? I'm sure I remember a few articles about that. But since then, Trump and some other populist politicians and commentators have caused big waves there, because each post starts a very big and long discussion involving thousands, if not tens of thousands, of people, all of them having "hot takes" on things.
TL;DR they exempt people from the rules because they make them the most money.
> Facebook and Twitter [are] responsible for the biggest spread of misinformation
I really wish it were that simple. If it were, we could just ban it all and be done with it. For the US, this is a symptom of a country splitting into multiple warring parts, partly whipped up by news networks and print journalism, and above all by the constant war for your attention.
TV news picks up some stupid tweet and offers it as a morsel for five minutes of hate. This pisses people off, they go online and berate the original tweet, the "other side" counterattacks, rinse, repeat (see critical race theory).
The general public are being played, so that a number of large corporations can get attention enough to sell advertising space.
> Google is a close third, if you hold Google responsible for its own search results
Why wouldn't you? I mean, they are well known for allowing advertisers to manipulate results. They track your location, what you're reading, and who you're talking to, and sell the products to third parties. If we should be keeping an eye on anyone, it should be Google. The level of questioning that FB gets must be applied to all of the internet giants.
> Google is a close third, if you hold Google responsible for its own search results (and not the websites they link to which their algorithm considers most important).
If you include the videos that YouTube recommends, they pull even with Facebook and Twitter.
They are all optimizing for the ability of content to keep eyeballs glued to the screen (so they can show more ads), and nothing else.
>in Google’s effort to keep people on its video platform as long as possible, “its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with—or to incendiary content in general,” and adds, “It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content.”
Microsoft was a serial liar for decades over its efforts to quash competition for DOS, Windows, Internet Explorer, and its "embrace, extend, and extinguish" strategy. Amazon's statements about working conditions in its warehouses and its treatment of workers trying to unionize directly contradict documented actions. Twitter says one thing regarding abusive and hateful users but does another. Google finessed its theft of Java.
Some would say that you don't get to be as large and as profitable as those companies without resorting to mendacity and rule-breaking. Some would even say that such moves are required and acceptable.
> Is it because we pay more attention to FB or is it because FB is different in some way?
Facebook probably has the most well-known face of any of these companies in Mark Zuckerberg. And one difference between Mark and other typical cut-throat business leaders is that he's got a reputation for being weirdly socially awkward in a noticeable way to the extent that there's numerous memes about him being a robot or alien.
Tim Cook, Jeff Bezos, and others might be similarly ruthless entrepreneurs, or worse, but they don't come across as the same level of creepy, even though they also want to own your data and put cameras and microphones in your house, just as much as Facebook does.
Further, why do we continue to accept the fiction that rules and laws apply to the elite as they do to the masses? Even the not-so-elite, the Stanford swimmer who brutally raped and assaulted Chanel Miller, though convicted, was given a light sentence because of his "bright future", according to the judge.
Is there anyone walking around that actually believes that rules and laws apply to the elite?
Some seem to think that the reason this is excused is that (in the US at least) every poor schmuck seems to think they will be rich and powerful someday and get to partake. I've never found that very credible, but smarter people than me seem to think so.
In other words, when the elite are impacted the full force of law will be brought to bear on the perpetrator of the injustice. This used to be called aristocracy. We've spent the past 250 years pretending it doesn't exist here in the States.
I find it funny how we use phrases like "Russian oligarchs" all the time without batting an eye, but we never use them when referring to the Bushes or the Clintons in the US.
The power resides in generational wealth not some commoners who got lucky. These people don't need to hold elected office because they can buy whatever they want, including manipulating government in their favor.
The Clintons and Bushes are not really oligarchs. They represent, if anything, a monarchy of inherited wealth and status.
Steve Jobs, Bill Gates, Sergey Brin, and Mark Zuckerberg are better examples of oligarchs, because they managed to create their own wealth and with it obtained political clout.
> The system tends to be very good at punishing rich people that swindle money from other rich people.
… Eventually? I'm surely not the only one who thinks of Madoff first, and he got away with it for almost a decade after suspicions were first reported.
Madoff got 150 years once he was convicted, though you are correct, I probably should have stated that the system is very good at punishing rich people that are CAUGHT swindling other rich people.
It's a convenient fiction the rich/elite propagate by funding stupid plot lines in books/TV/movies, while hiding that the elite get off scot-free and later expunge the records.
Convenient because it allows the state to continue to pass more and more draconian laws that prevent any change to the status quo in which the rich get richer and poor are further dehumanized.
Yeah, I wonder why this is. Perhaps misplaced idealism about human nature and authority. The judge, the CEO, or the journalist, despite their lofty titles, are human too and can be influenced by others. They are also not immune to their own personal desires and biases.
Maybe we could consider it an easier explanation rather than a simpler one. It seems like splitting hairs, but it's important.
The rule of law however has been continuously pursued across centuries and different societies in an attempt to subvert this 'default' operating mechanism.
I would argue the nature bit is where you favor yourself. It's hard to find people who will disfavor themselves in the name of egalitarianism.
The rest is the result of resources/funds being concentrated in the hands of people who also end up with power due to said resources/funds.
Yes. Donald Trump was impeached twice and has an unknown number of civil and criminal investigations ongoing. If a billionaire and the leader of a major political party is in this much trouble, it kind of shows that elites do get in trouble.
Trump is an extreme example. He basically did everything he could to get himself into trouble. He essentially operated as a troll, inciting the public on Twitter, followed by an attempt to overthrow the government.
If we looked at normal and even favorable politicians like Nancy Pelosi or President Biden, we might find something worth investigating as well that has been swept under the rug.
Punishing politicians in itself can be political even...
Oh, but that's part of the fiction and the rigging. If he were a poor non-white person, he'd have been in prison for the last 30+ years, or at least since whenever his first swindle and self-aggrandizing fraud happened.
Another way to look at it is that an elite mired in so much corruption and antisocial behavior was rewarded by being elected POTUS and allowed to finish his term and orchestrate an insurrection without consequence.
Impeachment + acquittal and open trials aren't punishment. They are the very opposite: they show how we make a show of justice for the elites while doing nothing practical. I can promise you that nothing substantial will come of all these trials; he will at the very worst have to pay some small percentage of his huge fortune.
Brock Turner wasn't elite in any sense besides his acceptance into Stanford, which had been revoked. His father was an electrical engineer and his mom was a nurse.
GP referred to "not-so-elite." In this case, the small bit of "eliteness" would be white, male, athlete (All-American), upper middle class, accepted into a good college.
And that's probably enough to justify saying he has a "bright future," in this country and day and age.
There are peculiarities of eliteness in America that matter here. Because Americans maintain the fiction of a classless society and don't have the legal framework of a caste system, what counts as elite in America affects how the world views American elites, and how Americans view elites in other cultures. We don't, for example, care how many cows are in a bride's dowry.
The number of cows in a bride’s dowry is just a straightforward proxy for wealth, and that’s pretty universal.
Sure, in some cultures they use the number of cows as the reference point, in others they use the jewelry worn or the car driven by the individual. I know that some people compete in who can afford to spend the most money on their wedding too.
All of these are literally just ways of quantifying wealth through displays of it (regardless of whether it is real wealth or they just decided to spend all their savings on a $100k wedding). And I don't see how this is somehow unique to the US at all.
Can you imagine a judge in the US letting someone convicted of sexual assault off with a light sentence because he has a lot of cows? The point is that the elitism is not just relative to other Americans, but to the rest of the world.
> Because Americans maintain the fiction of a classless society
There is nobody in the US that believes US society is classless. Talk about fictions.
Everyone knows the upper-class, middle-class, lower-class segmentation. So how do you reconcile your premise that Americans don't think there are classes with the fact that everyone in the country defines themselves by such structures? Americans are taught the class structures all throughout school. Americans are informed about the class structures 24/7 by popular media and news, from the NYTimes to CNBC and everything in between.
The hyper-rich and the poor have always been part of US society. There are no exceptions in terms of grasping the distinction; nobody fails to get it. The US has had the hyper-rich and the poor since its founding, and everyone here has always been aware of the divisions. It's in our history books, it's in our earliest literature. It's omnipresent.
Before there were the industrial wealthy and the working poor, there were the land barons and British lords, farmers, agrarian workers, and the slaves brought to the US by the conquering European empires. The class systems here predate the country itself, so yes, Americans are fully aware of it all. It has never not been part of US society.
The fiction is that there's class mobility. That's part of why others commenting here have tilted towards minimizing the eliteness of a criminal convicted of sexual assault: they want to think that his "hard work" landed him where he is and that they, too, might accomplish as much once they grasp their own bootstraps.
I thought the whole point of them complaining about "rules for thee, not for me" and "the elite" were class complaints, but now the goalposts are moved and so anyone who excels at anything is part of "the elite"?
Yes, I suppose the kulaks were members of the bourgeoisie after all, for they had skills, and skills are capital, and thus they are part of the oppressor class -- the elite.
Indeed, his mother, a nurse, and his father, as an engineer, both possessed intellectual capital and are thus counterrevolutionaries.
Just to be clear: the above is somewhat tongue in cheek and obviously nothing about the boy's background should be relevant in a violent crime case, and his was particularly disgusting.
Seriously, top athletes live on another planet in university. I remember working at the UT-Austin textbook store in my 20s, and while everyone else had to find what they needed on their own, the football team had special permits they could bring you: not only did they get their books for free, you had to go fetch the books for them.
My understanding is that he also had a history of giving light sentences, not just to upper middle class defendants, and that his banishment has gone well beyond losing his job.
He is far from the only case of the rich getting differential treatment under the law. These stories are really a dime a dozen. How about the case of the rich boy from the affluent suburbs of LA (Palos Verdes Estates, where Trump's golf course is) who decided it was cool to become a gangbanger, got involved with a murder, and was acquitted?
The existence of a remedy isn't a good excuse for avoidable injuries. The resolutions you allude to in that case took place after the public became incensed at the delicate treatment handed out to a guy who was literally caught in the act of humping a passed out woman behind a dumpster.
There are quite a few people who really, really want the caste system implemented in the US. Extreme class disparity embedded into our justice system. We understate these efforts at our peril.
He was given a light sentence, but it's worth noting that the judge was following the recommendation of the probation department. This wasn't a judge going rogue on his own.
As the principal adjudicator, the judge can't be absolved of his responsibility by pointing to advisories. That the probation officers were biased in their recommendation doesn't absolve him of his own biased judgment, and whether he's actually biased or not is a matter of the totality of his judicial record, not just this case.
This isn't just an issue with the elites, though. Locally I've noticed that when you look into the perpetrators of a lot of violent crime, they often have a string of prior cases where they were let off with almost no punishment (pleading down to a lesser crime, given probation that's not followed up on, suspended sentences, etc.). Then you have other cases where someone has done something relatively minor (or doesn't seem to have done anything at all), and the book gets thrown at them.
I'd say the American justice system is capricious more than anything. Plea bargains, which are extremely common in America but extremely rare to non-existent in most countries, also play a big role. The guilty can reduce their sentences, while the innocent are threatened with years in prison unless they forfeit their right to defend themselves.
Replying to cratermoon: as an English-born, nearly retired child of a now almost entirely American family, whose parents totalled 100 years of life, and who was taught to program and think by my uncle, who would be 121 years old if alive and who worked intimately with the American war command, I have no answer to your question other than this: it is the defining trait of the American Way to believe that the rules apply to all of us equally. The defining trait of 20th-century British nature is to be the inbred product of generations of ancestors who never doubted that the rules for the privileged are not even comparable to those for the populace.
I am too old to reconsider the possibilities of an investment-linked naturalisation process, but the development of American political culture since 2016 has convinced me that I would be committed to doing everything possible to reverse the recent real, and far more damaging perceived, decay of moral and judicial common citizenry in the USA. The rest of the world doesn't know how terrible this is for everyone.
And this is why I believe the law should be truly blind. Any details such as race, gender, sex, education, or political leaning should be hidden in cases, with only public prosecutors and defenders allowed. We could easily handle the whole process via text.
In short, I'd guess that's because the companies reporting on them are owned by billionaires, and shareholders with bags of fossil fuel, war, banking, etc, stocks.
For example, the outlet reporting this is the WSJ, owned by Murdoch. Why is the WSJ seen as respectable?
Why are any of the MSM seen as respectable? They're all objectively untrustworthy on basically everything except sport. They're all in it together, and the sooner we cut them out of our collective headspace the better our chance of survival.
Let's not overestimate the critical-thinking advantage separating journalists from other people. I can check the citations as well as any editor, sometimes better, because I know some things about math and have no reason to be biased.
When there's no scene to be at with a camera and nobody is getting interviewed, I just don't see what the media has to add. Spending years poring over account records? Interviewing an eyewitness to get the real story from a hundred conflicting ones? Combing through a million tweets to find one with a video of a natural disaster? Those are things that a journalist could conceivably do better than I could.
Most journalists nowadays seem pretty gullible and hard-set in their preconceived world views. I fail to see any advantage in critical thinking skills in the media class.
Journalists fail all the time, for sure, but there's a lot of journalists in the world churning out content every day. We tend to focus on the failures, but I don't see any evidence that the failures represent even a majority of the content.
Well here's one thing: they tend to cite sources. Do you have any actual data on your assertion? What you "fail to see" is merely a statement of your observation skills. I would like to compare that with other sources that aren't so clearly biased.
Yeah, I get it. Citing the source is kind of basic. But by itself, alone, it doesn't amount to much if they can't even keep fidelity to the original source, if they can't even begin to understand the content they are citing. And this happens all the time.
Can you provide data about this? Can you provide data over time - like is this a new thing?
Until then you are just doing the same thing as the parent: making a bunch of unsubstantiated claims based on what can only be labeled as "your own bias".
I'd think that you "media hater" folks would act differently than you do. You claim to hate the media for misrepresenting the truth, but then refuse to actually back up your claims with any real data. If it weren't so sad, it would be funny.
That's kind of the problem: the "legacy" media are unreliable, but people seem to take that as license to transfer total trust to some completely random media organisation that has god-knows-what agenda. Because it's very difficult to operate in an environment of total paranoia about every statement.
"Total paranoia" might be overselling it a bit. "Total skepticism", that is, "presumed bullshit until otherwise substantiated" is much more reasonable, and it can be applied to all media sources, legacy and otherwise.
You can't just say "cut them out" without some replacement way of disseminating similar knowledge around. The media has plenty of issues, but as a whole, they are still the best way we have of doing that. At least with media we have a good sense of where their bias is from outfit to outfit and can take in additional information or get it from multiple sources to combat those biases.
So I put the question back to you, what would replace it? And one answer I won't accept is individuals without any oversight at all - that isn't a viable answer (i.e. blogs, video, social media, etc.).
Do we have that collective fiction? I don't personally know anyone who thought that Facebook applied its rules to all equally, and certainly nobody here in the HN comments seems to be surprised.
> Why do we even have the collective fiction where corporate messaging around a sore topic is treated as trustworthy? Especially with companies that we know lie all the fucking time?
Yeah, I don't get that. PR people are adversaries, not allies.
Clearly we need to set up a Social Media Agency and regulate them. We'll staff the agency with high-level political appointees comprised entirely of Twitter and Facebook executives.
It was inconceivable before 9/11 that we would ever have something called a "Department of Homeland Security" or be "asked for our papers". Yet right afterwards, we were given the Department of Homeland Security and the Patriot Act, which has been renewed every single time despite being regularly abused. My children have grown into adults not knowing anything else. Just wait a short while. If you don't do something, you will live under the "Ministry of Truth".
I can only guess why, but I do know that it's not new. For example, around 1920, journalists were praising the town of Hershey as a town without crime even though the town had many incidents (Michael D'Antonio).
Whenever I see such a corporate communication, my mental process is to immediately imagine the meeting that led to its creation, having attended many such meetings.
Recently I've resorted to explaining various news items to my kids as "well, there would have been a meeting, and their lawyer would have said this... and the marketing person would have said that... and then they tried to figure out how to put out a statement that was true but didn't get them sued..."
Today I asked the public transportation company here to either enforce their mask mandate or drop it completely, so people who actually care can decide for themselves how big a risk they are willing to take. With a mask mandate that is not enforced, many customers may get a false sense of security.
Same goes for Facebook. If people think that everybody is being fact-checked and false content is taken down after being reported, then stuff that doesn't get taken down must be true.
My personal pet theory is that -- whether we admit it or not -- most people are somewhat... spiritual? Humans tend to see and believe in meaning where none exists.
Seeing great injustice like this is just really hard for us, because it's a constant reminder that either:
1. there is no meaning / purpose / higher-power / etc, or
2. we have been forsaken by whatever higher-power there is.
Both of these are uncomfortable, so it's often easier to just subconsciously fall into ignoring the issues and lulling oneself into a bit of happiness. Until something like this happens, then everyone has to act surprised for a bit; lest they admit to #1 or #2 above.
I will jump on this bandwagon and say color me not surprised.
The issue is emblematic of a bigger problem, though. Trust in our society is generally down. It stretches beyond the sectors normally understood as BS (advertising and HR come to mind) and has moved on to corrupt just about everything else out there. We are at a point where the only organization that is somewhat trusted is the military.
Because we need regulation of new technologies, and we need to address issues that have become endemic and long-standing enough that the general populace genuinely understands them, from neonics and bees to misinformation and Facebook. But our political class has adopted a policy that demands no action be taken, as that is the official policy position on all free-market-related issues for one of the two major political parties.
So popular belief that our society can fix the issues it is presented with drops.
Which means societal trust drops.
And those who are causing said problems, become emboldened.
Giving more power to those abusing it is not going to create trust.
Trust has been broken for all of human history. It’s just much easier for us to notice it now. I don’t know the solution, but more of what we’ve been doing isn’t it.
> Because we need regulation of new technologies and to address issues that are becoming endemic and long-standing enough that the general populace genuinely understands them.
in my view, regulation is not the answer. what's happened, from my perspective, is that only a small handful of platforms have gelled into place, have come to take over all of our social media world. this is largely via a system of rampant acquisitions/anti-competitive behavior, & enormously high switching costs of leaving any given network.
we need more people engaged & trying to find answers. we need more networks. we need new ways of networking, new ways of moderating, at scale. the current contenders are mostly well over a decade old & have rotted into place, and trying to regulate these vast networks is not going to bring us to a better place. we have to really journey, to better, less dull places, via innovation & competition.
personally i feel like social networks are elemental to freedom of speech & democratic practices in the world today. if we as a public value speech, believe it important for public good, i would like to see funds set up to fund development & running of public good works. we should fund the fediverse, we should fund people trying to build helpful moderation tools; we should practice actively the values that are important to us.
If that's to be changed, who takes the action? The government? Does the government decide what's bad and what's not? It's an unanswerable question that no one agrees on.
Society hasn't worsened meaningfully, this kind of stuff has always existed, but the internet exposes this to everyone, pulling everyone out of their pre-internet bubbles.
And the military is the pawn of a corrupt State Department, national security elite, and military industrial complex, which renders moot the mostly honorable conduct of people in the military.
> the only organization that is somewhat trusted is military
Only in certain quarters. The US military has been subject to deliberate infiltration by evangelicals (especially in the Air Force), white supremacists, and proto-insurrectionists for decades. More recently, there has been a seemingly endless parade of officers committing public acts of insubordination, or even incitement to mutiny/sedition. At this point, trust in the military is mostly limited to people who share those agendas.
Which claim do you find surprising? The infiltration has been documented so many times I'd consider it obvious. Here is just one example of the evangelical flavor for you to follow up on.
I'll let you look up the examples of military personnel participating in January 6 yourself. I'm a little wary TBH of being "sea-lioned" about things that are super well known and documented. It's an argument-by-exhaustion tactic I learned to recognize at least twenty years ago.
Thanks for the links. For your first link, I'm not sure how that's considered infiltration if the "infiltrators" are proudly announcing their beliefs and intentions, and from what I understand, as a group they have been doing so since the 19th century.
If there are actually folks that, for example, pretended to be atheist/agnostic, climbed up the ranks, and suddenly became an evangelical then your worries are more understandable, though I’m not sure if that’s happened.
The info in the second link does seem worrying if true.
For the last link, considering there are thousands if not tens of thousands (?) of Lt. Cols, and equivalent ranks, in the US military, it’s somewhat surprising but not that unusual, or worrying, that there’s public insubordination by a few.
They're not hiding their beliefs, but they are hiding their intentions. When they join the military they take oaths to defend the constitution, which includes separation of church and state. They are strictly forbidden from allowing their religion to affect performance of their duties. When religious groups are actively recruiting people to join the military with prior intent to violate their oaths and regulations, that's still a Very Bad Thing. The fact that the secrecy is not total seems like exactly the kind of sea-lioning quibble I predicted.
> considering there are thousands if not tens of thousands
A little research shows that there are just over 10,000 LTCs in the Army, which is the largest branch so probably fewer in each of the others. But this has gone much higher than that. Another example is Stanley McChrystal, a four star general (there are only 43 of those in the army right now) and director of Special Forces Command, who resigned to avoid being formally charged with insubordination. Do you think his example helped or hurt wrt other officers committing similar acts of insubordination? That it was good or bad for military discipline and national security?
> not that unusual, or worrying, that there’s public insubordination by a few.
That's insane. The military runs on discipline. It's one of its most important, almost defining, features. Yes, this is surprising. There have been very few cases in my lifetime. Yes, it's worrying for a senior officer not only to be insubordinate but to incite others to follow them into mutiny/insurrection. Dismissing it as though military service were no different than being in a chess club is absurd.
> There have been very few cases in my lifetime.
There ought not to be any cases in an ideal world. But of course, given human nature, etc., a small rate, say a 0.1% insubordination rate, seems quite reasonable (which may still be too ambitious unless the new recruits can maintain a very noble character).
There will always be people with less than virtuous intentions climbing up the ranks at any given point in time, at any given organization.
So the military certainly seems to be doing better at separating the wheat from the chaff than the federal bureaucracy or any private organization even 1/10 as large.
Please stop posting unsubstantive comments to HN, and particularly please don't fulminate or post flamebait. It's all against the site guidelines, and not what this site is for. You can make your substantive points without that.
Now that there is concrete evidence that moderators are exempting people from the rules - aka selectively enforcing their own TOS/AUP - does that change their standing and protections?
The distinction lies in whether the service provider has rendered themselves a "publisher" under 230. The protection has historically been broadly interpreted but, in theory, Facebook could lose the protection if it chose, selectively, what content to promote or remove in violation of its own public TOS. Generally:
You have case law for this claim? Or hell, I'll take a quote from your "source" you think supports it.
(that's a trick question: No such case exists. What you say is not the law -- for anyone interested in a more-entertaining version summarizing the state of the law in this area than court decisions and statues, check out https://www.techdirt.com/articles/20200531/23325444617/hello...)
From a discussion of case history provided by your helpful link:
"Generally, courts have said that a service’s ability to control the content that others post on its website is not enough, in and of itself, to make the service provider a content developer."
I have some theories but was hoping someone better informed than me would comment so I could learn more first and come to a more thoughtful position even if it's "not applicable".
But hey, your easily googlable link is useful too.
No. All that s.230 does is declare that platforms are not the 'publisher or speaker' of content provided by another 'information content provider'. It isn't a common carrier provision, so platforms are allowed to make whatever decisions they like about which people they're willing to host, or what TOS/AUP they want to enforce.
For such a simple provision, it's astonishing how many people are writing bad (and sometimes bad-faith) takes on what it means. [Edit:] It's actually absolutely as straightforward as it appears. Which is not to say that it couldn't be changed (and there are reasonable arguments both ways) but confusing what is and what ought to be is a hugely annoying feature of many armchair legal analysts.
Many discussions are explicitly about what S230 ought to be, though, not what it is. Most discussions I've seen start out by stating that it was made for 1996-era bulletin boards and is dated. An update is long overdue to handle this blanket immunity that's being abused by social media behemoths.
You don't have to be a lawyer to know something is a bad law and something is being abused.
Sure. And people writing 's.230 allows Facebook to have its cake and eat it, by allowing them to control their content and yet have immunity from responsibility for that which they choose to leave up' have a point. But there's an awful lot of people arguing that this or that moderation decision means that Facebook 'have now moved from being a platform to a publisher' and should be sued. Normally when Facebook have taken down something the commentator agrees with, or have left up something they think is harmful.
s.230 has no platform/publisher trade-off. If you're an intermediary and not the original information provider you are expressly not the speaker or publisher, irrespective of your editorial choices. That's the whole point of the provision. And it's really straightforward. A lot of people seem to want to muddy the waters, and they shouldn't.
As written, nothing changes with this not-quite-revelation with respect to Section 230. It does recolor some of Facebook's statements about consistent treatment and enforcement, but those are other matters.
> confusing what is and what ought to be is a hugely annoying feature of many armchair legal analysts
To be fair, the delusion is shared by many, including law itself. If "what ought to be" was the same as "what is," then what form of law would be needed?
In what sense is Facebook not a publisher? Their algorithm acts as an editor, choosing what to show me. If they had a simple chronological feed, then the platform argument would make sense.
If the NYT created a service where the articles I see were selected algorithmically, would they suddenly not be a publisher?
The "publisher" versus "platform" distinction is 100% a made-up distinction to motivate bad §230 takes.
What §230 does, very simply, is say that websites posting user-generated content are not liable for that content, even if they moderate the content. It was passed in response to a pair of court decisions that concluded that a website that moderated content (including, for example, weeding out profanity or pornography) was liable for all content posted, and a website that provided no moderation whatsoever wasn't liable.
I finally looked up the actual text of §230 and it says this:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
So I guess the NYT would be responsible for articles they generate but I suppose they would get a pass for anything they re-publish (like from a wire service).
> So I guess the NYT would be responsible for articles they generate but I suppose they would get a pass for anything they re-publish (like from a wire service).
Exactly. Though, interestingly, the second part of your statement is only true for the online edition. For NYT-on-paper, they're liable for all of it. The same with the comments section: online, it's covered by s.230; offline, the 'letters to the editor' section in print is the responsibility of the paper.
Because s.230 expressly provides for them not to be treated as one. The worry at the time was that information services making editorial decisions (taking down harmful content, in particular) would be treated as publishers, and so liable for what was left up. That creates an obvious moral hazard problem, encouraging bulletin boards and web hosts to refuse to even look at what's being posted, to avoid liability. So s.230 was added to the Communications Decency Act to make clear that the legal responsibility would fall only on those originally providing the information.
This situation isn't mirrored outside the US, FWIW. IIRC England & Wales will impose liability for libels etc., but only if the host had actual or constructive knowledge of the content of the post and chose to let it stay up. That introduces quite a lot of legal uncertainty and a bias towards deleting controversial material but may be better overall. I don't really know.
When a company (or even more generally a social phenomenon) is so big, the only logical consequence is that it becomes embedded in the layer that it services.
In society not everybody is equal, and a social movement with the massive scope that Facebook has cannot deviate from such a rule.
Power law is a thing, you can't escape it, not even the Universe can.
By that logic, the mere fact that anything exists would be wrong, and truly believing that makes the most destructive behavior the only reasonable one (if what is, existence itself, is bad, then destroying it is good).
In society not everybody is equal; however, in societies with the rule of law, the idea is that everybody is at least deemed equal under the law. That this unfortunately often fails in practice is very different from explicitly and intentionally having separate rules.
If these are known misleading public statements, then can the SEC prosecute them? I'd think these statements can affect the stock price, and that this is securities fraud.
They would likely raise a "mere puffery" defense. Our legal system recognizes that in the course of business people will inevitably lie. At least a little. And so puffery, as a matter of law, is immaterial.
The puffery doctrine is quite controversial in some academic circles. Though I'm not sure it's litigated much anymore as a practical matter? At least not when it comes to civil suits alleging securities fraud.
I'm guessing you have just met "prosecutorial discretion" for the first time. Prosecutors have the power to never bring charges against their friends and allies with basically zero risk.
Consider the Jussie Smollett case. Kim Foxx, the Cook County State's Attorney (and allegedly close to Smollett), dismissed charges rather than recusing herself and bringing in another prosecutor.
It took widespread outrage to reverse that decision and Foxx still has her job.
Anything smaller than national outrage against a DA is almost always entirely overlooked.
Your next shock will no doubt be about the nature of grand juries. A prosecutor chooses what information to show to the jury. It is perfectly acceptable to leave out incriminating evidence, or to leave out exculpatory evidence.
A prosecutor can get friends and allies off the hook or punish opponents this way while claiming "the people decided". The whole grand jury system needs to be reworked to ensure it is more equal (or simply done away with).
The SEC doesn't conduct criminal prosecutions. They can only bring civil actions for securities law violations. If they do find evidence of criminality they pass it to the Justice Department for possible prosecution.
> In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.
[...]
> While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.
[...]
> In practice, most of the content flagged by the XCheck system faced no subsequent review, the documents show.
[...]
> In addition, Facebook has asked fact-checking partners to retroactively change their findings on posts from high-profile accounts, waived standard punishments for propagating what it classifies as misinformation and even altered planned changes to its algorithms to avoid political fallout.
This is a pretty damning indictment of a platform that 52% of American adults use as a news source. [0] Forget the toxic element FB and platforms like it introduce into social relations, the past several years have shown us the extreme power of misinformation and disinformation to polarize the US as a whole.
Something needs to be done here. The lack of oversight is astonishing. Even just the effect they likely have had on elections by selectively including candidates is a huge disruptive effect to the entire fabric of society. Someone needs to be held accountable.
It is exactly what you would expect from such concentration of power and influence.
What is actually surprising is the number of people who aren't aware of this fact, or who more likely find it appealing because the tech giants enforce their point of view most of the time.
It seems like no one has figured out a good system for moderation on the internet.
IIUC, Facebook hired contractors to do it, then realized that that didn't work and created XCheck to cover the visible cases, and is now in trouble because XCheck also doesn't work and rubber-stamps everything. Even before this there were news stories about the horribleness of those contract moderator jobs. Reddit tried to federate moderation, but it's since become clear that all top subreddits are moderated by the same people. Even HN only works because dang busts ass to keep it good, and that has obvious limits (what happens when dang goes on vacation or retires?)
I think part of the problem of "moderation" is exposure, and the incentives to maximize user engagement. Posts that nobody sees don't need to be moderated. The problem comes from the fact that platforms offer the most visibility to the worst content, because getting users riled up, excited or upset is the core of their business. It's their only business.
Maybe moderation could be solved by regulating the number of likes or reposts a given user can make or a given post can receive. Seems a little far-fetched but worth thinking about.
I don't think universal moderation (a moderation standard across all users) is possible or even desirable.
Different users want different things. There are users who never want a single even mildly insulting word. There are users who want unlimited freedom.
The best you can do is break moderation down and let people opt into a level and form of moderation. Tell them upfront what they are getting and let them pick (or let them make their own moderation rules that apply client-side).
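To make that concrete, here is a minimal sketch of what opt-in, client-side moderation levels could look like. Everything in it (the level names, the thresholds, the toxicity score) is hypothetical, not any real platform's API:

    # Hypothetical sketch of opt-in, client-side moderation levels.
    # Level names, thresholds, and the toxicity score are invented
    # for illustration; they are not any real platform's API.
    from dataclasses import dataclass

    MODERATION_LEVELS = {
        "strict": 0.2,      # hide anything even mildly flagged
        "default": 0.5,
        "unfiltered": 1.0,  # show everything
    }

    @dataclass
    class Post:
        text: str
        toxicity: float  # 0.0 (benign) .. 1.0 (worst), from an upstream classifier

    def visible_posts(posts, user_level):
        """Filter a feed client-side against the user's chosen level."""
        threshold = MODERATION_LEVELS[user_level]
        return [p for p in posts if p.toxicity <= threshold]

    feed = [Post("nice day", 0.05), Post("mild insult", 0.4), Post("slur", 0.9)]
    print([p.text for p in visible_posts(feed, "strict")])      # ['nice day']
    print([p.text for p in visible_posts(feed, "unfiltered")])  # all three

The point of running it client-side is that the platform ships everything and the user's own chosen rules decide what is displayed.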
The common denominator in platforms going to shit is scale.
Most social media platforms get their initial users by targeting a specific niche or demographic. Forums of olde typically revolved around some specific subject matter (e.g a particular game, or band, or subculture). Facebook targeted college students. Reddit targeted techies. But once the platform reaches some critical threshold of popularity, the platform strays from its vision to realize some commercial potential. The admins and moderators, in the interest of growth, try to appeal to a lowest common denominator, which ends up alienating the now-veterans, and the original purpose of the platform is diluted into obscurity.
The only systems that have figured out moderation at scale are Wikipedia and StackExchange. But see what HN thinks about that.
Nobody wants to admit that the only type of moderation that actually works at scale is an entrenched group of somewhat-expert overly-attached users gatekeeping contributions with (what looks like to the novice and sometimes even to the established user) extreme prejudice on a website with intentionally highly limited scope.
StackExchange’s moderators have a huge bias against newcomers to the field (some would say it is justified) and sometimes (though this is just my personal observation) a huge bias against those who can’t speak English well. I have noticed at times that people with high rep make rude remarks because they misunderstand what the original author had to say.
For me, I take time to edit questions with poor grammar and help people solve their problems from time to time.
I think that's the point the parent is trying to make: the only sites that have "figured out" moderation have erred almost ridiculously on the side of rejecting a ton of content, often in biased ways. The sites that do this generally seem better for it, even if it's not really fair.
Moderation that works well is generally not very nice. Content moderation that isn't somewhat cruel doesn't work very well.
Sort of, except the difference in scale is important. There are far fewer candidates in interviewing, so each false negative is more significant.
The false negatives in the content moderation process are essentially irrelevant to anybody other than the person being moderated, because there's enough other content being generated, so there's very little downside to being overzealous in content moderating. At anything other than the very largest companies, being overzealous in rejecting interviewees will severely limit your talent pool.
Ironically, HN's great moderation caused it to become very popular, which has made the task of moderating it all much more difficult, which is having a noticeable effect on discussions and which articles make it to the front page.
Reddit could change their TOS tomorrow to prevent users from moderating more than 2 subreddits if they wanted; others would take their place. But the mods of subreddits that have not been banned are advertiser friendly.
> It seems like no one has figured out a good system for moderation on the internet.
I use locals.com: lots of small, disjointed communities, where posters have to pay (or not) a small fee per month ($1 to $5), which keeps the trolls and influence campaigners away.
> At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.
The story-within-the-story here is that there is a FB whistleblower who wanted to bring this to light, not unlike other high-profile cases involving government surveillance. It amazes me that one person can wield more power than scores of seasoned journalists.
Am I right to think of it more as a partnership? It does take at least one person with insider knowledge and access. Otherwise, the reporting would lack the backing documents that brings it credibility. And companies seem to be rather opaque to purely outside sources.
Unfortunately it's not just that. A few years ago a friend reached out to a few investigative journalists with documents about wrongdoing at a big co. No response. This was at the time of the Mossack Fonseca hack when many of those same publications were calling out individuals who owned offshore companies, even when it was perfectly legal for them to do so. Maybe she was unlucky, but my anecdotal takeaway was that perhaps it simply required less work, and got more clicks than real investigative journalism.
This should have been obvious during the election, when Trump clearly violated the "don't mislead the public about how elections work" rule with his claims about postal votes.
That is clearly banned. It says so in the "community guidelines".
(side note, you should really read the community guidelines, they are a great set of rules for keeping a community vibrant and happy, assuming they are enforced....)
I can see why Facebook did it: you don't want to obviously piss off a capricious party with the power to fuck with your bottom line. It doesn't make it any better.
Huh, never thought I’d see XCheck in a news article. I used to work at Facebook and spotted abuse of this system by bad actors and partly fixed it. It’s still not perfect but it’s better than it used to be.
I think I might have agreed with the author of this article before working in Integrity for a few years. But with time I learned that any system that’s meant to work for millions of users will have some edge cases that need to be papered over. Especially when it’s not a system owned and operated by a handful of people. Here’s an example - as far as I know it’s not possible for Mark Zuckerberg to log in to Facebook on a new device. The system that prevents malicious log-in attempts sees so many attempts on his account that it disallows any attempt now. There are no plans to fix it for him specifically because it works reasonably well for hundreds of millions of other users whose accounts are safeguarded from being compromised. His inconvenience is an edge case.
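As a toy illustration of that kind of edge case (all thresholds invented; this is not a claim about Facebook's actual systems), a per-account failed-login limiter that works fine for ordinary accounts will lock out the legitimate owner of an account under constant attack, because the attack volume alone keeps the account over the cutoff forever:

    # Toy sketch: a sliding-window failed-login limiter. The constants
    # are invented; the point is that an account attacked at high volume
    # never drops below the threshold, so even its real owner is refused.
    import time
    from collections import deque

    WINDOW_SECONDS = 3600
    MAX_FAILURES_PER_WINDOW = 50  # hypothetical cutoff

    failures = {}  # account -> deque of failure timestamps

    def record_failure(account, now=None):
        now = time.time() if now is None else now
        q = failures.setdefault(account, deque())
        q.append(now)

    def login_allowed(account, now=None):
        now = time.time() if now is None else now
        q = failures.get(account, deque())
        while q and q[0] < now - WINDOW_SECONDS:
            q.popleft()  # drop failures outside the window
        return len(q) < MAX_FAILURES_PER_WINDOW

    for _ in range(5000):           # constant attack on one account
        record_failure("ceo")
    print(login_allowed("ceo"))     # False: the real owner is locked out too
    print(login_allowed("normal"))  # True: works fine for everyone else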
With XCheck specifically what would happen is that some team working closely on a specific problem in integrity might find a sub population of users being wrongly persecuted by systems built by other teams located in other time zones. They would use XCheck as a means to prevent these users from being penalised by the other systems. It worked reasonably well, but there’s always room for improvement.
I can confirm some of what the article says though. The process for adding shields wasn’t policed internally very well in the past. Like I mentioned, this was being exploited by abusive accounts - if an account was able to verify its identity it would get a “Shielded-ID-Verified” tag applied to it. ID verification was considered to be a strong signal of authenticity. So teams that weren’t related to applying the tag would see the tag and assume the account was authentic. And as I investigated this more I realised no one really “owned” the tag or policed who could apply it and under what circumstances. I closed this particular loophole.
In later years the XCheck system started being actively maintained by a dedicated team that cared. They looked into problems like these and made it better.
I think it’s valid because it’s an example of a system not working for a small minority while still working well for others. And more pertinently, there are no plans to fix it for him specifically just because he’s the CEO. It’s better to spend time making the account compromise system better for the vast majority of users instead.
The problem is not account compromise. Nobody is complaining about an inability to compromise Zuckerberg's account, or it being too hard to register a new device, or anything like that. The issue in question is the two-tier (or maybe multi-tier) system of rules that secretly exists inside Facebook, while the public materials falsely claim all users are governed by the same rules.
Or perhaps the underlying business/product model is inherently flawed in a way that's bad for society, all patches have proven woefully insufficient in mitigating that, and Facebook have been intentionally concealing this.
> I think it’s valid because it’s an example of a system not working for a small minority while still working well for others.
That's not what the article mentioned, which is why people are saying you don't seem to understand. The article mentioned there is a system, which does work for a small minority of people, not that it doesn't work for them. It's as if you unintentionally have blinders on and can't see the issue.
It is tone-deaf. You seem to be framing it as an inconvenience or problem that Zuckerberg has. It's not an inconvenience, it's a personal bodyguard that he alone has. If he changes devices, he has a team to allow him to log into it. How is that relevant?
The rest of us don't get that. If someone hacks our account and changes our password, the majority of us have little hope of really getting FB's attention to help us fix it.
The point about popular accounts getting 10000x the number of abuse reports is much more relevant.
The point isn’t that he gets special protection for logging in on new devices, it’s that he receives thousands of fake login attempts a day. It’s basically the same point: 10000x the number of abuse reports vs. 10000x the number of login attempts.
No it is not.
What if Trump or Biden could not log in to Facebook due to too many failed attempts? Would they scream that Facebook is blocking them and that FB MUST unblock them? Should FB just tell them, "Tough, you can't use Facebook, deal with it"? What if it was Ronaldo or Brad Pitt?
A company will make exemptions for very famous people.
In every case, when something grows too big, you will find that there are special cases that must be exempt from the general rules.
There are very few exceptions to this; the rule of law is usually one. Almost everything else has exemptions.
Disclaimer: I don't work and have never worked at Facebook, and I barely use Facebook. BUT I've built big systems that deal with the web and the real world.
That's interesting. Was this because of the volume of mail I/O, for scale/fanout-related performance reasons, or to simply keep responsiveness at p99 for that account "just because VIP"?
I recall twitter having special rules for accounts with a large number of followers, like keeping them on dedicated hardware or DB instances just so they could replicate them differently.
Please consider the power dynamics here. The HN community owes no favors to FB or former FB staff. When a former FB staff member logs on to post the kind of response they posted, community members may stand up and call them out on their trolling. I accept the consequences of my behavior (as my comments arguably violate HN guidelines), and am open to learning more about FB’s behavior from good-faith commenters on both sides.
The problem seems to be, though, that while the company may have tools to detect abuse, if they're choosing selectively when to enforce things, it defeats the entire point.
You haven’t really touched on the main problem discussed in the article, which is that to Facebook, there are special users - mainly celebrities and politicians - who get to play by different rules than the rest of us. Social media was supposed to help level the playing field of society, not exacerbate its inequalities.
I did touch on that problem. Like I pointed out Zuckerberg can’t log in on new devices anymore. That’s because of the thousands of attempts per second to log into his account. Those attempts happen because he’s a celebrity. His experience is objectively different because of who he is.
It’s the same with Neymar. How many times do you think his profile is reported for any number of violations by people who don’t like him? If an ordinary person’s account got 100 reports a minute it would be taken down. Neymar’s won’t be.
I don’t know how every Integrity system could be modified to make an exception for any of these classes of accounts or how to codify it in a way that would seem “fair”. If you have an idea for a better way, you should share it.
I concede that some accounts probably deserve that specific type of protection. However, it doesn’t explain the other kinds of protections these people have, including exemption from content-based rules. Those are the issues of real concern.
The fact that the content couldn’t be taken down was a bug. I feel like you, the author and most critics of Big Tech discount the possibility of bugs existing.
I’m sure you’ve read the old fable of “The Boy Who Cried Wolf.” Facebook has made voluntary decisions at the highest levels, reported over the course of the past decade, to shield certain people from its content rules that apply to the general public. So if it was a bug this time (and I have no reason to believe that it wasn’t), I’m sure you can understand people’s skepticism about it.
Moreover, I assume this bug was reported internally, probably pretty quickly. How long did it take to get fixed? If the fix wasn’t prioritized and corrected within, say, a day (along with a regression test to ensure it never happens again!), then that would be pretty damning of the company’s culture and priorities as well.
But Facebook comes across as unwilling to fix, and uninterested in fixing, the bugs in cases like these. Sometimes the bugs themselves even seem to result from fundamentally misstructuring the problem.
I'm not sure what your point really is. You keep harping on Zuckerberg not being able to log in on new devices but dismiss the entirety of the report and the internal review as "Yeah, well, at scale there's nothing you can do." If that's the case, shut it off. We wouldn't accept that from a manufacturer of a physical product.
> We wouldn't accept that from a manufacturer of a physical product.
Not sure what the correct analog for physical product is, but we accept everything up to and including catastrophic failure resulting in deaths from physical products under the right circumstances, so you're going to have to be much more specific.
"We wouldn't accept that from a manufacturer of a physical product." - why not? I would think that any manufacturer of a physical product is clearly entitled to provide better service or better product versions or better legal conditions or better pricing for some of their customers if they want. And they definitely do so for various VIPs - some people pay to wear Nike shoes, and some people get paid millions to wear Nike shoes, and we do consider that a manufacturer of a physical product has the right to do that.
There is not a trade principle of having to treat all customers equally (the sole exception being not denying service for a specific list of protected groups), and there is a general principle that people can trade with others as they please and provide different conditions for arbitrary reasons.
This may be a naive question, but isn't that a problem with the content reporting system itself? That it requires blanket exceptions?
Popular users are going to have more false-positive reports than others, but when those reports deviate from the norm on individual pieces of content (say a nude photo) then the system should still be able to pick it up. It's an exercise in feature extraction.
The login blanket exception (for the CEO of the company) is a different use case and purpose than content control, one that blanket exceptions can solve efficiently.
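A minimal sketch of that feature-extraction idea, under the assumption that per-post report rates are available (the numbers and cutoff below are invented): compare each post's report rate against the same account's own baseline, so the background noise on a famous account doesn't mask the one post that genuinely deviates.

    # Sketch: flag posts whose report rate is anomalous *for that account*,
    # using a median/MAD-based modified z-score so the outlier itself
    # doesn't inflate the estimate of the account's normal spread.
    from statistics import median

    def flag_outlier_posts(rates, cutoff=3.5):
        """rates: reports per 1k views for each recent post by one account."""
        med = median(rates)
        mad = median(abs(r - med) for r in rates) or 1e-9
        return [i for i, r in enumerate(rates)
                if 0.6745 * (r - med) / mad > cutoff]

    # A celebrity whose posts always draw ~5 reports per 1k views, plus
    # one post (index 5) drawing 40: only that post goes to human review.
    rates = [5.1, 4.8, 5.3, 5.0, 4.9, 40.0, 5.2]
    print(flag_outlier_posts(rates))  # [5]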
>If you have an idea for a better way, you should share it.
I'm certain that the other poster won't present some kind of practical outline because part-way through the brainstorming they will realize they are designing a "Social Credit Score" and become too frustrated to continue.
The solution is simple. Facebook needs at least 100,000 human moderators. Clearly their engineering team isn't able to solve the problems Facebook creates.
> Those attempts happen because he’s a celebrity. His experience is objectively different because of who he is.
Then the rules should be different, like Twitter giving a blue check mark. If there are accounts that need to be treated differently then it should be clear why and how the rules are different. Fixing problems with tech (like not allowing Zuckerberg to log in) should be the exception.
Twitter caught hell for letting Trump break their rules because they pretended the rules were the same for everyone.
Maybe normal users also shouldn't be able to have their accounts destroyed by a hundred spurious reports.
Currently if you register a Facebook account to manage a business and post ads, it will be banned off and on for weeks, and the recourse suggested to me by a director in the Integrity org was "try posting on facebook like a normal person."
> It’s the same with Neymar. How many times do you think his profile is reported for any number of violations by people who don’t like him? If an ordinary persons account got 100 reports a minute it would be taken down. Neymar’s won’t be.
More to the point, after a human reviewed Neymar's conduct and it clearly violated Facebook policies about posting revenge porn, his account still wasn't taken down. And that's not a technical issue of the false positive rate -- it is a double standard.
From the article: 'A December memo from another Facebook data scientist was blunter: “Facebook routinely makes exceptions for powerful actors.”'
(Edited: changed verb tense to make it clear that this already happened and wasn't a hypothetical.)
> If an ordinary persons account got 100 reports a minute it would be taken down. Neymar’s won’t be.
Why does this disparity exist? By your own account, the number of times this happens for normal users is significantly smaller than for high-profile users, so why is Facebook incapable of having sufficient staffing to deal with this case for all users? _This_ is what has a lot of people annoyed.
> Social media was supposed to help level the playing field of society,
Why do you think this? I mean it isn't like there was a plebiscite on what "social media was supposed to help".
As with most things of consequence in our world, social media is more of an emergent phenomenon than any sort of planned effort. We have a legislative system that is there to provide a mechanism to adapt our legal system as needed.
The real underlying issue is that high profile accounts are targeted by groups of users who "report abuse" simply because they don't like that sports team/politician/etc...
High profile accounts cannot work under identical rules or they'd simply all be suspended all the time.
You’re thinking of HotOrNot. A “face book” has been a staple of US universities for decades to help new students identify each other. They were literally printed booklets with people’s faces in it.
I actually disagree that Facebook has consistently been about ranking from the start, I think for a while in the middle it was legitimately a social media platform. But it most certainly started out as that. If you dig a bit into the history of it the primordial version of Facebook was essentially little more than HotOrNot.
I was on Facebook from almost the first day, when it was only open to college students (and the domain name was thefacebook.com). It was definitely not as you describe. There was no ranking of faces.
You can see for yourself by searching for “2005 Facebook screenshots”.
It's hard to imagine now, but at the end of the 20th century, if you weren't employed by a Newspaper or a TV network, about the only way you could let the world know your views was to write a letter to the editor of your local paper. And even if you could somehow make a friend in a far-away land, phone calls and postal mail were expensive. (I'm still sad I lost touch with students I met in Japan, Taiwan, and Indonesia in the 80's)
The internet, and social media in particular, changed all that. First only nerds could make web pages. Some, like me, published our thoughts there using raw HTML. Then Movable Type allowed anyone with an FTP site to publish. Then Blogger allowed anyone with a browser to publish. Then Flickr and Facebook and Twitter and all the rest. It was an exciting time.
I hope this helps explain why we thought this would "level the playing field." What we read and what we watched was no longer dictated by TV network bosses or editorial boards. Governments could no longer demonize people in other lands, because we were all free to (for example) be Facebook friends with those people. At least that was the theory. As I mentioned in another comment, it sure hasn't worked out that way.
> It's hard to imagine now, but at the end of the 20th century, if you weren't employed by a Newspaper or a TV network, about the only way you could let the world know your views was to write a letter to the editor of your local paper.
When you compare the profits of newspapers and TV stations to FB, it becomes obvious why so many media mogul billionaires are pushing for harsh regulations for Social Media...
> As I mentioned in another comment, it sure hasn't worked out that way.
This expectation was indeed common in tech circles in the 00’s “web 2.0” days, and it didn’t seem ludicrous. Removing government and corporate gatekeepers (like newspapers and TV networks) meant that disenfranchised voices could finally be heard —anyone could have a blog or whatever and be heard. It wasn’t crazy to think that if only everyone in the world could finally talk to each other that we could work out differences and make friendships across political and geographical boundaries.
That was the hypothesis. The worldwide experiment that is still running seems to have falsified it.
> This expectation was indeed common in tech circles in the 00’s “web 2.0” days, and it didn’t seem ludicrous.
I heard people voice similar expectations about roughly equally ludicrous categories of online services then, but never social media as such. Most of them were ludicrous for reasons that were obvious at the time, and apply equally to social media:
(1) In the short term, the digital divide was acute, and any benefits they brought would naturally increase inequality across that divide.
(2) In the longer term, where one might presume the digital divide would erode, they overlooked that, while the services involved were generally still in the venture-subsidized, artificially underpriced/undermonetized phase, any plausible business model would either promote inequality by narrowing reach to an elite, promote inequality by creating sharply tiered service, or (most commonly) create a sharp division between a broad class of users whose engagement is marketed to moneyed interests and the moneyed interests buying their eyeballs. Any prediction of resolving inequality rested on assuming that venture subsidies and monopoly-building dumping would be converted into a permanent state out of charity.
We may then just have different scopes for what counts as "Social Media" and what are other "categories of online services". The wikipedia definition seems good: https://en.wikipedia.org/wiki/Social_media
> Most of them were ludicrous for reasons that were obvious at the time, and apply equally to social media:
It seems post facto to conclude that they were obviously ludicrous at the time. Perhaps with hindsight it seems as ludicrous as a belief in alchemy in the middle ages, but it wasn't obvious that you couldn't turn lead into gold before we had chemistry. In the Web 2.0 era lots of smart people thought social media could make the world better. I raised money and founded a "social media" startup expressly thinking it would empower people, and many of my peers in that world were equally earnest.
Facebook’s own mission statement is “to give people the power to share and to make the world more connected” (emphasis mine). And if you were there when Facebook was founded, as I was, before celebrities and politicians were accommodated by them, you would have felt very empowered indeed.
I think people that work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once it did, the care that people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.
You have to remember that high profile accounts get 10000x the number of abuse reports that a normal account gets - nearly all bogus. The normal automated moderation functions simply do not work.
Many users will simply mash the "report abuse" button if they see a politician they don't like, or a sports player for an opposing team.
If the normal rules applied identically to everyone, all high profile accounts would simply be inactive in perpetuity.
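One hedged sketch of how identical rules could still apply to everyone: trigger human review on reports per impression rather than raw report counts, so fame alone doesn't trip the alarm. The threshold below is invented for illustration.

    # Sketch: normalize the review trigger by audience size instead of
    # using absolute report counts. The 1% threshold is hypothetical.
    REVIEW_RATE = 0.01

    def needs_review(report_count, impressions):
        if impressions == 0:
            return False
        return report_count / impressions > REVIEW_RATE

    # 100 reports on a post seen by 2,000 people is a strong signal...
    print(needs_review(100, 2_000))        # True
    # ...but 100 reports on a post seen by 40 million people is noise.
    print(needs_review(100, 40_000_000))   # False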
Maybe a better system would penalize reporters that are found to have reported content that does NOT violate content policies?
Nobody is complaining that Facebook has exceptions for automatically suspending accounts just because people are misreporting them as abusive. The issue is, and has always been, exceptions to the content-based rules which are supposed to apply equally to everyone.
So if hundreds of people reported the content posted by a celebrity, or if a classifier misfired on the content posted by a celebrity, there shouldn’t be any protection for the content?
The content should be evaluated according to the same set of the rules that apply to everyone else. If the content classification rules are properly implemented, no amount of button-smashing should result in a different outcome.
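A sketch of that separation (the rule check here is a placeholder): report volume decides only how soon content gets looked at, while the verdict depends on the content alone, so mass-reporting can raise priority but never change the outcome.

    # Sketch: reports set review *priority*; the rules decide the *verdict*.
    # The rule set is a stand-in for whatever the real content policy is.
    def violates_rules(post_text: str) -> bool:
        # Takes only the content, so report counts cannot sway the verdict.
        banned_phrases = ["revenge porn link"]  # placeholder rule set
        return any(p in post_text for p in banned_phrases)

    def handle_reports(post_text: str, report_count: int):
        priority = min(report_count, 1000)  # more reports -> reviewed sooner
        verdict = "remove" if violates_rules(post_text) else "keep"
        return priority, verdict

    # Mass-reporting an innocuous post raises priority, not the outcome:
    print(handle_reports("my team won!", 5000))    # (1000, 'keep')
    print(handle_reports("revenge porn link", 3))  # (3, 'remove')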
If you know how to build a bug-free system with content classification rules that are properly implemented for 2 billion people in every conceivable circumstance, I think Facebook (or any other big social media company) would pay you a lot of money to implement that. Like, an unbelievably large sum of money.
That's beside the point. If your classification system isn't good enough to use on celebrities, it's not good enough to use on regular people either - bans are just as annoying for them, even if they have less voice to complain.
I'm not sure what the point here is other than that bans are annoying? Also, I don't think it was suggested to use a classification system with no bans?
People aren't objecting to the fact that the rules misclassify people sometimes. They're objecting to the two-tier system that lets celebrities avoid bans but doesn't let regular users do so.
I’m not sure that this is a problem that can be solved solely through software and automation. It’s a business problem. Automation is a means to solve this problem - and the one Facebook uses to save money and operate at scale - but when it comes to making content judgments, people are often better than machines at doing it. “Report” buttons can be an input into the mechanism, but they need not be authoritative.
The problem for Facebook is that adding people is going to be expensive and hurt their bottom line. So they are likely incented to treat these disparities as collateral damage on the road to ever-increasing profits.
So are you saying you could develop the solution but Facebook won't offer you enough? I don't think it makes a difference whether the system is automated or not.
No, I’m saying Facebook could probably solve this problem with the managers they already have, but with different priorities set by top management, and at greater expense.
Sorry but that seems to be understating the issue. Are you saying it's only a matter of changing the entire company's priorities and increasing the budget by large amounts? If it were that easy then it seems any other wealthy company would have solved it by now.
And in any case, have you tried applying at Facebook? If you have the answer to their problems and are capable, then why not?
I’m not particularly interested in solving Facebook’s self-created problems. Fifteen years ago, I was interested in helping them from a tech growth side, but I can’t imagine working there now, especially in light of the company culture. My priorities would clash with those of the existing upper management all the way up to the CEO (who is completely unchecked by the Board, BTW).
As for this particular issue, I consider celebrities and politicians who stir up trouble largely as fat to be cut. Anyone who builds that much gravity and controversy tends to cause more problems than add value, and detracts from democratizing the platform.
Well I think that seems to answer it then -- if nobody who is capable wants to work there for any amount of money, then nothing will change, and the self-inflicted problems will probably get worse. And I believe the smart thing to do from a business perspective would be to severely limit the reach of posts by problem users and then charge them increasing amounts of money to promote posts to the decreasing pool of users who won't block it.
Sure, I'll do that. It's not as hard as you make it sound. Spam classifiers work fine for example. Nobody complains about special exemption lists for the Gmail spam filter because it doesn't need any. The term spam is not precisely defined but a strong social consensus exists on what the word means and that's sufficient for voting on mail streams to work. So I'd just build one of those and collect my big pile of cash.
But that's not the sort of content classification you mean, is it.
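(For the curious, a toy version of the kind of consensus-driven spam classifier described above: a naive Bayes filter trained on user "mark as spam" votes. The training data is invented, and a production system would need far more features, but the mechanism is the same.)

    # Toy naive Bayes spam classifier trained on user votes.
    import math
    from collections import Counter

    def train(examples):
        """examples: list of (text, is_spam) pairs from user votes."""
        counts = {True: Counter(), False: Counter()}
        totals = {True: 0, False: 0}
        for text, is_spam in examples:
            for word in text.lower().split():
                counts[is_spam][word] += 1
                totals[is_spam] += 1
        return counts, totals

    def is_spam(text, counts, totals):
        score = 0.0  # log-odds with a flat prior
        vocab = len(set(counts[True]) | set(counts[False]))
        for word in text.lower().split():
            p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
            p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
            score += math.log(p_spam / p_ham)  # Laplace-smoothed likelihoods
        return score > 0

    model = train([
        ("buy cheap pills now", True),
        ("limited offer buy now", True),
        ("lunch at noon tomorrow", False),
        ("meeting notes attached", False),
    ])
    print(is_spam("buy pills", *model))         # True
    print(is_spam("notes from lunch", *model))  # False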
So the problem isn't technical. Rather it's an entirely self inflicted problem by Facebook. The cause is an attempt to broker a compromise between the original spirit of the site and internal activists who are bent on controlling information and manipulating opinion. The reason they make so many bad judgement calls is because they're not trying to impose clearly defined rules, but rather, ever shifting and deliberately ill-posed left wing standards like "misinformation" or "hate speech". Because these terms are defined and redefined on a near daily basis by ideological warriors and not some kind of universal social consensus, it is absolutely inevitable that the site ends up like this. A death spiral in which no amount of moderation can ever be sufficient for some of their loudest employees simply cannot be avoided once you cross the Rubicon of arbitrary vague censorship rules.
If Facebook want to reduce the number of embarrassing incidents, they can do it tomorrow. All they have to do is stop trying to label misinformation or auto-classify posts as racist/terrorism-supporting/etc. Stand up and loudly defend the morality of freedom of speech. Refocus their abuse teams on auto-generated commercial spam, like webmail companies do, and leave the rest alone. This isn't hard, they did it in the early days of the site. It may mean firing employees who don't agree, but those employees would never get tired of waging war against opinions they don't like no matter how much was spent on moderation, so it's ultimately futile trying to please them.
I'm sorry, I am having a really hard time understanding this comment. You're suggesting they just apply a email spam filter and that will fix everything? Can you explain how this would work, considering that Facebook is not an email service, and any spam filter would likely face the same issues with abuse that have already been mentioned anyway? Also I really don't follow how your last two paragraphs have anything to do with that, it seems you're saying a lot of unprovable and out-of-context politically motivated things, and then saying they should just give up on the problem entirely and have no moderation whatsoever? Isn't that the whole thing you're trying to solve though? I don't get it. Please avoid the off-topic political comments if you want to make your arguments easy to follow, and just focus on what the solution is and how to get there. Rule of thumb I've noticed: if a comment could be viewed as a political rant, it's probably not going to be very convincing.
Spam filters don't only apply to emails and Facebook already has one, of course, to stop fake account signups and posting of commercial spam. Nonetheless spam filters are a form of content classification. Yes, I'm saying that Facebook should just "give up" and stop trying to do political moderation on top of their pre-existing spam classification. This is not radical. For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine. It was during this period that Facebook grew like crazy and took over the world, so clearly real users were happy with the level and type of content classification during this time.
You ask - isn't moderation the problem they're trying to solve. This gets to the heart of the matter. What is the actual problem Facebook are facing here?
I read the WSJ article and the quotes from the internal documents. To me the articles and documents are discussing this problem: their content moderation is very unreliable (10% error rate according to Zuck) therefore they have created a massive whitelist of famous people who would otherwise get random takedowns which would expose the arbitrary and flaky nature of the system. By their own telling this is unfair, unequal, runs counter to the site's founding principles and has led to bad things like lying to their own oversight board.
It's clear from this thread that some HN posters are reading it and seeing a different problem: content moderation isn't aggressive enough and stupid decisions, like labelling a discussion of paint as racist, should just apply to everyone without exception.
I think the actual problem is better interpreted the first way. Facebook created XCheck because their content moderation is horrifically unreliable. This is not inherent to the nature of automated decision making, as Gmail spam filtering shows - it works fine, is uncontroversial and makes users happy regardless of their politics. Rather, it's inherent to the extremely vague "rules" they're trying to enforce, which aren't really rules at all but rather an attempt to guess what might inflame political activists of various kinds, mostly on the left. But most people aren't activists. If they just rolled back their content policies to what they were seven or eight years ago, the NYT set would go berserk, but most other people wouldn't care much or would actively approve. After all, they didn't care before and Facebook's own documents show that their own userbase is making a sport out of mocking their terrible moderation.
Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments". There could be nothing more on-topic, as you must know from reading the article. The XCheck system exists to stop celebrities and politicians from being randomly whacked by an out of control auto-moderator system. A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents. You can read the article for free if you sign in - that's how I did it.
> For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine.
I’m not sure that I would consider any site that contains virally-spreading racism and falsehoods to be “fine,” but that’s just me, I guess. Even HN is full of BS, but at least it’s contained to comments and can’t be republished with the click of a button.
Yes, but I'm a classical tech libertarian so my definition of fine is "it made lots of users happy". If you think Facebook is filled with bad content you could just not go there, obviously.
Facebook's problem is that it has given in to the activist class that feels simply not going there is insufficient. Instead they feel a deep need to try and control people through communication and content platforms. This was historically understood to be a bad idea, exactly because the demands of such people are limitless. "Virally spreading falsehoods" is no different to the Chinese government's ban on "rumours", it's a category so vague that Facebook could spend every last cent they have on human moderators and you'd still consider the site to be filled with it. Especially because it's impossible to decide what is and isn't false: that's the root cause of this WSJ story! Their moderators keep disagreeing with each other and making embarrassing decisions, which is why they have a two-tier system where they try and protect famous people who could make a big noise from the worst of their random "falsehood detecting machine".
I'm also baffled by this comment. Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there? How are they "controlling people" if you acknowledge that going there is voluntary? Also what if their moderation is what is making a lot of people happy? I really don't get what your complaint is, please be more clear.
Edit:
>it's impossible to decide what is and isn't false
I can't agree with this, all organizations have to decide at one point what is false and what isn't, otherwise they have nothing to act on. It would be more convincing if you could suggest ways their moderators could resolve these disputes and determine what is actually true, because what you're suggesting sounds to me like they should just decide that everything is false all the time.
"Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there?"
Sure, and I don't! But this thread isn't about my problems or even your problems, it's about Facebook's problems.
"How are they controlling people if you acknowledge that going there is voluntary?"
People go there because they think they are seeing content shared by their friends, by people they follow and so on. In fact they are seeing a heavily filtered version of that designed to manipulate their own opinions. That's the whole point of the filtering: with the exception of where they label articles as 'fact checked', content that is politically unacceptable to them just vanishes and people aren't told it happens, so they can remain unaware of it. Like any system of censorship, right?
"all organizations have to decide at one point what is false and what isn't"
No, they have to decide that for a very narrow range of issues they've chosen to explicitly specialize in, and organizations frequently disagree with each other about what is or is not true despite that. That's the entire point of competition and why competitive economies work better than centrally planned economies: often it's just not clear what is or isn't true, and "trueness" may well be different depending on who is asking, so different firms just have to duke it out in the market. Sometimes an agreement emerges and competitors start copying the leader, sometimes it never does.
Facebook has a disastrously collapsing system here because they are not only trying to decide what's true in the narrow space of social network design, but trying to decide what's true for any possible statement. This is as nonsensical as GOSPLAN was; it can never work. Heck they aren't even a respected authority on the question of what's true in social networking anymore, that's why they had to buy Instagram and WhatsApp. Their competitors beat them. To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.
I don't understand what you mean, that's not censorship. To use an analogy, when you tell everyone you don't like Star Wars, your friends will likely decide to only talk to you about Westworld and not talk to you about what happens in The Mandalorian. Censorship would be if someone was actively trying to stop everyone from talking about Star Wars, which is not what is happening. I would advise against using that word without clear proof of it, as it's misused quite often. Also I don't understand why you were previously encouraging spam filtering but now seem to be against any kind of filtering?
>trying to decide what's true for any possible statement
Any possible statement that happens on their platform, yes. That's generally how it works when you have a company and I don't see how Facebook is doing anything out of the ordinary here -- any company can fire employees/customers for lying. If they know there are lies being posted on their site, it's perfectly reasonable to delete them. In fact I wish they would take more effort to delete more lies and falsehoods, the website is unusable when everybody is caught up in discussing a lie and doesn't want to hear the truth.
>To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.
I really can't agree. In general, we know what's true for politics: it's what the elections and the courts decide, at least in the US anyway. By design those are the authoritative sources. There is no authoritative source for science but with that you can verify the accuracy of any statement by testing it yourself and reproducing (or not reproducing) the results, that's the whole point. I don't see why it would be arrogant of a company to do this, as it's what a company is supposed to be doing in order for the system to work.
That's just silly. Some facts are incontrovertible. Do you really think there's any room for disagreement that the earth is round, or that water freezes at 0 degrees C?
> One man's racism is another man's plain speaking.
There's no room in this world for racism, and you shouldn't be trying to defend it here - or anywhere.
>For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine
>Gmail spam filtering [...] works fine, is uncontroversial and makes users happy regardless of their politics
>an attempt to guess what might inflame political activists of various kinds, mostly on the left
This is more of what I meant: I find these arguments unconvincing; if you're going to make these claims convincingly then you need to show proof of them. Remember we're talking about two billion users here. Your post does not read like an actual attempt to solve the problem but instead an attempt to attack "activists on the left" and "the NYT set", and I don't even know who you're talking about or what that's supposed to mean in this context; I would advise against making those types of statements. It would be more convincing to mention a specific person or persons, what their claims are, and what you disagree with and why.
>Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments".
This is false, there's no personal attack, I'm saying your comments are unrelated to the argument and are not convincing to me. If you're confused about this, a personal attack would be something along the lines of "X person is a bad person because they follow Y political persuasion" which your comments could be construed as doing about Facebook's employees. So please make sure not to do that. This is again why I highly suggest against making this a political argument, usually it results in someone getting defensive, when there's no reason to do that.
If you want me to put this more frankly: I don't care about your politics or facebook's politics, that's not an interesting subject to me. It's nothing personal against you or facebook. Just talk about the problem please, otherwise I don't want to continue this discussion.
>A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents.
This also seems not really related to the argument. Sure it affects politicians but as has been established, that's a side effect of the way the system has been designed. I don't think it makes a difference whether the side effect was intentional or not.
I really wonder if we read the same article. Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians. I don't understand why you find this somehow irrelevant or compartmentalizable. If you don't care about politics then this whole article and thread about it probably just isn't for you, because the concerns are fundamentally political to their core. In fact the criticism of XCheck is an egalitarian one, that the rules for the famous/ruling classes should be the same as for the plebs.
To flesh this out, look at the examples in the article where moderation decisions split or were reversed - that's the problem XCheck is designed to solve. Most are about politics:
- Saying "white paint colors are the worst" was classed as racism. Trying to define and then scrub all racist speech from the world is a left wing policy.
- A journalist who (we are told) was criticizing Osama bin Laden was classed automatically as supporting him, and then human moderators agreed. Scrubbing this sort of thing is (in the USA) historically either a right wing or bipartisan consensus. We don't know why the moderators agreed because the comments themselves are not presented to us. This was later overridden by higher ups at Facebook.
- Split decisions over Trump's post saying "when the looting starts the shooting starts" that got escalated to Zuckerberg himself.
- "The program included most government officials but didn't include all public candidates for office".
etc. If you try to ignore politics, the entire debate becomes meaningless, because indeed, you would not even perceive any problem with unequal enforcement in the first place.
>Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians
This again would be a side effect. Obviously it's not irrelevant but I have seen no reason to prioritize that over anything else.
>because the concerns are fundamentally political to their core
If by political you mean "needs to be solved with politics" then I can't necessarily agree; this is also a technical problem. To put it another way, it wouldn't just magically get fixed if you elect some new congresspeople or replace Zuckerberg; the new people would still have to take additional technical steps to fix the problem. If the solution already exists and is being ignored then I would agree, but I have seen no solution offered in this thread. Instead of trying to present the solution, which I have asked you to do repeatedly, it seems you're still trying to make this a political argument, which I wish you wouldn't do. I don't know how many times I need to say that it's not convincing to me to come at it from that angle. I'm sorry if that seems rude but it's the truth of the matter. If your end goal is to campaign for me to vote for somebody, please stop, I'm not interested to hear it (again, nothing personal).
On the rest of your comment: I honestly have no idea what your examples are supposed to mean, why you are making these assumptions or why the political motives or policies of some other parties matters. There will always be some users that take issue with any kind of filtering and it makes no sense to me to prioritize ones who happen to have adopted something as a political position at some arbitrary point in the past. If your issue is "unequal enforcement" then can you please elaborate on how a different kind of filtering would help with any of these examples? Why would a different system result in not needing to step in and reverse controversial decisions? I asked for proof of this a few posts ago and you didn't give any.
Having a person spend a minute looking at something that has garnered a high number of reports is reasonable, instead of just ignoring reports because it's a celebrity. It doesn't have to be automated. Edge cases can be expensive.
This is exactly the solution. If tons of people are reporting Steve Newscaster because he posted a status about how his team won, the answer isn't to make him immune to moderation; it's that those people should lose the privilege of having their reports heard.
Just send them a little message saying "Hey, you've falsely reported a bunch of posts/accounts lately, so we're restricting your ability to report content for 30 days." and if they keep doing it, make it permanent.
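Roughly, the policy could look like this (a minimal sketch; the thresholds and names are invented, not anything Facebook actually uses):

    # Hypothetical false-report penalty (names/thresholds invented).
    from dataclasses import dataclass

    FALSE_REPORT_LIMIT = 5          # strikes before a restriction kicks in
    TEMP_RESTRICTION_DAYS = 30

    @dataclass
    class Reporter:
        false_reports: int = 0
        temporarily_restricted: bool = False
        permanently_restricted: bool = False

    def record_report_outcome(r: Reporter, was_violation: bool) -> None:
        """Update a reporter's standing once moderators resolve their report."""
        if was_violation:
            return                  # accurate reports carry no penalty
        r.false_reports += 1
        if r.false_reports >= FALSE_REPORT_LIMIT:
            if r.temporarily_restricted:
                r.permanently_restricted = True   # repeat offender
            else:
                r.temporarily_restricted = True   # e.g. for TEMP_RESTRICTION_DAYS
                r.false_reports = 0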
That might not work. In the eternal September of Facebook there may always be enough new accounts to continuously file false reports against high profile accounts.
Yeah but you can just buy warmed-up verified accounts on SEO marketplaces, often several years old and patiently curated to be blandly inoffensive and ready to be turned into whatever you need.
Still, having to do that greatly reduces the problem. The average person who wants to censor a politician they hate isn't going to spend money to buy an account.
Another potential mitigation might be to put a limit on the number of posts that a single user can flag in a day. At some point, the cost of large-scale content manipulation could be made to exceed the expected gains.
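Something like this, perhaps (again just a sketch; the cap value is an arbitrary assumption):

    # Hypothetical per-user daily report cap (the number is arbitrary).
    from collections import defaultdict
    from datetime import date

    DAILY_REPORT_CAP = 10
    reports_today = defaultdict(int)   # (user_id, date) -> count

    def try_report(user_id):
        key = (user_id, date.today())
        if reports_today[key] >= DAILY_REPORT_CAP:
            return False               # cap hit; mass flagging gets expensive
        reports_today[key] += 1
        return True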
It may even be profitable for Facebook to crack down on this. Every celebrity post that gets illegitimately taken down has potential for showing ads to millions of people.
Here's the thing: I can think of multiple ways this could be abused or made controversial in less than 30 seconds. You probably can too.
Actual hostile actors will have years to find ways to use the rules to create controversy and shut down dissenting posts.
If people learn they get flagged after N spurious reports, they'll start rationing their reports so they don't meet the threshold; they'll start making inflammatory posts that technically respect the rules to bait false reports. They'll create scandals, e.g. "Person X said awful thing Y, but when I tried to report them Facebook told me I wasn't allowed to report people anymore. Why does Facebook support Y?"
That's not to say your idea is bad. Just, you're making it sound really easy, when it's a problem Facebook has poured millions into without finding many solutions.
Right. Very likely they wouldn't even be capable of instructing their 60 million fake, AI-managed profiles to 'like' Zuckerboi's own sentence, for that matter...
> high profile accounts get 10000x the number of abuse-reports
Has anyone considered the possibility that this is a signal from the non-elites that something is wrong? That ignoring this "mass downvote" is the essence of the structural elitism?
Popular users get lots of eyeballs on their content. If an average post will get 1 report per 10k views, a popular post with 10m views will get 1000 reports. It doesn't have to have a deeper meaning.
Well, in that case, why not make the metric reports-per-view? If you make the metric a rate, then it doesn't matter whether a post gets 10k views or 10m views; the question becomes "what % of viewers thought this was worth a report?"
The rate can still be (and probably is) higher for high-visibility accounts of course but in the example you gave the rate of reports is the same and the problem is using a naive "10 reports = ban" metric.
Brigading is likely a detectable pattern with enough data. Sure, it'd be hard to distinguish between residents of some chan brigading their enemy and somebody being the target of public shaming due to a cancel campaign, but in my eyes that's a feature.
FB knows how many eyeballs there were. Their whole business is counting eyeballs. So they can easily teach whatever robot they use to take eyeball counts into account, and give more weight to "reports per view" than to "absolute number of reports".
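A toy version of that weighting (the thresholds are illustrative guesses, not anything FB actually uses):

    # Rate-based signal instead of a raw report count (thresholds are guesses).
    def needs_review(reports, views, rate_threshold=0.001, min_reports=10):
        """Flag content when the fraction of viewers reporting it is high."""
        if views == 0 or reports < min_reports:
            return False               # too little data for a meaningful rate
        return reports / views >= rate_threshold

    # The example upthread: 1 report per 10k views is the same rate whether
    # the post has 10k views or 10 million, so neither trips the threshold.
    print(needs_review(1000, 10_000_000), needs_review(1, 10_000))  # False False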
Facebook is already doing that. It just happens to be making choices about how to do it that maximize its profits while ignoring the voices of most of its users.
I would rather they maximize profits than decide to actually take a stance and manipulate society. At least I know where I stand with a greedy corporation.
Sure, but at least they're money-grubbing rather than pursuing hidden political agendas. I think Facebook could be much more evil if it decided to forego advertising profit.
Think literally selling elections to foreign governments as a business model, but in the open.
In addition, maybe a better system would also increase the effort needed to file a report. Calling in and leaving a voicemail message in response to specific questions, for example.
> You have to remember that high profile accounts get 10000x the number of abuse-reports than a normal account - nearly all bogus. The normal automated moderation functions simply do not work.
You would think with the literal legion of geniuses Facebook has they would have a smarter way of handling reports than simply counting them and taking down content that receives over X reports.
>Maybe a better system would penalize reporters that are found to have reported content that does NOT violate content policies?
This might work if the response to a report wasn't so arbitrary. I've been given bans for using heated language, yet had comments that were just as heated AND making direct threats at people marked as not violating any rules when I made the report.
Let me try to explain it again. Suppose an integrity system has a true positive rate of 99.99%. That would be good enough to deploy right? Except that when applied to millions of accounts, 0.01% is still a massive number of people. This is even worse when those people are unusual in some way. For example they might open conversations with hundreds of strangers a day for good reasons. But their behaviour is so similar to those of abusive accounts that they get penalised.
You might say that maybe 99.99% isn’t good enough and the engineers should try for more 9s. Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
Your concerns about different treatment for some people are valid. But again, their experience is different. For example, if an account or a piece of content is reported by hundreds of people, it'll be taken down. After all, there's no reason for accounts in good standing to lie, right? Except celebrities often are on the receiving end of such campaigns. There need to be exceptions so that such rules aren't exploited in this manner.
>"After all, there's no reason for accounts in good standing to lie, right?" No reason is different from no good reason.
Also, 99.99% only seems like a high number when we think anecdotally, not statistically. For anything at Facebook's scale, the number of nines should be increased! Because .01% of the actions of three billion people on a single day gives you a city roughly the size of Tampa, Florida (~300,000 people).
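The arithmetic, spelled out:

    users = 3_000_000_000
    error_rate = 0.0001            # the 0.01% left over from 99.99%
    print(users * error_rate)      # 300000.0 -- roughly Tampa, Florida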
Given Facebook's financial resources, it should be no problem to increase the size of the team working on the tool. Like any engineering problem, the problem can be broken down into smaller parts, the edge cases can be caught and/or anticipated, creative solutions can be applied, etc.
If history teaches us anything, it's that all public facing systems will be exploited. Those who design them should anticipate this.
(Thanks, by the way, for posting about your perspective. It looks very different to those of us on the outside.)
> Given Facebook's financial resources, it should be no problem to increase the size of the team working on the tool. Like any engineering problem, the problem can be broken down into smaller parts
And I worked for 4 years on one such small part (internally called UFAC), trying to help potential false positives of such a system.
As for a classifier with a true positive rate of 99.99999%, I don't know much, but I don't think it's possible. If there's someone out there who might know, they should say so.
> You might say that maybe 99.99% isn’t good enough and the engineers should try for more 9s. Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
Hire a large human moderation team. Facebook can afford to. They choose not to.
You're not the only person who has suggested this. Let's think about that for a second. Let's say it takes 6 minutes for a moderation team to review an older account. There are 2 billion accounts, and it'd be good to review all of them, so it would take about 200 million hours. Presumably you'd want to re-review positive cases so no moderator has too much power: additional time. Even if Facebook literally doubled its headcount and hired 50,000 people overnight, they would still take 2 years to complete the review. And in that time it's possible that previously benign accounts turn abusive.
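Spelling out those back-of-envelope numbers:

    accounts = 2_000_000_000
    hours_total = accounts * 6 / 60                     # 6 min each = 200M hours
    moderators, hours_per_year = 50_000, 2_000
    print(hours_total / (moderators * hours_per_year))  # 2.0 years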
And then think about the 20 million odd new accounts that are created every day. How long before each of those are reviewed? And what signals will you use to review them? These are mostly empty accounts, so there's not much to go on.
And that's just the problem of aged fake accounts. How about bullying, harassment, nudity, terrorism, CEI and all the other problems?
It's interesting talking to people who say "oh that problem is easy to solve, just do X" without realising that the problem is more complicated than it looks.
They didn't say that: none of their comments seem to be defending Facebook. They are giving their opinion that human moderation is not a simple solution. I super appreciate nindalf's comments here. It is a shame that an ex-developer who knows the problem space and is clearly explaining some of the issues is getting flamed by association.
If human moderation won't work, and whatever they're doing now is an unqualified disaster, then what is the solution?
Oil companies tell us that oil spills and pollution and ruined ecologies and the burning planet are just part of using oil. Sad face.
They're doing their very best to minimize the negatives. They hire the very best lobbyists and memory hole as much as possible and donate to some zoos and greenwash and "recycle".
What more could they possibly do?
Really, what do you expect? Stop using oil?! Please. That's crazy talk.
More seriously, I'm not saying that Facebook is an unmitigated evil, that their biz is the moral equivalent of trafficking (humans, arms, drugs, toxic waste), or that humanity would be better if it had never existed.
I'm only asking why they continue to create a mess that they are incapable of cleaning up, by their own admission.
--
I understand these questions are for The Zuck, The Jack, and their cast of enablers (profiteers) like Thiel, Andreessen, etc.
If you're not morally comparing Facebook to oil companies & toxic polluters, why do you constantly analogize them to Facebook and describe their memory holes & ruined ecologies & the burning planet as if that's comparable to what Facebook is doing? Where does "unqualified disaster" even come from?
Toxic waste companies engaged in the conduct you described and had the impact you described and they were condemned after we found solid evidence that they were doing so. Do you have any argument or evidence whatsoever that Facebook is behaving similarly? If you want to argue the moral equivalency, argue it. Don't spin an evocative narrative about the disasters of the oil industry in the same breath as Facebook's moderation policies then disclaim it "I'm not saying that...".
> It's interesting talking to people who say "oh that problem is easy to solve, just do X" without realising that the problem is more complicated than it looks.
At no point did I state that the solution was easy. My response was to your claim that you do not know of any possible solution, not an easy solution; to wit, you invited input:
> Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
I also don't follow your examples. Why are you tasking this hypothetical team to review all two billion accounts? The main issue at hand seems to be lack of sufficient staffing to review reported accounts. Why not start there?
Facebook can afford to hire 100,000 moderators. That would let them review every account every year: 100,000 * 2000 hours a year is 200 million hours. They don't actually need to do that, so they can have multiple moderators review some accounts instead.
That 2000-hour figure accounts (approximately) for two weeks of annual leave but doesn't budget for illness or other factors. I normally go with roughly 200 work-days (socialist Europe here, so I start from a baseline of five weeks of annual leave), which gives 1600 worked hours per person, taking it to 160 million hours. Still, plenty.
> if an account or content is reported by hundreds of people it’ll be taken down.
This is an extremely simplistic view which is extremely prone to obvious abuse; I really hope FB does much better than this. With the obsessive surveillance FB conducts on pretty much everything it can lay its paws on, and then some, they could do much better than just counting abuse reports. Much, much better. If they really put some smart people on it - like the behavioral-science PhDs they surely can afford, and surely already use to figure out how to better sell ads - for a couple of years, I'm sure they could arrive at something better than "if it gets more than N reports, shut it down". If they don't, it means they don't care enough.
> After all, there’s no reason for accounts in good standing to lie right?
Of course there is. Politicians lie all the time. The MSM lie all the time. Why wouldn't regular people lie all the time? Of course they do, for a myriad of reasons. And since there's no punishment for lying, there's zero incentive for them not to.
> There needs to be exceptions so such rules
No, there need to be better rules. If your rules suck and you have to make exceptions for the nobles, they still suck for the peasants. Making it easy for friends of Zuckerberg is not fixing it; it's just sweeping it under the carpet and leaving it to rot.
It's not a problem that there's a different technical solution for high-profile users. There's no problem with FB hiring more (or more skilled) moderators for higher-profile users.
The problem is when the rules are not applied evenly, especially when high profile users with greater audience can abuse those rules.
Meh, we already live in a world with one set of rules for the peasants and another for the nobility. Seems like just another area where Facebook reflects the real world.
I agree with the perspective of peasants and nobility being at play...
Remember back in the day when a friend got a new game console and invited you over to check it out, with the idea that you'd get to try it, but they only had one controller? Really, all they wanted to do was have you come and watch them play, and you sat there until you got bored of never having a chance to engage with it? That is the modern social media experience.
They play with your ability to be visible even to the people that follow you for updates on your posts. The only way for non-elites (people not deemed worthy of ranking) to be seen is to pay for ads, which appear as lower-quality "promoted" content.

The model of social media started with everyone on the same playing field, but there are so many dimensions that can be manipulated to keep users thinking it is functional while these sites change to serve the purpose of generating revenue for partners and paying interests, underneath a façade of being fair communities. If you speak out against them, they censor you as well, all behind the scenes.

It's simply better to go back to creating independent sites, and then to hope you get ranked fairly on Google and that people bookmark you... We become powerless when we allow corporate control of our communication, because governments are neither aware of nor vigilant to the impacts, and don't regulate social sites until it's far too late, and because profit is king in that world over simply doing what is fair and positive. The business model feeds misinformation, chaos, disharmony, and conflict, just like reality TV does now. Why? Because it keeps people glued to their screens.
Even these platforms are terrified of instituting positive changes out of fear of losing their market share and user base... Overnight, Twitter, Facebook, IG, or any of these sites can lose their user base and reach, just like Clubhouse and Vine... That has to be said as well.
A big problem is that many accounts that post sensationalized (violent, graphic, sexual) content are really run by people who are building follower counts to sell later on - a symptom of heavily limited organic account growth (the ability to get followers) on these platforms... People build accounts by posting wild content and then sell them on the black market to others, who start out looking like popular individuals because the accounts come with followers already included. Being successful on social media is no longer about having quality content; it's about how much you pay and how professional your image is... No wonder classism has taken hold on it all.
There are still plenty of ways of maintaining fairness on any of these communities/platforms, and company leadership needs to go back and review the original promises they made to everyone in order to build their current user base (promises that they've all now totally broken) and fix those issues as the primary basis for resetting their flaws and oversight.
Most of your response treats the service and its flaws as an engineering problem, whereas the ramifications in the real world aren't something Facebook gets to absolve itself from. They need to own the problem completely. If they can't solve the issue through engineering, it is their responsibility to hire hundreds of thousands of moderators.
What makes you think this is a unique case? What about people who suddenly come to fame, like viral video subjects?
A simple solution is to disallow logins from new devices entirely, with the attempt silently dropped so you are not bothered, unless you do some magic like generating a one-time key to complete the procedure on the new device.
I could think of a lot of people that would find it useful.
Or allow setting up a 2FA token (other than a mobile number) correctly.

Instead, what FB does is make it impossible to secure your account, because they insist that whatever you do, you should always be able to recover your password with your phone number.

Years ago, when I was still using it (I had a reason), I tried to secure it with my Yubico key. Unfortunately, it wasn't possible to configure FB to disallow logging in on a new device without the key.

I understand how the discussion probably went: "Let's make it so that we can score some marketing points, but let's not make it a real requirement, because we will be flooded with requests from people who don't understand that they will never be able to log in if they lose the token."

But that's exactly what I want. I have a small fleet of these, so it is not possible for me to lose them all. Unfortunately, most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is impossible to log in the next time without the token.
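What I'm asking for is roughly this policy (a hypothetical sketch; the flags and names are mine, not a real Facebook setting):

    # Hypothetical login policy with a hard security-key requirement.
    def allow_login(password_ok, device_enrolled,
                    key_presented, key_valid,
                    require_key_on_new_device=True):
        if not password_ok:
            return False
        if device_enrolled:
            return True                        # previously trusted device
        if require_key_on_new_device:
            # No SMS fallback: silently drop the attempt without the key.
            return key_presented and key_valid
        return True                            # the weak consumer default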
> Unfortunately, most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is impossible to log in the next time without the token.
This is a great point. AFAIK, Google is the only service which allows you to set mandatory U2F login requirements. Does any other service offer this functionality?
With many enterprise apps you can force it, or rely on authentication from SSO providers like Google or Azure, which have this feature.
Consumer apps try hard not to do hard security unless they are forced to, usually for cost reasons.
Security measures like these create a ton of administrative tickets - check any sysadmin ticket queue and a good chunk is password reset/recovery - and in enterprise apps the org's sysadmins are paid to handle this.

In consumer apps, the app company has to manage it, and it's also a lot harder to verify the identity of a random user than of a company employee, which makes this kind of support harder to do.

A good jarring example is AWS: amazon.com and AWS did (do?) share an authentication stack, so some basic 2FA functionality like backup codes is not there for AWS.

Google is better at this because they have also long focused on SSO service as a product.

Many companies use Google SSO (Workspace/G Suite) because third-party apps support Google login out of the box for free[1], may charge for AzureAD/SAML2, and likely don't support others at all without customization costs.

[1] It is standard because SMB/mid-market companies are more likely to use Google for productivity than Azure/O365, as it is easier to manage, albeit with fewer features. Third-party apps don't want to expend support time on smaller customers if they can avoid it.
People do bring this up on HN a lot. For WebAuthn / U2F the only actual example anybody has is AWS. So that's not an industry problem that's specifically an AWS problem. Unless you have an actual example which isn't AWS?
As for TOTP, it's a shared secret, so just clone it. If they allowed you to set multiple secrets, it would just reduce your overall security, because more random guesses would work. Also, get WebAuthn instead.
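To illustrate the shared-secret point (a tiny demo using the third-party pyotp library; pip install pyotp):

    # TOTP is one shared secret; "multiple tokens" is just copying it.
    import time
    import pyotp

    secret = pyotp.random_base32()
    device_a = pyotp.TOTP(secret)       # enrolled authenticator
    device_b = pyotp.TOTP(secret)       # the same secret, cloned

    t = time.time()
    assert device_a.at(t) == device_b.at(t)   # identical codes, always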
Thanks a lot for posting these details and dealing with the critical replies.
I think that with your background and investment in improving these problems, it will be hard for you to understand the perspective many people have that Facebook is fundamentally rotten at this point. These conflicts arise from FB's core business model. It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.
You can hire whole teams to prevent singed fingers or protect certain possessions, but the point of a fire is to burn. If there are no good solutions while maintaining FB's core approach and business model, then it would be better for the world if it were extinguished.
I recall that their approach to this was to outsource it. I read an article about how some employees at the company doing the moderation were losing their minds from being the mental sewage treatment facility of social media.
> It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.
Not a Facebook employee (or supporter for that matter), but I'm curious if you consider this an issue of Facebook or of social media in general.
Not saying it's OK for FB because everyone does it, but you generally see the same dynamic of the "torrent of hate speech and misinformation" on Twitter, on Reddit, on Youtube even (personal experience: I have a family member that was radicalized by misinformation on the internet. It was all on Youtube, she had never even used Facebook).
I've noticed that people go a lot harder on Facebook than on other tech companies. I think Facebook's reputation is well deserved, but I do think that reputation should be shared with really all social media in general.
You are totally right. I have seen posts on some networks that brand themselves as guardians of free speech. Well, if we scaled them to Facebook/Twitter proportions, the same problems would arise very soon. So maybe the problem is with the people too.

If those current big techs were to disappear, would people stop their nonsense and sometimes hateful behavior?

I am starting to believe it is not just bad company management or moderation; we definitely have bad actors on those networks too.
Yeah, I generally agree with you. Youtube and Twitter have the same issues, and I think Reddit too. (Does TikTok?) I don't know if the extent is as broad. Or how possible it is to use each platform for its purpose without running into those problems.
added: I remember when Facebook switched from showing a chronological timeline to "the algorithm". That turned out to be a monumental paradigm shift.
More generally, social media may be an amplification of crazy people, but it doesn't create the crazy people (pick the side you don't agree with for a definition I guess).
Like, all of this stuff is sortof vaguely like what happened when the printing press got invented, and when literacy became more widespread. It's people, rather than technology, driving these changes.
> added: I remember when Facebook switched from showing a chronological timeline to "the algorithm". That turned out to be a monumental paradigm shift.
So, assume that 10% of your friends are responsible for 90% of the content. Is it really a better experience to see all of their content in your feed rather than some of their content plus some other people?
Let's not have a false dichotomy between the "chronological firehose" algorithm and the "maximize engagement" algorithm. The problem you cite could be addressed in many ways.
At the very least, you could be transparent about what the algorithm actually is, or even (gasp) give the user control over it.

Like, neither of us knows what FB's ranking model actually optimises for. I suspect it's engagement rates, but that's just a guess from a user perspective.
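Just to illustrate what "user control" could mean, here's one hypothetical shape for a transparent, tunable ranking function (features and weights invented; FB's real model isn't public):

    # Hypothetical user-tunable ranking (features/weights invented).
    def score(post, w_recency=0.2, w_friends=0.3, w_engagement=0.5):
        recency = 1.0 / (1.0 + post["age_hours"])
        return (w_recency * recency
                + w_friends * post["from_close_friend"]
                + w_engagement * post["predicted_engagement"])

    posts = [
        {"age_hours": 2.0, "from_close_friend": 1, "predicted_engagement": 0.1},
        {"age_hours": 30.0, "from_close_friend": 0, "predicted_engagement": 0.9},
    ]

    # A user wanting a near-chronological feed cranks w_recency up and
    # w_engagement down; an engagement-maximizer does the opposite.
    feed = sorted(posts, key=lambda p: score(p, w_recency=1.0,
                                             w_engagement=0.0), reverse=True)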
>I'm curious if you consider this an issue of Facebook or of social media in general.
I think that a lot of people who had the privilege to be on the Internet before the age of engagement-maximizing algorithms feel that the overall mood was less inflammatory and the prevalence of intentional misinformation much lower. Because these algorithms can reward people who are willing to be misleading or abusive to attract attention, there is a concern that these kinds of misbehavior are encouraged by the application of engagement-maximizing sorting to user-generated content.
Facebook participated in the arms race to develop these algorithms much as the Turks and Hungarians developed the use of single-user firearms (arquebus/matchlock) in warfare, by necessity of competition, but are no less responsible than anyone else. The fundamental challenge is whether these algorithms must be fundamentally modified or restricted to prevent their tendency to support bad actors or whether, as the platform operators seem to be insisting, the problem can be managed by post hoc content moderation without limiting the behavior of the algorithm.
Because there is a wall of trade secrets and a sense of geopolitical competition, regulators have not been able to manage this situation in a manner that yields satisfactory results. So we lament this situation where Facebook appears to be chasing profits at the expense of the political stability of modern democracies, and while Google/Reddit/Twitter may be similarly responsible, this article happens to be about Facebook.
I'm not on Twitter, but I am pretty active on Reddit. I think the manipulation is particularly, egregiously bad on Facebook.
For example, I'm not sure how I got bucketed in the "anti-masker" cohort, but over the past couple weeks I've noticed that I'm pushed a torrent of articles along the lines of "Look at this brave person serving the school board!", showing a person going on a rant about how masks are the work of the devil and how great that is.
I kind of like it, because it's given me a window of "the other side" of my social media bubble. I think the thing Facebook is particularly nefarious at is that, the way the feed works, it makes it seem like of course everyone agrees with you, except that idiot crazy uncle who you eventually have to unfriend, because all you see are seemingly "random" posts that enforce your own viewpoints.
>I kind of like it, because it's given me a window of "the other side" of my social media bubble. I think the thing Facebook is particularly nefarious at is that, the way the feed works, it makes it seem like of course everyone agrees with you, except that idiot crazy uncle who you eventually have to unfriend, because all you see are seemingly "random" posts that enforce your own viewpoints.
True, though I think the same thing applies to Twitter and YouTube. You see what seems like random things from a huge variety of random people that all happen to affirm your worldview.
I believe Jack Dorsey has mentioned some vague plans to try to address the bubble problem. And I believe I may have been temporarily placed into a YouTube beta test where they tried to include videos unrelated to things I've seen before (I seem to recall an explicit notice about this, and a request for me to give feedback). So, I think they're probably trying, kind of. (I don't use Facebook and never have, so can't comment, there.)
reddit, as far as I know, doesn't do user-specific recommendations in their submission sorting (besides what they choose to be default subreddits), so you have to go out of your way to form a little bubble for yourself. Which many people do, but at least they have some awareness that they're explicitly setting it up to be that way.
That said, especially as of the past 5 or so years, most sites out there have skewed pretty unilaterally right or left, and I believe all or nearly all of the default subreddits currently have a pretty homogeneous political stance. It's not as insidious as the recommendation-based per-user bubble formation other sites have, but it might have a similar effect, especially if someone mostly just looks at their front page and doesn't subscribe to more than a few subreddits besides the defaults.
While I find antivax schoolboard patchy-blond-hairdye rage moms to be boring and dull, I’m quite happy for social media platforms to work like that and show you content you want from actual people posting that content. And I don’t think either the “Facebook and Twitter shows you a pure bubble” or “Facebook intentionally elevates internet rage and arguments to mine engagement” are exactly true, partially as they’re contradictory. If you want to see posts you disagree with, which many absolutely do (a significant number of top Reddit subs are r/screenshotsofpeopleidontlikebeingidiotslmao) you see it, and if you don’t you don’t, and if you just want random unmoderated unfiltered content you can have that too.
There's isn't a platonic ideal of social media to argue for/against though. Pandora's box has been opened and we can't unopen it. If Facebook magically disappeared tomorrow, people would still take their phone-computer out of their pockets and send messages, thoughts, pictures, videos to their friends and family. Socializing through multimedia technologies. Where there are platforms that don't have the same problems as Facebook, (but also don't have the same reach as Facebook), what is actually inherent to social media, and what is because it's built on top of late-stage capitalism? Hate speech and radicalization hit an inflection point in the 1940s, hopefully we can avoid the same in the 2040s.
> Huh, never thought I’d see XCheck in a news article.
Is everyone at Facebook this naive? You didn't think a system that creates a secret tier of VIP accounts where the rules (and laws) don't apply while publicly claiming the opposite would end up ... in the news?!?!
One of my favorite things about HN is seeing people come out of the woodwork to raise their hand and say that they worked on a system and give their insight. Thanks for sharing this perspective.
My bet is that he emails someone on his staff one day and says “I want the new iPhone” and the next day he has one and he can do whatever he needs to on it.
This whole idea of “Zuck can’t log in from a new device” is laughable. That’s like saying Biden doesn’t have the keys to Air Force One. He doesn’t need them.
This system also made it impossible for me to ever log in again. It had been a few years since I used FB but some friends tagged me at an event, so I figured what the heck.
I was presented with a system I had never configured, which asked me to contact people I don't know to get them to vouch for me. At the same time my FB profile was blackholed, and my wife and long time actual friends can't even see that I exist anymore. Just some person that astroturfed my name with no content (I have a globally unique name).
So I no longer exist from FB's perspective, which made both my decision not to use FB and my decision never to use any FB products like Oculus much easier.
That's all well and good, but your comment as an insider directly implicates the Facebook CEO in perjury for lying to Congress. During a hearing he claimed all users were treated equally. This is clearly not the case.
Perjury before congress can result in jail time and I hope he's made an example of.
"In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook."
I’m not sure how we can ever expect FB to treat everyone fairly; there isn’t a system on earth which does so, including any judiciary ever. Imagine that I were treated the same as a president or king.
> At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.
Facebook are infamous liars, but journalists keep covering Facebook as if their various PR statements are true
This is like, the 100th time Facebook's public relations has been caught lying about pretty much every god damn topic they have ever addressed in public, but the coverage never, ever changes
Every time it turns out Facebook was deceiving the public, or investing a less-than-reasonable effort into protecting the public, journalists are shocked, shocked that FB would do something like this.
Is it really that bad that they apply slightly different sets of rules to accounts with more notoriety?
For example, do we (as facebook consumers) want newly created accounts with @hotmail email treated the same as a new account with @doj.gov, as the same as a Celebrity with a million followers?
Do we want the same set of rules for a suspected Russian troll account to be applied to a major politician? (well..some here might, but I don't).
I think that as your account's age, status and popularity grow, you should be given *some* flexibility under the rules. Imagine a points system behind the scenes, where bad things earn you points and other things remove them. At a certain point threshold you are banned, suspended, etc.
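A minimal sketch of that points idea (all categories and values invented for illustration):

    # Sketch of the points idea (all values invented).
    VIOLATION_POINTS = {"spam": 1, "harassment": 3, "hate_speech": 5}
    SUSPEND_AT, BAN_AT = 8, 15

    def update(score, violation=None, clean_month=False):
        if violation:
            score += VIOLATION_POINTS[violation]
        if clean_month:
            score = max(0, score - 1)        # good behaviour decays points
        status = "banned" if score >= BAN_AT else \
                 "suspended" if score >= SUSPEND_AT else "ok"
        return score, status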
The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather, it works on a case-by-case basis where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."

It easily ends up being a case of "I'm a moderator at Facebook, and I like this person, so I put them on XCheck". Terrible, of course.
The larger problem at hand is that companies like Facebook are given such a gigantic power over discourse and politics because of their gatekeeping. We would often laugh at policies in China which bans people talking about Tiananmen Square, while seeing more or less the same happen in the west about our own controversial issues.
[sarcasm not directed at you]
But it's ok. In the west companies are doing this, and companies are allowed to do business with whoever they want. It is not censorship therefore.
[/sarcasm not directed at you]
> The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather, it works on a case-by-case basis where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."
This was my takeaway as well. I 100% agree rules cannot be applied evenly across every user. A person sharing posts with their 300 "friends" and someone blasting messages at their millions of "followers" are frankly engaging in completely different experiences. The regular person might expect none of their comments to ever get reported and any report could be cause for something actually bad. A popular politician on the other hand might see every single thing they post reported a ton every single time.
And rather than applying rules based on say the reach (which Facebook knows) or any other metric, it seems that they just chucked people into the special people list and that's that. The article stated there are millions on that list. A catch all for all the people who are having the greatest impact seemingly. The fact that the list had considerations for potential blowback to FB is even worse. I get that in percentage terms of 2.8 billion users a multimillion person list is in outlier territory by most measures, but that group is also wildly influential and thus shouldn't be in the "too weird" category.
I'm not even opposed to a general whitelist, some people (like a President of the US) truly are gonna be really weird to apply any broader ruleset to. But a free for all and catch all bucket for anyone of "notoriety" is really bad. It should be a very special remedy that is not done lightly. The article made it seem like the policy for this particular remedy was non-existent.
Part of me thinks the solution is just to cap it. If the central conceit is "connecting people" then no person realistically knows more than say 10,000 people and shouldn't need the microphone scaled to global proportions. That'd never happen, but it seems like a root answer.
Yes, in both directions: 'low-profile' advertisers (millions of dollars per year is still low-profile) meanwhile get the nonsensical AI rejections and shadowbans.
This is the reality of our new modern world that spans far beyond Facebook.
It is the popular, chosen, and paying that succeed on a dramatically increasing level, bolstered online by code and algorithms. This is how ideological and sales monopoly is bolstered and protected with promotion, paywalls, and glass ceilings.
It is no wonder wealth inequality is surging worldwide, as it becomes ever more impossible for anyone to find wealth except the chosen and those in control...
The very minute Facebook and Twitter changed timelines to non-linear formats, and when Reddit began to hide downvotes, the ruse began... I subconsciously doubt that we can really trust analytics properly anymore to be honest, because there now always seems to be a hidden agenda in IT now to suit a specific purpose like profit, messaging, promotion, or subversion.
I'm no crusader, but the ideal that anyone can find success on their own merits and hard work is a lie in the digital world especially now, and a lot of the people that look like successful business minds are in reality just wealthy starters that are losing money they started out with while "portraying themselves" as successful and self made...
The few platform providers/controllers are the only ones making new money, this is why they have so much disposable cash that they spend on wasteful things like flying penis shaped objects outside of the atmosphere, and on failed ideas like triangular shaped pick up trucks...
Talent, wisdom, track record, and accomplishment are being overlooked for sensationalism, wealth, and popularity at an all-time high, in my opinion... This unreasonable and harmful hypocrisy is a critically bad trend that is being promoted, made popular, and deceptively normalized by social media sites as engineered hype.

No wonder it's driving people to do bad things to hop onto the popularity train... :/
Strong opsec: the supporting documents are actual photos of a computer screen, judging from the visible moiré (or were somehow altered to look that way).
After numerous leaks, Facebook's internal security team became very good at identifying leakers. The person responsible for this 2016 post was identified within hours and terminated the next day: https://www.buzzfeednews.com/article/blakemontgomery/mark-zu.... The leaker was easily identified by the names of friends liking the post (that and part of their name was visible).
Facebook-issued laptops are filled with spyware, monitoring everything down to the system-call level, and practically every access to internal systems is logged at a fine level. The only way to exfiltrate data with plausible deniability would be to photograph the screen with a personally owned device. The fact that you searched for the internal wiki page and viewed it is nothing, but that you shortly afterwards invoked the keyboard shortcut for a screen capture, then inserted a USB drive, and copied a file ("Screen shot ____.png" even!) to it (all logged)... congratulations, you're caught.
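The correlation itself is conceptually trivial - something like this sketch (event names and the time window are my assumptions, not Facebook's actual tooling):

    # The kind of join they'd run over those logs (names/window assumed).
    from datetime import timedelta

    SEQUENCE = ["screenshot_hotkey", "usb_inserted", "file_copied_to_usb"]
    WINDOW = timedelta(minutes=5)

    def flags_exfiltration(events):
        """events: chronologically ordered (timestamp, event_name) pairs."""
        matched = []                          # timestamps of matched steps
        for ts, name in events:
            if matched and ts - matched[0] > WINDOW:
                matched = []                  # sequence took too long; reset
            if name == SEQUENCE[len(matched)]:
                matched.append(ts)
                if len(matched) == len(SEQUENCE):
                    return True
        return False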
Sean Hannity promoted gun sales on his FB. When reported they said it did not violate their rules. When appealed they upheld that original decision. He was given an exemption. Now I get it.
"Turns out managing communication for nearly the entire human race is hard! And requires a lot of people"
Yeah, no shit! You say that like Facebook didn't completely strong-arm its way into becoming the sole communication provider for the entire world. They struck deals and started entire projects making sure telcos in emerging markets would give priority access only to Facebook and Wikipedia, and basically no other competitors. They tried to convince the Indian government to make access to Facebook 100% free of charge until backlash pushed them out.
If they didn't want to become the mediator of all human communication, that's fine, they can stop at any time. But they themselves chose to put themselves here as the messenger, growth-at-all-costs style, and that means they get to deal with the consequences. It's not like they stumbled into this problem by accident.
I can't feel pity for a company that wanted world domination, achieved it, and is now stuck with the issues. Turns out when you're the communication provider for 3 billion users, you get the problems of your 3 billion users! You can't say they didn't know that going in...
Not to mention, imagine the hubris necessary to want this as a company, and consider oneself the sole (or few of) moderator of human communication to police and curate absolutely everything that gets shared, according to a single prescribed narrative by their political partners, and their team of fact checkers complete with their implicit biases, who are funded by extreme partisan players who won the Elite Olympics™ with their nearly infinite wealth and sway.
It's honestly pretty easy to imagine. Given any company with a clarified goal, the default answer to "How much of the market do we want?" is "All of it." Rare is the company that sets as a goal "10% of the potential customers."
I can't blame them though. Networks of any kind have the particular property of working better the more people are on them. What use is a messenger (cough Signal, Threema) if none of your contacts use it and they can't be persuaded to install yet another network app?
The obvious solution would have been for governments to mandate federation with open standards (e.g. XMPP) early on, but unfortunately most politics decision-makers are dinosaurs who won't understand "federation", much less "API"...
That’s because they don’t have to pay for externalities. If Facebook was to be held liable, criminally or financially, for the negative impacts of their service, they wouldn’t be viable as a business (and nor would anything else trying to operate at that scale). It’s only because they can dump the waste products of their enterprise into our collective psychological water supply without consequence that it appears like a profitable business.
How do you quantify externalities of media? Do NYT and other progressive outlets share responsibility for looting during summer 2020 because they largely shared the views of demonstrators? If climate change anxiety is shown to be a large factor in psychological issues, do outlets covering climate change get held responsible for that psychological consequence?
I actually don't think it is that rare. Especially for smaller companies. I don't want 100% of the market. I want 10%, but I want the best 20% that can easily afford my product and doesn't complain, cancel, chargeback, or need unnecessary handholding to use my software. Beyond that it's mostly diminishing returns.
Of course, I actually have a product that people want to pay for, I'm not giving it away for free and selling ads to fund it. If I was, I might be more interested in 80% of the market and still try to discourage that worst 20% from using it.
I enjoy working for a company that realizes we'll never have the entire market and is fine with the fact that there is plenty of space for multiple competitors to do well.
I've been at companies where "we want to be number 1, win all the awards, and defeat the competition" and I get where that comes from... a certain amount of that is healthy, but it can also get very twisted. I'd rather not.
That's not true at all. Plenty of people run lifestyle businesses whose goal is to support themselves and their families, friends, and employees. One example is Aquarium Co-Op [1] who is into promoting the aquarium hobby in all its facets. Cory, the owner, is very frank about the costs of running his business and how he often sells products at or below cost because he believes they're important to the community. A far cry from big tech companies seemingly bent on world domination.
I think that's excellent and I wish Cory the best. Cragislist is another example I can think of where a company just set out to be small and charge what it needed to make its small employee-base comfortable, not dominate the world.
... but then I'm out of names, is my point. Most companies don't go this road. Perhaps that's because going the other road gathers enough resources to be big enough to make all the headlines, so we don't hear about the others.
I think you're generally going to run out if you try to think of household names. Instead, think local. I don't know about you but there are a lot of local family-run restaurants, bakeries, chocolatiers, furniture/carpentry stores, flower shops, consulting businesses, accounting/tax prep firms, game/hobby stores, boardgame cafes, independent movie theatres, etc. On and on the list goes, though unless you live here (Waterloo) you've probably never heard of any of them. And of course where I live is not special; you can find a similar variety in any city and even many towns or villages.
And the size of this problem, managing the entire human race, is part of why they make so much money. They are in front of a lot of people and extracting value from them.
So it seems that they want to keep the profits while not paying the costs to be in this business. While Facebook does bring some benefits to people, it also does a lot of harm, to the point that I'm not sure it's a net positive to the world (or even if it's possible to have a net positive company in this space).
> "Turns out managing communication for nearly the entire human race is hard! And requires a lot of people"
Who are you quoting? Not me, I didn't say that. You've constructed a strawman and then replied to that.
> I can't feel pity
No one asked for pity. I merely pointed out a couple of ways in which the system can fail. It's failed in the past and will fail in the future, hopefully less.
I feel like you really wanted to get all of this off your chest and then replied to me because the comment was high up.
I read your comment a bit like a defense of Facebook, saying "you know, when you scale to millions of people, it's really hard and expensive! Cut them a bit of slack"
I don't disagree that these problems are hard when you scale to millions of users. Just pointing out that Facebook were the ones who chose to scale to millions of users before their moderation systems were ready.
For what it's worth, thank you for doing your part in helping to fix the system.
Please don't fulminate on HN or take threads on generic-indignant tangents. Those are much less interesting and usually turn nasty. Substantive critique is welcome, but if you reply to a comment you should really be replying to something specific in that comment, and you definitely shouldn't be blasting the other user.
I got banned yesterday for quoting a nazi official on propaganda:
“Propaganda must facilitate the displacement of aggression by specifying the targets for hatred.”
- Joseph Goebbels
Yeah. Facebook fucking sucks.
Oh, the reason? Encourages danger and violence (which in their broad definition now includes quoting individuals who were associated with dangerous regimes). Welcome to the day and age in which another organization decides you are dangerous and censors your presence from the internet, independent of what you say.
I was banned from Facebook market place for selling a computer with a bad bitcoin joke in the description. Because the computer had an RTX3090 in it (surplus company PC) I wrote something like "Good for making dinosaurs launder money for you if you're into mining".
Instant perm-ban. No appeal process possible. I'm too dangerous to be allowed to sell things. But I see literal insanity and extreme racism that is A-OK apparently. Just need to be careful who your target is, as per your quote.
Let’s be fair here; quoting Goebbels is never a good idea unless you’re writing a history book or a research paper. If you don’t understand why, the problem is you, not Facebook.
The thing about what Facebook is doing is that it may be impossible. And if it's not impossible, it may be a profoundly bad idea.
Facebook is trying to build a social network with 100% reach and a userbase beholden to a globally uniform set of rules (where possible; the laws of individual nations will forever intervene). This is not something that has ever succeeded. We don't actually know, a priori, whether you can govern the whole of humanity under one set of norms. It's never been done.
It's possible it fundamentally can't be done... That the end result of this experiment is that Facebook fractures and ends up either having to vend multiple views of its userbase with different rules (like Reddit) or has a large chunk of the human populace it can never get on-board. But we should keep in mind what the goal is.
Another week, another detailed account of the hubristic sociopathic amoral culture that defines this firm.
If you work there, you should quit; if you do business with them, you should stop; if you rely on them for your social network, you should find another mechanism to stay in touch.
Irredeemable, and a direct willful enabler of the memetic war that is currently destroying the west.
It is extremely distressing that a few companies, all located in the same country, many in the same state if not quite the same city, and run by a few men with similar politics, have the immense power that they do.
Some people do not quite understand why.
Imagine if they were all Russian and owned by friends of Putin.

It should be a matter of national security for other nations to develop regional competitors. It should also be in the interest of the US to have more internal competition.
This is not easy.
Making Facebook 3.0 is not the problem, though running at Facebook scale is a challenge.
It is how to get users to adopt it.
As has been the problem for US upstarts as well.
Thankfully a lot of younger people want other social network platforms than Facebook, but it is a bit pointless when the owners of the new networks are the same people who own the old ones.
I guess TikTok proves it can be done.
It also proved how upset US social media gets when competition knocks on their doors.
China, by locking out competition, developed products that are used domestically. There are drawbacks and negative effects of that as well.
I don't know how big VK is in Russia.
Ideally Facebook, Google, Twitter, YouTube, Instagram would be broken into 250 or so different companies spread out among different nations and regions.
The history of media companies suggests all of these companies will become less influential in little time. A few people used to own radio stations. Then it was TV stations. Overlapping them was newspapers/magazines. Things come and go, they tend to concentrate power, but they don't seem to last very long. Facebook, Twitter, Instagram and Youtube are young. There is little indication they'll have much influence in 20 years.
They won't get broken up because western govs want to compete with China and have similar control while still being able to fly under the freedom flag. FB is a virtual China, "soft authoritarianism" basically.
Indeed. I can see the logic for the US - accept that these guys milk your own citizens a bit in exchange for bringing in loads of cash from abroad plus massive spying and influence capabilities wrt the rest of the world. But why GB, EU etc go along with it is beyond me. Seems like a strategic blunder.
Technically, all these users are getting Facebook services for free. Is the complaint here that some people are receiving lesser free services than other people's free services?
If that's the case then we need to look at the value of said free services.
Outside of the Facebook issue, can you ever really automate solutions for managing society-scale interactions while still being fair to people?
If you happen to become an edge case similar to a celebrity, but actually fixing the problem you suffer from bumps into corporate budgetary restrictions (you're not worth it, but the celebrity is, so the solution is to just add them to a no-moderation whitelist while you suffer), is that fair? What are the social and societal consequences of this?
Depends on your definition of 'automate'. If you mean by computer software, perhaps not, but that raises the question of why one would presume that automation to be necessary. If you just mean 'software' (not necessarily computer software), then yes: the rule of law is software that has scaled to manage society-scale interactions and has run for hundreds of years. Whether it is 'fair' is still debatable, but it is at least more legitimate than FB's system.
I don't believe that you can manually moderate platforms at this scale. Even with the hybrid model of some manual moderation, I've read a lot of articles on the content they're forced to deal with crushing their mental health.
Add to that politicians intervening, threatening regulation unless they're allowed to tilt the scales. I can't even imagine how endlessly complicated that gets.
I've been debating this question internally for a while, along with the question of algorithms. Subjectively speaking, it seems to me that it's just as herculean a task to come up with an algorithm that provides quality content to just about any user. I've tried some of these "alt-tech" sites/services that claim to have no feed algorithms other than popularity, and even the ones that have a few quality channels have their front pages flooded with all sorts of kooky, weird, bad material.
This is not a defense of Facebook (or Twitter), just some thoughts I had for a while now.
Well, legislation + policing + prosecution + courts of law with due process are precisely that - manual moderation of social behaviour at massive scale (admittedly not billions, but hundreds of millions of individuals). Yes, endlessly complicated, yes, costs a fortune, yes, riddled with flaws, yes, it becomes a political problem, yes can be very emotionally challenging (I know because my wife is a criminal public defender. A bad day for me is when a sprint is late. A bad day for her is literal rape, torture, murder, injustice. Definitely puts things in perspective. But she knew what to expect, has a balanced workload, is well compensated and believes it is for a greater good. If facebook is structuring things in a way that people are being crushed - I've read those articles too and don't doubt it - shame on them). But 'it's hard work' isn't a very good excuse.
To be clear - I understand your point and sort of agree with it from a software engineering perspective; I just don't think that's necessarily the right perspective here. Every day we demand that governments and companies in all sorts of different industries do lots of hard, complicated, not-easily-computer-programmable, bug-riddled human processes requiring subjective judgement because they need to be done.
People & organisations aren't entitled to only do the things that need to be done which they can do with near-zero marginal cost with computer software - when they can do that, good for them. When they can't, tough.
Yes, I agree, from personal experience designing good Recommendation Systems is hard. That's a field I'm interested in too - out of curiosity what alt-tech sites have you tried? Would be interested in also trying them and exchanging notes.
I find it hard to disagree with any or that! Admittedly I was just thinking user experience, false positives and programming when I mulled it over in my head.
I tried Locals, Gab (in their less controversial, earlier life while they still had an app store app), Parler, Bitchute, Rumble, Minds, MeWe.
I generally found that the experience is the exact opposite of Big Tech where you often get lost in the information presented to you and very often it’s low quality or annoying unless you go there to specifically watch or read or see something.
I have been trying to better understand how the world will change under China's hegemony. To that end, I have been studying Neo-Confucianism - that is, adaptations of Confucianism, the ancient Chinese philosophy of governance, to the 21st century. Facebook's policy seems very neo-Confucian.
"Moreover, Jiang rejects the Western concept of 'equality,' an idea that propagates liberal democracy. From the Confucian point of view, people are unequal—as they differ in virtue, intelligence, knowledge, ability, etc. Hence, it is not plausible to give everyone equal rights without considering their standings. Also, while every individual should be bounded by the law, this does not mean that everyone should have equal legal rights or obligations."[1]
They should make this a paid-for feature. If people care and want humans to look over their posts and so on, they could pay $10/month or more. Then it wouldn't be just celebrities; it could be anyone. The program could pay for itself.
Accounts and pages that have the blue checkmark on Facebook are exempt from all of its rules. The reviewers do not take down blatantly hateful and discriminatory content when it is posted by verified accounts.
This is totally reasonable and arguably necessary for a platform. High-profile users are going to be targeted for reporting much more, and the consequences of a false suspension are more visible for them. Think of it like how you might whitelist people you trust from auto-moderation or rate limits on Reddit or some other site, because the automod makes mistakes or you want to give them more options. Obviously FB's and Twitter's moderation sucks, but that isn't directly related to this.
https://seekingalpha.com/news/3739118-facebook-exempted-secr...