And that would be fine; social media companies don't have to promote "objectionable" content in curated public feeds. But people who deliberately subscribed to that content, or who go looking for it, should still be able to see it (like Trump's feed). That's the step people are objecting to, so the analogy you draw just isn't relevant.
Or maybe social media companies get rid of any kind of algorithmic selection. Feeds are completely ordered by date. You can follow/unfollow people who you like or dislike. If you don't want to be offended by jackasses, don't go on the public feeds.
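The purely chronological, user-curated feed described above can be sketched in a few lines. This is a hypothetical illustration (the `Post` and `Feed` types are made up, not any real platform's API): the only selection is the user's own follow list, and ordering is strictly by date, with no ranking algorithm.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    author: str
    timestamp: int  # e.g. Unix time
    text: str

@dataclass
class Feed:
    # The user's own follow list is the only "curation" that happens.
    following: set = field(default_factory=set)

    def follow(self, author: str) -> None:
        self.following.add(author)

    def unfollow(self, author: str) -> None:
        self.following.discard(author)

    def timeline(self, posts):
        # Only posts from followed accounts, newest first.
        # No engagement ranking, no platform-side promotion or suppression.
        return sorted(
            (p for p in posts if p.author in self.following),
            key=lambda p: p.timestamp,
            reverse=True,
        )

posts = [
    Post("alice", 100, "hello"),
    Post("bob", 200, "world"),
    Post("carol", 150, "noise"),
]
feed = Feed()
feed.follow("alice")
feed.follow("bob")
print([p.author for p in feed.timeline(posts)])  # newest followed posts first
```

The point of the sketch is that every filtering decision lives on the user's side; the platform just stores and sorts.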
That said, you've shot down a lot of people's arguments but I haven't seen you promote a sensible alternative. Given your personal belief framework and given the first amendment, what do you see as a solution to this problem?
I've seen no one advocating that individuals shouldn't be able to curate their own feeds, only that social media platforms shouldn't be restricting those feeds for them.
For instance, if you decide that you want a Twitter feed that excludes porn, gore, racism, and other objectionable content, then you absolutely should be able to exclude those (I'd reckon that that'd be a very sensible default). If I want to go observe the crazy bigoted things that fringe groups are spewing, or if I want to use Twitter just as an endless feed of porn, then that doesn't affect your ability to not see those things.
Likewise, I've not heard Musk propose banning any of his critics or opposing viewpoints (though I don't really follow his actions, so it's possible I've just missed them).
Says who? We're seeing some negative side effects to certain kinds of content flowing through social networks, and perhaps more importantly, from the tech companies' perspective, politicians do not actually want them to sit back and not touch this content. If they did that, they'd likely end up getting regulated.
Also, a truly unmoderated social network would quickly become a clearing house for garbage content nobody wants to look at.
I get what you're saying, but it's ironic that the social media companies argue against free speech on their "private" platforms when they ban or moderate users.
> Social media companies may no longer promote or suppress content; they can only provide tools to let users do so themselves (eg filter/block/subscribe/tag)
Users don’t want the responsibility of filtering out CP, gore, sexual violence, etc. I would bet the average user actively wants that content suppressed. Just look at any of the cases of social media moderators developing PTSD from their work.
If social media companies don't want to be used as transmission vectors of obvious foreign political influence, then they don't have to be. Just like if they don't want to be used as transmission vectors of pornography or any other type of content, then they don't have to be. In both cases it requires setting some sort of content standard and making decisions about whether the content meets that standard.
What you're describing is really just maintaining your own bubble in social media. This is something that most people think is a bad thing, on both sides of the spectrum. You're not talking about speech that is generally felt to be impermissible in public.
To give an example, this would be like saying Elon Musk could simply block the Twitter account that follows his private jet, rather than persuading the account to stop. It's not really the same thing.
Honestly this explanation makes it sound worse to me. Your objection doesn't appear to be with these social media companies. It seems to be with the very nature of public communication. You want the benefit of public communication without facing any repercussions for what you said in public. And in order to help ensure you don't face repercussions, you are polluting the shared public spaces for everyone else and forcing these social media companies to clean up after you.
Every public platform can be externally monitored and archived. If you don't accept that, don't participate in public platforms.
The whole point is that this sign-in-your-yard analogy is just not relevant to real-world circumstances. Facebook and Twitter have outsized control over how I socialize with others.
If every major cell phone provider chose to stop delivering any text with that article attached to it, would that be acceptable? What if they decided to unilaterally stop delivering any texts from the Trump campaign?
Clearly there is a line to be drawn, and falling back on "it's their private property and thus their right to filter it however they want" is not sufficient in the 21st century, in the age of platform companies and extreme corporate consolidation.
> It was garbage, and filtering out garbage is an important function
Garbage? It was new, previously unreported information. I voted for Joe Biden, and I still found it of interest - and it did appear to contradict some things Biden had said publicly.
I don't think that argument applies here. The article isn't advocating censorship or legal restrictions on social media. Instead, it's saying that reducing friction from a very specific content-sharing workflow and giving it a prominent position in the UI of social media sites may not have been a good design decision.
Edit: I intended this as a reply to a different comment, but it has a reply now so I'll leave it as-is.
Which is why social media networks should adopt the far simpler rule that they will only ban illegal content. If people want something censored, make it illegal. If people don't like the way things are being censored, take it up with their local representative.
If social media wishes to be immune from lawsuits regarding the content it publishes (Section 230), then it should not have the ability to censor such content in service of an explicit commercial revenue model.
I understand why people hate that opinion, because they want civil discourse and nearly free access to media online. Uncensored content pushes normal people out.
Those things are great, but are ultimately out of alignment with a commercial revenue model. The result is social media imposing an algorithmic bias to increase engagement, the results of which are often toxic, and that toxicity ultimately pushes normal people out anyways. You can’t have your cake and eat it too.
And yet the popular social media sites have far more moderation than classical platforms, which are not regarded as causing problems. You mean low-moderation platforms tend to have objectionable content. True, but I don't see the problem.
That seems like an abuse of political power though. That policy would result in limiting exposure of views that are controversial or disagreeable to progressives, since Facebook/Twitter/Reddit/etc. clearly have a highly progressive employee base and moderation policies built around that worldview. Effectively, isolating exposure of some content in this way would tilt the political scales by only allowing other competing (un-isolated) views to benefit from network effects. That to me feels a lot like propaganda and it doesn't make it any more acceptable or holier that it is done by a domestic tech giant rather than a foreign state power.
It's perfectly fine, from a legal standpoint. My personal opinion regarding the ethics of the situation is mainly a reflection of that - in the bizarro world where social media companies need to synthesize content to drive wholesome engagement, it's entirely their right to do so. When you sign up for any of these services, you acknowledge that your uploaded content and online experience is completely subject to their will.
The ethics of all this have already blown over. Twitter and Facebook have both been proven to shadowban controversial figures, and no ethical panic ensued. People don't care; they want a TikTok-style feed of sugar water and sweet dreams fed to them on an IV drip. If the next step is fabricating fake content (again, considering how many real content creators exist, this seems unlikely), then there is no ethical discussion. It's as fake as the AI-corrected profile picture you shot on an iPhone. It's as real as your approximate location being tracked by Walmart WiFi and then processed into shopping-habit data to sell to competitors. If you use Facebook or Twitter (or arguably any social media) then you're already being manipulated. They don't need to promote fake content to make you think what they want.
> A new social media that says, "we're not removing anything unless it's clearly illegal or we get a court order to do so" is a reasonable pitch in my mind.
I agree there is demand for a non-moderated social network, but I'm pretty sure this isn't it. From their own terms of service:
> users may not “disparage, tarnish, or otherwise harm, in our opinion, us and/or the Site.”
Excellent point. Sadly this will likely not happen. Google and Facebook are gradually imposing a highly censored world view.
We can’t effectively reason about social media censorship by analogizing it to meatspace.
Just as virality mechanisms amplify “good” content, they also change the equilibrium state of whose ideas can spread.
Censorship programs work like a squirt of insecticide into a beehive: they kill off some functional parts of the hive organism and eventually lead to the destruction of all the bees inside.
Censoring "fake news" or "abusive content" poisons the environment for ideas in the same way. Ideas are not evaluated objectively; we each react to them emotionally and experience them via the social meaning we associate with them. Prejudices are not rational and are countered by ample evidence, yet there is much prejudice and much religious faith... such beliefs carry a meaning payload enjoyed by their host.
The fear suffered by social media firms (like reddit) is that someone will screenshot a piece of offensive speech alongside an ad paid for by an advertiser, and the advertiser will worry that his/her brand is being associated with that speech.
This is why Colin Kaepernick doesn’t have a job.
Social media firms are thus following in the path of other media firms and are (like The NY Times) putting ad revenue ahead of the first amendment. This isn’t unexpected, these firms are the “tories” of our day.
I think the line of reasoning is more that these social media companies could end up just doing the dirty work of banning/censoring the content themselves, and politicians for whom the site’s bias is advantageous would then flock there. One may also speculate that this could lead to backdoor corruption between social media companies and politicians.