I'm arguing from an assumption that social media giants are too powerful--they can steer public opinion and even influence national elections (think "The Social Dilemma"). I also think they're just bad for our social fabric--they seem to drive up anxiety and antisocial behavior. This is the context for the rest of my comment.
That said, I think you're right that without moderation, the quality of these platforms will suffer. I don't think it will look like Parler et al. but rather like Mastodon, which is more noisy than vile. But even still, I think it will drive people from these platforms, weakening them, and I think that's a net improvement over the status quo.
I also think there's another interesting possibility, which may or may not be practical for other reasons: require social media companies to implement a common protocol (e.g., ActivityPub, the protocol Mastodon is built on) such that these companies can continue to offer a moderated window into the underlying social network without owning the social network. If you want, you can pack up and leave for another social media provider without leaving your friends and conversations behind. This weakens social media giants by breaking their monopoly and allowing competition from upstarts (including other revenue models besides ads), and it also allows these giants to moderate how they like.
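For what it's worth, the interop layer already exists: Mastodon speaks ActivityPub, and account discovery happens over WebFinger (RFC 7033). Here's a minimal Python sketch of the lookup that makes accounts portable across providers (it assumes the "requests" library; the function name and example handle are mine, not any real platform's API):

    import requests

    def lookup_actor(handle: str) -> dict:
        # Resolve a fediverse handle ("user@example.social") to its
        # ActivityPub actor document via WebFinger (RFC 7033).
        user, domain = handle.split("@")
        wf = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{handle}"},
            timeout=10,
        ).json()
        # Pick the link that points at the actor document itself.
        actor_url = next(
            link["href"] for link in wf["links"]
            if link.get("rel") == "self"
            and link.get("type") == "application/activity+json"
        )
        actor = requests.get(
            actor_url,
            headers={"Accept": "application/activity+json"},
            timeout=10,
        ).json()
        # These addresses are what make the account portable: any
        # compliant server can deliver to the inbox or read the outbox.
        return {k: actor.get(k) for k in ("id", "inbox", "outbox", "followers")}

Because identity and delivery live in the protocol rather than in one company's database, "pack up and leave" reduces to pointing a new provider at the same actor documents.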
Yeah ok, we agree. I don't think smaller companies would moderate better, just that it would be less of a concern when they do it badly. I mean, there are small social media companies right now that don't moderate at all, but it's not a pressing issue because they don't have much impact.
We all know what no content moderation spirals down into, and that content moderation is a very hard problem to address ("solve" is definitely the wrong word IMO). I am torn between the idea that a company should be able to decide how they want to moderate their own platform, and the contradictory feeling that I don't trust them.
The idea of legislation preventing platforms from doing any moderation doesn't feel right to me personally, so to those who do hold that opinion, I am curious:
1. If we can't even regulate ISPs as dumb-pipe utilities, then how would it be possible to do it with social media platforms? Would it be contradictory to legislate platforms this way while _not_ also applying the same logic to ISPs?
2. Emerging social media platforms that specifically don't moderate, or decide to moderate _differently_, are in a unique position to benefit from the many users who don't like the major platforms' moderation practices. If these competing platforms suddenly gain in popularity, would there still be a need for regulation if users have a real choice?
Social media giants are like tobacco companies in the 1950s and '60s.
If the next president ran on a platform of regulating social media as the harmful and addictive product it is, people would probably be receptive. Everyone knows they have an addiction but without collective action the spell can't be broken.
No social media company is going to regulate itself, but if there were industry-wide regulation, they could probably find some peace of mind knowing they can implement healthier practices without fear of a competitor undercutting them.
I think the main problem here is that social media companies tend to have a near-monopoly, thanks to network effects. Readers want to be where the writers are, and writers want to be where the readers are. This puts the companies in a position of too much power. We may not shed any tears over the banning of racists making death threats, but things have moved on from that stage. Personally, I sure wish that Sci-Hub's Twitter account hadn't been banned. Furthermore, people who value their privacy are de facto banned from these platforms, since the platforms have chosen a model where people pay for the product in the form of their personal data and viewing of advertisements. The same service minus the ads and personal-info gathering could be provided for a relatively small fee, but the above-mentioned network effects ensure that no competitors with this model will be able to survive.
I imagine that soon the conversation will turn to regulation. One possible path is to limit what kinds of things can be banned on the largest of the platforms. I can only see that path leading to a big political mess, where no one is happy in the end, and the folks who care about privacy are still screwed. I think the more promising path is to get rid of the monopoly aspect. Which means somehow getting rid of network effects. So, even if you use a Facebook competitor, you should still be able to friend people on Facebook, send them messages, read what they write, and they should be able to read what you write.
In other words, running a social media company should just mean defining a protocol, or implementing an existing one. The relevant regulations would say the following:
- The protocol must be public, with all details published online.
- Advance notice must be given of changes to the protocol, so that competitors have time to modify their code, and regulators have time to verify that the new version of the protocol is still legal. Only 1 set of changes can be pending at a time.
- The protocol should only require folks to transfer information that is absolutely required to make the protocol work. So Facebook can't require you to send them your entire private message history in order to talk with someone on their platform. (They can still spy on their own users as much as they want, though.)
- Moderation is still allowed, but companies must apply the same moderation rules to their customers and competitors' customers. Facebook can still censor whatever posts they don't like, but if they ban all accounts originating from their competitors, even the innocuous ones, they'll have antitrust knocking on their door. (A toy sketch of this rule and the next appears after the list.)
- Ranking algorithms determining what people see in their feed, and any other code that mediates how users interact should also treat all users equally. (It can treat high-karma users differently from low-karma users, of course, but it must be possible for users from competitors to become high-karma.)
- These rules only apply to sufficiently large social media protocols (say, those used by over 10% of the population in the country where these regulations are implemented). If you're just running a small Q&A forum for people who use your product, you can do whatever you want. (The rationale is that these rules are intended to prevent the formation of monopolies. For protocols without tons of users, that's not a concern.)
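To make the parity rules concrete, here's a toy sketch (hypothetical names throughout, not any platform's real code) of moderation and ranking that satisfy the last two rules: neither function ever looks at which provider a user came from.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str       # e.g. "alice@rival.example"
        home_server: str  # which provider the author signed up with
        text: str
        karma: int

    BLOCKED_TERMS = {"spam-link.example"}  # stand-in for house rules

    def allowed(post: Post) -> bool:
        # House moderation. Note it never inspects post.home_server,
        # so local and federated users are held to the same standard.
        return not any(term in post.text for term in BLOCKED_TERMS)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Ranking keyed on behavior (karma), never on origin, so a
        # competitor's user can become high-karma like anyone else.
        return sorted(
            (p for p in posts if allowed(p)),
            key=lambda p: p.karma,
            reverse=True,
        )

The antitrust test then becomes fairly mechanical: if home_server shows up as an input to moderation or ranking, that's the red flag.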
Absolutely there is a problem. That was my point. If we fix the monopolization of these industries, we solve the moderation issue as well via the free market.
Obviously, solving the kind of monopoly that social networks create is hard. The best proposal I've heard is forcing them to open up their social graphs/APIs to competitors, but that's not without its own issues (e.g., bad actors siphoning off user data, like Cambridge Analytica).
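One way to blunt the siphoning risk is to scope the opened API hard: export the graph, not the data. A sketch of what a minimal portable export could look like (the schema name and handles are made up for illustration):

    import json

    def export_graph(user_id: str, following: list[str]) -> str:
        record = {
            "schema": "social-graph-export/v1",  # hypothetical schema
            "subject": user_id,
            "following": sorted(following),
            # Deliberately absent: posts, DMs, likes, demographics --
            # the Cambridge Analytica-shaped data stays home.
        }
        return json.dumps(record, indent=2)

    print(export_graph(
        "alice@bignetwork.example",
        ["bob@rival.example", "carol@bignetwork.example"],
    ))

A competitor gets enough to bootstrap your connections, and nothing they could profile you with.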
There's a bold assumption in your position that the regulators charged with wielding this power would do so with greater restraint or less paternalism than the platforms themselves do now.
What I really would expect to result from regulating the big social media players is that you'd simply institutionalize their dominant position. Regulatory regimes tend to discourage upstarts and often do little to rein in incumbents. After 2008, when increased regulation came for those bad "too-big-to-fail" banks... did we end up with smaller but more diverse banking institutions, or with even fewer, bigger banks that all looked pretty much the same? I could very well be wrong, but my sense is that we didn't solve that problem, and I don't think regulated big social media would be much better than what we have today.
I have zero faith that any new social media ecosystem that emerges in the absence of Facebook would be more secure, or more private -- especially if the new ecosystem consists of multiple independent services sharing data (and if it doesn't turn out that way, that means a new monopoly emerged).
A single large company is also easier to regulate than a bunch of smaller ones.
If you are concerned about monopoly power over social media, go ahead and break up Facebook. If you are concerned about privacy and the security of social media data, you are probably better off regulating Facebook and raising high barriers to anyone who wants access to Facebook's data.
It's not just that making a new platform isn't easy; it's insanely difficult. These big social media companies have huge moats at this point, due to network effects and scale.
When other things have such powerful network effects, we force them, through the state, to treat people fairly. For instance, utilities like water and electricity are natural monopolies because of the limited physical space for the infrastructure. Social networks aren't quite the same, but there are powerful similarities.
I think social media is a real negative force in the world and welcome this as potentially a way to make it better, or at least destroy Twitter and remove one outlet.
I don't think public companies can reform social media because it's too profitable. As a private company, it may be possible to reform it so that it's still profitable, but not in a way that harms people.
There wouldn't be a problem if there were three or four competitors in each social media category. Instead we have near-monopolies on status sharing (Facebook), opinion sharing (Twitter), video sharing (YouTube), photo sharing (Instagram), résumé sharing (LinkedIn), etc. If these platforms decoupled from their brands and allowed competition, nobody would ask for censorship.
The problem is that social media is inherently anti-competitive. It relies on network effects, so you have to either limit their size or use utility-style regulation.
Strong disagree here. Network effects are king, and social media is a natural mono/oligopoly. Overcoming this requires the construction of a superior and more addictive product, and the incentives just aren't there for such platforms.
The smart move is to hit them with the hammer of regulation here. They can either be the small exclusive club with tight control over their members and their behavior, or the mass market app with minimal control over their userbase.
The alternative is to have the full cyberpunk future, where "I like <wrongthink>" means being cut off from everything. Banks, search, email. We're already seeing the slope, and it's pretty damn slippery.
So instead of the company bearing the cost of proper moderation, we as a society are perfectly happy to let them offload that cost as an externality onto the rest of us? Before social media existed, the world got along just fine - if these companies are causing problems, then they need to address those problems or they can go back to not existing... both seem like perfectly reasonable options to me.
I think we let them get too big and now the discussion will very much revolve around the capital, value, industries and "jobs" that will be affected by a FB or social media ban or some other legislation.
You can't even effectively "mobilize" the populace for some sort of popular movement against social media, because most of that sort of thing these days can only happen on social media. It's not even like we can have a revolution with 50k people and storm FB headquarters to "physically" stop the behemoth. It's headless for the most part. And good luck getting even 50k people. Pretty much all the platforms will stop you before you get anywhere because of "threats of violence".
Isn't the free-market answer to this problem for users to move to other social media platforms that moderate in a way they prefer? The problem here is how powerful and walled-off Facebook and Twitter are, making competing difficult if not impossible. Perhaps if we solve that problem, everyone can get what they want.
I think saying that, on principle, companies should not moderate content at all is equally absurd as it would allow malware, CP, abusive content, and spam to run rampant. All we're really arguing about here is to what extent do we want these platforms to moderate content. Should they be limited to only removing illegal content? What's the line on "illegal" (no company could afford to consult lawyers for every post they remove)? What about spam, which is not necessarily illegal but disruptive to the service?
Furthermore, if Facebook or Twitter are so large that they function as the new public space, the answer isn't to prevent them from banning people, but to break them up. The issue isn't a private company determining who gets to be on their platform, but that their platform is effectively a monopoly and monopolies are bad.
I've run a niche "social media" site for 12 years. There is no tightrope. The problem is that these large platforms want the benefits of scale without being responsible for dealing with the problems of scale. It is eminently possible to build a thriving social media site that enforces standards of behavior. HN is a good example. However, it is work to actually enforce those standards. It takes leadership. Facebook, Twitter, etc. are not interested in providing that type of leadership, which would likely require both a huge economic investment and a significant change in business strategy.
Social media above a certain size should be treated as common carriers. There are no reasonable alternatives to them, and today, common public activities that are core to our lives are conducted on these large platforms. This is not a matter of protecting the freedom of companies to do as they wish - we already regulate companies and restrict their activities in many ways. Private power utilities cannot discriminate against their customers based on their speech or political viewpoints, for example. The same regulations can be enacted to govern these platforms.
I often see arguments saying that someone who is deplatformed/demonetized on these services can just use an alternate service, but I find that not to be the case in practice. Consider that Twitter, Facebook, and YouTube have more users than virtually all nations. Their network effects are core to what the product is, which is why there aren't suitable alternatives (especially when they enact censorship in unison). Telling someone to just go use a different platform is like telling someone that they don't need their power utility, since they can just stick a windmill on their property instead.
Finally, I am greatly concerned that these large privately-controlled platforms are essentially outsourcing government-driven censorship and also violating election laws. For example, when conservatives did form their own platform, Parler, AOC called for the Apple and Google app stores to ban it after the Jan 6 Capitol riot (https://greenwald.substack.com/p/how-silicon-valley-in-a-sho...). If a sitting member of the government pressures private organizations to censor others, it should be considered a violation of the First Amendment. Leaving aside the technicalities of law, it is unethical and immoral even otherwise, and completely in conflict with classically liberal values. Actions taken by these companies to suppress certain political speech in this manner also amount to a donation to the other side. This isn't recognized as "campaign funding," but it is probably more effective than campaign funding at this point. We need to do a better job of recognizing the gifts-in-kind coming out of Silicon Valley tech companies toward political parties, based on the ideas they suppress/amplify/etc.