
On one hand I worry about this power being abused, but it's hard for me to side with the people being cancelled because they're so repugnant.

It is hard to stand up for the rights of the bigots, the white supremacists, the science deniers, the propagandists, and the misogynists: people who spent decades denying others access and are now finding the shoe on the other foot.

Can we find some good examples of people who were "cancelled" who were not peddling conspiracy theories or pontificating about why the white race is the natural ruler of humanity?

In some ways this may be seen as a return to the media culture of the Fairness Doctrine era. It used to be that media companies had to seriously consider the viewpoints expressed and wouldn't give crackpots a voice very often. The Internet changed that, allowing everybody a voice regardless of how crazy they might be. Now we're turning back to a more metered experience, as it seems that giving the wingnuts unlimited amplification is actually damaging to the country.

Nobody likes the censor, but they're a necessary evil. Without them, trolls will always take over the conversation once the number of participants exceeds a fairly low threshold. Trolls can drive out honest participants, but honest participants can't drive out trolls. Moderation is necessary. It doesn't have to be third-party moderation; upvote/downvote systems can do the job, although they're tricky to get right and can be gamed.
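
To make "tricky to get right" concrete, here's a minimal Python sketch (my own illustration, not any particular site's algorithm) contrasting a naive up-minus-down score with the lower bound of the Wilson score interval, a standard way to keep a handful of early upvotes from outranking a well-sampled consensus. Neither defends against coordinated voting by itself; real systems typically also weight votes by account age or reputation.

    import math

    def naive_score(up: int, down: int) -> float:
        # Up minus down: easy to game with a few fresh accounts, and it
        # lets a barely-voted-on post outrank a well-established one.
        return up - down

    def wilson_lower_bound(up: int, down: int, z: float = 1.96) -> float:
        # Lower bound of the Wilson score interval for the upvote
        # fraction: ranks by how confident we are the item is good.
        n = up + down
        if n == 0:
            return 0.0
        phat = up / n
        return (phat + z * z / (2 * n)
                - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
                ) / (1 + z * z / n)

    # 5/5 looks "perfect" naively but is less certain than 90/100:
    print(naive_score(5, 0), naive_score(90, 10))          # 5, 80
    print(round(wilson_lower_bound(5, 0), 2),
          round(wilson_lower_bound(90, 10), 2))            # 0.57, 0.83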




Moderation of a public forum is censorship

Beyond things that are broadly considered reprehensible (e.g. videos of rape), I'm not convinced that moderation/deplatforming is healthy for society.

I definitely get the "my company - my rules" POV. The First Amendment doesn't apply to private spaces.

Then again, I'm not convinced that moderators ought to have the power to control what sorts of ideas are interrogated publicly (e.g. in the company of strangers). It's particularly problematic when the social climate at the companies that control these platforms is synonymous with the Progressive echo chamber, but it's also hard to imagine any body that would be qualified to police ideas.

One of the worst things to happen this decade is the rise of social echo chambers, where groups of people intellectually isolate themselves amongst like-minded people, writing off dissenters as evil strawmen. The groupthink in these echo chambers is nudged further to the extremes by the loudmouthed activists who control what ideas people are comfortable expressing. Large groups isolating themselves in these constantly drifting echo chambers leads to the problems we saw this week in Washington. I fear making the platforms synonymous with these echo chambers will only make this problem worse.

Anonymity and free expression have historically been some of the best qualities of the Internet. Anyone can say - and think - anything, and share those thoughts with everyone. That's always included snark, parody, and similar absurdities. People don't always mean what they say - they often might not even know _if_ they mean what they say.

Those same qualities make it particularly vulnerable to misinformation and conspiracy theories.

Figuring out how to help people understand what's likely to be true without censoring the kinds of ideas people can openly express is one of the great challenges of our time. I really hope we solve it.


Problem is: what constitutes acceptable moderation?

Much of the “cancelling” going on now is strained, deliberate misinterpretation of opponents' intentions and statements: a power play for a political purge.

Problem is: how to specify what moderation is acceptable for Apple et al, without making it abundantly clear how biased/bigoted/unfair the demands are?


I think Reddit is a terrible example. The moderators are volunteers, the rules and their application seem entirely arbitrary, and there is no way to opt out.

The key point the author of the article makes is the difference between moderation and censorship: you can opt in to see moderated content, but you're unilaterally prevented from seeing censored content.

What Reddit does (removing posts, comments, banning accounts) falls under the definition of censorship here -- within the platform itself, obviously.
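
As a toy illustration of that opt-in distinction (hypothetical types, nobody's actual API): moderation below computes a per-reader view the reader can switch off, while censorship removes content from the shared store so no reader setting can bring it back.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str
        labels: set = field(default_factory=set)  # e.g. {"spam", "nsfw"}

    def moderated_view(posts, hidden_labels):
        # Moderation: a per-reader filter; the underlying posts still
        # exist, and any reader can opt out of the filter.
        return [p for p in posts if not (p.labels & hidden_labels)]

    def censor(store, banned_labels):
        # Censorship (within the platform): posts are removed at the
        # source; no reader preference can restore them.
        store[:] = [p for p in store if not (p.labels & banned_labels)]

    store = [Post("a", "hello"), Post("b", "buy pills", {"spam"})]
    print(len(moderated_view(store, {"spam"})))  # 1, but store still has 2
    censor(store, {"spam"})
    print(len(store))                            # 1 for everyone, permanently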


You're going to use Reddit as an example, where the people least deserving of power are given it, freely, and end up banning people for merely having different opinions?

I would not have used that in my list.

It doesn't boil down to a "concession I don't prefer". The only things you moderate are those deemed illegal by the Federal government; anything else is fair game, which gives us a pretty large umbrella of discussion in which to engage. Some people don't want to engage in those discussions because they're hard and/or unpleasant. They're not required to, and can freely leave them and ignore them. Allowing whiners who can't stand that people may have a controversial opinion at odds with their beliefs to have any power whatsoever is a fast track to failure on multiple levels.

People have block tools; they need to learn to use them.


You are correct, but this is one of the few cases where it's unjustified. We have moved to online discourse more and more. Do we really want super powerful AI and armies of moderators combing through all of that?

On the other hand, a moderator can also use this. Bad times for diverse speech and civil society in general.

The flaw with that is that moderators and companies have to make a determination about speech they disagree with. Not all sites ban people for violent threats in the same way, and likewise sites treat inflammatory or derogatory speech differently.

At some point in the equation a value judgment will be made about what breaks the rules. People banned by that judgment will cite censorship and demand to be heard (see: the various subreddits banned by Reddit), while people wanting that content removed will celebrate. Making the argument purely about censorship removes the nuance and the reasoning for why someone was banned, which is why, when people talk about conservative voices being banned by Twitter, they often ignore the damage and harm that Alex Jones, for example, has caused to families affected by school shootings, or the various conspiracies he peddles.


That's a salient point. Biased, uneven, nontransparent moderation was always happening with weak justification, it was just done by unaccountable people and propagandized as righteous.

Remember people getting banned for saying "learn to code", certain news links being disallowed even in DMs, and undesirables being shadowbanned with no transparency.


In the early '00s, forums dealt with "trolls", as we labeled them. Moderation was used to keep things on topic and deal with people who did not add to the community, made threats, and generally made being a moderator annoying. Total free speech gives you 4chan or Kiwi Farms. There needs to be a balance of content moderation to keep out literal hate speech and threats. Twitter would not be a billion-plus-dollar company without moderation. So if Elon wants to make the process more open, I am all for that, but anyone who has dealt with the cesspool that forums and social networks can become knows content moderation is needed, and moderators are human in the end. Sometimes you simply can't have a conversation with a troll to correct the behavior.

I somewhat agree. I think it's important to not censor anyone.

However, it is necessary to curb bad-faith actors: users who don't engage civilly and who get in the way of, or overwhelm, civil discourse.

Disconnecting between instances should be reserved for such extreme cases (i.e. when there are too many bad-faith actors on a single instance); otherwise, the choice should be left to individual users.

Moderation is an important concept. How do you promote a healthy atmosphere? Downvotes are a kind of crowd-sourced moderation that seems to work well. In a federated context, you could assign weights to votes of different instances (i.e. trusted instances get some weight, untrusted instances carry none).
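
A minimal sketch of that weighting idea (hypothetical instance names; the weights would be set by each instance's admins):

    # Each vote is scaled by how much the local instance trusts the
    # instance it came from; unknown instances default to zero weight.
    INSTANCE_WEIGHTS = {
        "trusted.example": 1.0,
        "newcomer.example": 0.5,
        "spamfarm.example": 0.0,
    }

    def weighted_score(votes, default_weight=0.0):
        # votes: list of (origin_instance, value) with value +1 or -1
        return sum(INSTANCE_WEIGHTS.get(origin, default_weight) * value
                   for origin, value in votes)

    votes = [("trusted.example", +1), ("newcomer.example", +1),
             ("spamfarm.example", -1), ("unknown.example", -1)]
    print(weighted_score(votes))  # 1.5: untrusted downvotes count for nothing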


They're not censoring people, they're moderating based on predefined rules. Without moderation, the "town square" would be overrun with people with proverbial megaphones and an agenda, drowning out literally everyone else with something to say or share.

All online platforms need to have moderators to prevent things like spam and illegal content. That doesn't mean they can't also uphold free speech principles, though.

Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.


You can apply that argument to any kind of moderation. The core problem is not that hate speech is censored, the problem is about who is responsible for applying the rules and how these people are selected.

Hacker News is also censored and moderated, but it is my impression that the mods are quite carefully selected. On Reddit, the mods are anonymous and external, and are chosen through an opaque process that I don't have a good understanding of. I fear that you end up with the catch-22 also seen in homeowners' associations: the people who have the time and dedication to rise to the top of these kinds of organizations are exactly the people you don't want to be in charge.


I'm not sure this is necessarily an unavoidable fate for internet sites, even communities. You can keep the signal-to-noise ratio decent if you've got an active team of moderators, fairly heavy pruning of content that doesn't add anything to the conversation, and the willingness to say no, my site doesn't offer 'freedom of speech'.

Unfortunately, most big sites seem to be built on the 'leave it as a complete free-for-all with no moderation whatsoever' model, which immediately stops working the minute the member base expands from early adopters to every Tom, Dick and Harry. That's why Reddit, Twitter, etc. have had so many problems. They were built with the idea of 'anyone can say virtually anything' in mind, then realised it would lead to massive amounts of trolling, personal attacks and questionable content, then dug themselves an even deeper hole by trying to moderate the site in woefully biased ways.


I reckon discussion used to be moderated by the elites. Ideas too radical (anti-vaxx, etc.) would not have received widespread coverage, and neither would people who were too extreme. Editors and network owners could apply prejudice and moderation. That is all gone now; instead we have people flocking to extremes in endless feedback loops (some intentional, some psychological, some algorithmic).

I'm sure new mechanisms for self moderation will grow out of this. The generation growing into these platforms as their only known reality will need to take the lead and set up new rules which would allow society to keep existing. This is a work in progress.


I think a lot of people find it comforting. Internet forums admittedly attract bullies in the form of trolls, astroturfers and people who are just plain mean. I agree in principle with a moderation system, but a secret moderation system, with no transparency and no recourse, is susceptible to corruption and abuse, like any secret system.

At any rate, those who find it comforting that 'big brother' is keeping the comments safe for them will stay around, and those with dissenting views will be banned to oblivion and eventually leave, making the community a small selection of people who both agree with the philosophy, and find it uncomfortable to deal with views that challenge their philosophy.

This is very similar to what happened in academia over the past 30 years, which succeeded in pushing out all of the interesting, eccentric and creative people, leaving only those who knew the party line and moderated their speech accordingly. I think this is not healthy for creativity, but then, that's just my opinion, and obviously the forum owner/moderators disagree.


Internet moderation should be about removing trolls. However, it has turned into downvoting and silencing opposing viewpoints and political opinions.

The end result is even more trollish behavior, because many feel they can't even state an opinion without being silenced. Reddit is a good example of this.


I'm afraid that Bluesky's approach to moderation is one of the worst possible ways to do it. When the internet was first being mass-adopted in the 90s, there was optimism that having access to such a huge variety of viewpoints would expand people's horizons. Instead, what seems to have happened is that people could find the group of people that shared their exact viewpoint, and ignore every other viewpoint. Bluesky's "choose your own moderators" approach gives a lot of power to the mods to set those viewpoints and block out anything else.

I'm not sure what the solution is to getting people's viewpoints widened again (it's certainly not Twitter or YouTube's approach of throwing controversial content at you and letting you find the people in the comments who explain why your side of the controversy is right and everyone else's is wrong), but this isn't it. Ultimately I see Bluesky's technical aspects as mostly social signalling to the appropriate groups of Twitter power users rather than a fundamental shift in social media.
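
For what it's worth, a rough sketch of the "choose your own moderators" model being criticised (hypothetical types, not the actual AT Protocol or Bluesky API): third-party labelers tag posts, and each user decides which labelers to trust and which labels to hide, so two users see different feeds built from the same data.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        text: str
        labels: dict = field(default_factory=dict)  # labeler -> set of labels

    @dataclass
    class Prefs:
        labelers: set   # labelers this user subscribes to
        hide: set       # labels this user wants hidden

    def visible(post, prefs):
        # A post is hidden only if a labeler the user trusts applied a
        # label the user chose to hide. The power sits in labeler choice,
        # which is exactly the echo-chamber risk described above.
        return not any(labels & prefs.hide
                       for labeler, labels in post.labels.items()
                       if labeler in prefs.labelers)

    post = Post("some post", {"strict-mods.example": {"politics"}})
    print(visible(post, Prefs({"strict-mods.example"}, {"politics"})))  # False
    print(visible(post, Prefs(set(), {"politics"})))                    # True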
