
I think people who work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once that capture happens, the care that the people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.



This is a great point, because I think it shows that people who are trying to redefine the contours of the standard free speech conversation (which had historically been about freedom from government-imposed limits) are attempting to solve the wrong problem.

The problem is dependence on huge monolithic tech platforms, such that we feel that the societal value of free expression is threatened by the standard forms of moderation that we had accepted as normal for most of the history of the internet.

Now, the ability to moderate carries a significant amount of power in a way that didn't use to be the case. And some people, looking over the available options and the effort required to pursue them, would now prefer to resign themselves to permanent dependence on huge monolithic platforms and just focus on making sure those platforms are not engaging in moderation.

But this is hugely problematic for a number of reasons, not the least of which is that moderation at least sometimes offers a genuine good to users because toxic, manipulative, illegal behaviors that undermine the platform are removed. There's a "societal value" in having flourishing discourse that isn't compromised by the toxicity that would naturally occur without moderation. And there is good reason to not want to make that sacrifice.

Either side offers a distasteful sacrifice. So maybe the answer is to try and bypass the dilemma by making non-centralized communication more accessible, so that people like your wife (or my mom, or my dad's entire side of the family) don't have to make that kind of choice.


Yeah, I don't think of a fair moderation system as censorship. Groups of people should have the freedom to keep a discussion focused on whatever they desire.

Still, it's disappointing just how easy it is for us to be completely blind to things that go against our own preferences, even when they're blindingly obvious to others.


No, it's definitely the other way around. Some people enjoy the power trip of censoring someone they disagree with.

The perfect system of rules for moderation would be one you'd comfortably hand over to your ideological adversaries to enforce. If the outcome is still unfair, there's something wrong with the rules.

Something like "I cut, you choose".
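For what it's worth, the incentive structure behind "I cut, you choose" is easy to sketch in a few lines of Python. This is only a toy illustration, not anything from the thread - the items and the length-based value function are made-up assumptions. The point it shows is that the divider is forced to split fairly, because the adversary gets to pick:

    # Toy sketch of "I cut, you choose" (all names and values are illustrative).
    def cut(items, my_value):
        """The cutter splits items into two piles they value equally,
        because the chooser will take whichever pile is better."""
        piles = ([], [])
        # Greedy balancing: put each item on the currently lighter pile.
        for item in sorted(items, key=my_value, reverse=True):
            lighter = min(piles, key=lambda p: sum(my_value(i) for i in p))
            lighter.append(item)
        return piles

    def choose(piles, my_value):
        """The chooser simply takes the pile they value more."""
        return max(piles, key=lambda p: sum(my_value(i) for i in p))

    duties = ["spam rule", "harassment rule", "off-topic rule", "appeals process"]
    piles = cut(duties, my_value=len)    # pretend value = string length
    mine = choose(piles, my_value=len)   # chooser takes the better pile

Applied to moderation, the analogy would be: one side drafts the rules knowing the other side gets to enforce them.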


Not to diminish the technical accomplishment here, but that sounds like the very definition of tyranny of the majority.

Not to say that our current systems of manual moderation are perfect, but they're at least a step removed from mob rule, with recourse usually available if a video is wrongly flagged for simply expressing an unpopular opinion.

If a video expresses unpopular viewpoints on DTube, it would seem impossible for it to ever get listed again without waiting for the tides of public opinion to turn - which could take much longer if these kinds of moderation systems become prevalent and let the majority silence the voices of the minority so directly.

Decentralized moderation is a hard problem, and I don't claim to have a good solution myself, but I'd rather take centralized services over decentralized ones with moderation implemented as rule by majority with no recourse. That seems like a dangerously slippery slope towards dystopia.


One man's nastiness is another's joke, or opinion. What people don't understand is that if you are ok with the idea of so-called "moderation" that leans toward the restrictive side, some other group of moderators will come along later and apply the same rules from the other side, and your side's ideas won't see the light of day. It's a very simple concept, and it's the reason freedom of expression is the First Amendment - people have just forgotten that. The problem, imo, is that it doesn't translate well to online communities that are open to anyone. I'm sure people have thought about this endlessly before, but I don't know if there's been any progress.

That seems to be the gift and curse of all decentralized systems. There really isn't a clearly defined red line between freedom of speech and moderation and I don't think there will ever be. Is it even possible to have a protocol or app mandate what is permissible content and what is not? And that is made even worse (imo) when humans come into play, as everyone's opinions and biases also affect the decision.

That's a salient point. Biased, uneven, nontransparent moderation was always happening with weak justification; it was just done by unaccountable people and propagandized as righteous.

Remember people getting banned for saying "learn to code", certain news links being disallowed even in DMs, undesirables being shadowbanned with no transparency.


That's an interesting dilemma between free speech and moderation.

I'd also say the current system is better, but losing the dislike button and pushing for positive sentiment can lead to fake positivity, which we should be wary of.

Reminds me of the anime "Avatar", where there's an episode featuring a character with an outrageous fake smile who turned out to be sort of a political oppressor.


People have been trying to automate moderation and make it perfectly fair for a long time now. It's basically impossible. If you do it at all, you have to use people, which makes it suck.

If you don't do any moderation, you get a sewer, and then you get shut down. You can try to exclude only illegal stuff and stand on free speech, but in terms of content you still have a sewer. To be fair, most of the stuff that moves through sewers is not deadly poison, just crap and water. Some people like sewers, so sewer cities continue to be built for people to live in, but most people find them unattractive, so they never take over the world.

To make something more attractive out of all the crap people emit, you need a modicum of moderation, which always comes from other people. Which means it is essentially unfair, capricious, and creates incentives for endless mediocrity and pandering. But even if we accept that rather than going back to the sewers, one man's perfect moderator is the other man's little Hitler.

So maybe we should just crowdsource moderation! Brilliant! But "democratic" crowd moderation is essentially a way of diffusing responsibility for censorship. The censorship of whoever happens to brigade your posts (maybe with sockpuppets, or external links) is not any more free, or pleasant, than any other censorship.

Meanwhile, upvoting and downvoting takes on a massive life of its own and creates arenas for endless meta-gaming, like subcommunities trying to push each other out. This takes over the site. If you manage the site, have fun managing all that effectively, and try not to get sucked in.

If what you want is fairness, all you can do is publish explicit policy (NOT legal ToS) so people know and agree to what they're getting into - then enforce that policy universally to the letter. That will be fair only in the limited sense that everyone agrees to it the same, like the rules of chess. But human-based moderation will never be fair.


I'd argue that this has been creeping for a while. At first it's used to ban blatantly illegal posts and obvious spam. However the power to moderate is easily abused. Big Tech is now effectively punishing wrong-think, and it doesn't even have to happen on the platform in question.

I've had similar feelings before. But what if they've implemented the only moderation policy that anyone in the history of the internet has discovered to actually work well for fostering discussion among capable people about interesting topics, consistently? The ruleset has evolved slowly over time; the rules weren't imposed arbitrarily. Each rule was added to counteract a specific class of problems plaguing the site. For example, remember the painful discussion about the Airbnb fiasco a few years ago? It stayed on the site for like two days because an angry mob upvoted it so much. (I was a part of it, before later realizing that the story was designed specifically to incite an angry mob.) Etc. There are reasons to penalize certain kinds of content, and sorting it out is a very difficult process. But at least they're trying.

I'm sillysaurus2 because I broke a rule on purpose on my old sillysaurus account. I asked if they'd unban my sillysaurus account, and they were nice enough to unban it. I stuck with sillysaurus2 rather than switching back -- no real reason, just felt like it -- but the point is that they're lenient and they understand that mistakes happen, on both sides of the fence. Contrast that with the policy of, say, /r/AskHistorians: I'm permabanned there for a stupid mistake, and their policy is to give no second chances under any circumstances. So at least these guys are reasonable, yeah?

Yes, it's worrisome that they hold the power of deciding which discussions are penalized. But wouldn't you rather they hold it than some other group? They genuinely believe in merit and good intentions, rather than just using those things as a smoke screen for greed, as some other groups do.

It's not an ideal situation, but it's like democracy: it's the best anyone's thought of so far. But these are just my thoughts, and I'm really interested in hearing yours.


Not at all: we want to have a unified discourse, but we have vastly different values and therefore different moderation preferences. The only solution to this is making moderation user-based, but with better UX than just blocking everyone you don't like. User initiatives formed "block lists" on Twitter, which are a good model for this: decentralized decisions get aggregated into a single filter, which the user then simply applies if they choose to.
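A minimal sketch of that aggregation idea, in case it isn't clear. Everything here is an illustrative assumption (list contents, post shape, function names) - the real block-list tools on Twitter differed in the details:

    # Toy sketch: user-side moderation via aggregated block lists.
    def aggregate(block_lists):
        """Merge independently curated block lists into one filter set."""
        merged = set()
        for bl in block_lists:
            merged |= bl
        return merged

    def visible(posts, blocked):
        """Apply the filter client-side: hide posts from blocked authors.
        Nothing is deleted server-side; each user opts in to their own filter."""
        return [p for p in posts if p["author"] not in blocked]

    # A user subscribes to whichever lists match their own values.
    lists = [{"spammer42"}, {"troll_9", "spammer42"}]
    feed = [{"author": "alice", "text": "hi"},
            {"author": "troll_9", "text": "bait"}]
    print(visible(feed, aggregate(lists)))   # only alice's post remains

The design point is that curation stays decentralized (anyone can publish a list) while application stays individual (nobody's list is imposed on anyone else).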

It is a very public example of the result of limited moderation.

Free speech advocates pushing for absolutism need a counterargument if they're going to go down that road.

In my experience, aggressive moderation, whether by the community or by admins, is the only way to keep a public community from turning into a cesspool, so if you're arguing for no moderation you better have a solution for what that actually entails.


I just wish things would be enforced more strictly. If you look at the comments being punished, a lot of clearly political statements get ignored while others are penalized - even ones that merely correct false claims. It's the tiring and commonplace pattern of "ban everything, enforce selectively". Same with r/cpp and r/programming, unfortunately.

Right. If I’m friends with a bunch of bigots and I delegate my moderation function to them, I would expect the system to work as you described. This is community moderation.

Or are you suggesting that an optimal social network should jam both gay marriage and beheading videos down my throat?

I’m honestly curious what you’re getting at here. Would you mind making a much more substantive post?


The fact that a discussion topic has centralized moderation is weird to me. Why should a group of biased, imperfect individuals control a topic like r/news? Seems like a ripe area for developing automated moderation based on the distinct preferences of each user.
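To make that concrete, here is one hedged reading of the idea: rather than a central mod team removing posts for everyone, each user's stated preferences score and filter a shared feed. The tags, weights, and threshold below are all illustrative assumptions, not a description of any real system:

    # Toy sketch: per-user preference filtering instead of central removal.
    def score(post_tags, prefs):
        """Sum this user's weights over the post's tags."""
        return sum(prefs.get(tag, 0.0) for tag in post_tags)

    def personal_feed(posts, prefs, threshold=0.0):
        """Nothing is deleted globally; each user just stops seeing
        whatever falls below their own threshold."""
        return [p for p in posts if score(p["tags"], prefs) >= threshold]

    posts = [
        {"title": "Local election results", "tags": ["politics"]},
        {"title": "New kernel release", "tags": ["tech"]},
    ]
    prefs = {"politics": -1.0, "tech": 1.0}   # one user's preferences
    print(personal_feed(posts, prefs))        # only the tech post

The open problems are the usual ones: where the tags come from, and whether self-selected filters just deepen the bubbles discussed elsewhere in this thread.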

I know the point of the article. I believe the current semi-centralized moderation system is vastly preferable to what they propose, because I feel making it easier to block swathes of users and homeservers would have a very negative effect on free discussion.

We've already seen what happens with this style of moderation through things like Twitter blacklists, and these changes would basically integrate such things into the protocol.

If blacklists are commonly accepted and used, but the people maintaining them are biased or incompetent, there stands a chance that your 'own server with whatever rules you want' will not be able to communicate with the vast majority of people. You can see this happening with email now, where large providers like Gmail broadly block smaller ones.


Whoa buddy, that's some pretty heavy socialism. Anyone can make a forum. Mandating moderation rules creates a restrictive barrier to entry and actually harms free speech that falls outside the acceptable range.

I'm afraid that Bluesky's approach to moderation is one of the worst possible ways to do it. When the internet was first being mass-adopted in the 90s, there was optimism that having access to such a huge variety of viewpoints would expand people's horizons. Instead, what seems to have happened is that people could find the group that shared their exact viewpoint and ignore every other one. Bluesky's "choose your own moderators" approach gives a lot of power to the mods to set those viewpoints and block out everything else.

I'm not sure what the solution is to getting people's viewpoints widened again (it's certainly not Twitter or YouTube's approach of throwing controversial content at you and letting you find the people in the comments who explain why your side of the controversy is right and everyone else's is wrong), but this isn't it. Ultimately I see Bluesky's technical aspects as mostly social signalling to the appropriate groups of Twitter power users rather than a fundamental shift in social media.