
I just wish things would be enforced more strictly. If you look at which comments get punished, a lot of clearly political statements are ignored while others are punished, even ones that merely correct false claims. It's the tiring and commonplace pattern of "ban everything, enforce selectively". Same with r/cpp and r/programming, unfortunately.



Is it possible to be "neutral" and yet keep sensible limits for what is allowed and what is not?

Enforcing the rules is not straightforward and different parties tend to view things differently. Somebody sees hate speech, someone else valid criticism.


I acknowledge the link you cited and would like confirmation of the ban. The rest of what you said sounds nice, but in practice it seems geared mostly toward practicality, since otherwise you'd have to make it harder for new users to create accounts. Over the last couple of years, political posts and comments have only increased, which violates other guidelines but is tolerated, presumably because of manpower costs and because the point system ensures overly inflammatory viewpoints are eventually quashed regardless of their content.

That's great but I'm not talking about subreddits banning topics, I'm talking about the enforcement of acceptable views on topics that are already being discussed.

I agree with having safeguards against things like hate-speech, I disagree with how conveniently and broadly that's defined by certain people in order to push their social and political agendas.


Flagging political threads isn't stopping that either

True. It doesn't stop it. I think it does limit it, to some extent.

a more proactive approach would be to better curate the comments feed.

Interesting. Do you think the community would bear more aggressive curation?


I lurk and occasionally interact here, but this made me kinda concerned, so I figured I'd reply. Not sure if this is the appropriate place for feedback (if it isn't, sorry in advance!), but just my 2¢: I personally don't think this is a good idea, both in principle and because of the practical consequences. Politics and software development/hacker culture/startups frequently overlap (user privacy, patents, open data/governments, just to name a few examples). Plus, the moderation team would be forced to come up with an official definition of what is politics and what isn't for the sake of enforcing the rule. Since this is clearly subjective, users will start arguing over the definition, and every time a thread in a "gray area" gets flagged (or doesn't), people will get upset and fight over that instead.

Wouldn't it be much better to simply introduce a flagging system for "toxic" discussions (since clearly they can and do start from other, more mundane subjects, e.g. vim/emacs)? Then whoever doesn't like them can simply filter them out.

I personally try to be as non-toxic as possible in my remarks, since I feel like that prompts much more useful discussion, but I don't mind if discussions become toxic (since the original news might still be relevant/interesting, and it's really hard for the comments to be 100% toxic), and I believe at least some people are like me in that regard. I also understand other people might think differently, so a filter could be a nice compromise?


It isn't to the people whose commentary gets actioned in ways they deem restrictive, though. And while the average person will simply take the moderation action with a grain of salt, I've been in a situation before where I've had to recommend banning, en masse, people who originally had constructive comments that I largely agreed with.

Regulating political speech for whatever reason (including offense), should be considered a legitimate exercise of moderation. It happens on Hacker News all the time - political speech is only considered on topic if and when it contains evidence of a "new or interesting phenomenon," and even then, users will mod down anything they don't like. And given the quality of most political threads here, I don't think many people here would prefer it if that were allowed to run amok.

If I'm running a car forum, I should be able to ban pro-Nazi rhetoric if I want, regardless of whether I find it offensive or merely technically off topic. Not every space on the internet needs to be /pol/.


That's the biggest issue. I was following a certain community recently that had a fairly elaborate code of conduct with very precise language and somehow always managed to manipulate said language so that one of their members could virtually get away with murder, while other people would get moderated/banned/whatever for the slightest mistake.

As long as it's impartial (to the extent humans can be impartial...), there's no real problem.


I think people that work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once it did, the care that people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.

The moderators of major subreddits are extremely authoritarian. There must be a middle ground between free-for-all and thought police.

Gotcha. If they're breaking the site-wide rules, then I guess the ban is warranted. My main concern is the trend of selectively enforcing rules only on those you disagree with while being lenient with those you agree with. That could be seen as a form of "soft" censorship.

I see where you’re trying to go with that, but there is a huge gap between “Here are some suggested guidelines for submissions and comments, please try to follow them.”, and “Under no circumstances can you say anything that contradicts the policies of this particular political organization we agree with, repeated violations will result in de-platforming.”

I find the "both extremes are mad at me so I must be doing something right" defense not really useful, as it really doesn't say much about how evenly you apply moderation. You could be a hair's width away from being a fascist, and some full on fascist would complain that you're too left wing, and both a centrist and a leftist would complain that you're too right wing.

I don't want to complain about people doing it (though I have commented on it, and taken the punishment); I'd just prefer the moderation to be "either there's none, or we keep our hands off".

I understand that it's not an easy position to be in, as you have to keep people happy, and enforcing the rules sometimes doesn't vibe with everyone. But I do believe that Facebook got this right (one of the few things!): if you say that people can't make sexist comments, they also can't make sexist comments about men. Twitter and Reddit got that wrong, and I believe you've got it wrong as well, as you'll ban someone like the person here, but you wouldn't ban someone arguing the opposite.


Open discussion, democracy and transparency are certainly the goal of the internet at large, but individual sites have different incentives. The moderator has curating as one of their responsibilities, and they probably have a general sense of the kind of discussion they'd like to foster.

Moderators and site owners should take all measures at their disposal to shape the site into the overall experience they envision. In the end, whatever mix of authoritarianism and democracy achieves that is fine.

That being said, hellbanning seems cruel. I think all punishment should be explicit, because the trick is to cultivate reasonable users rather than to pick and choose. I'd prefer some sort of comment rate limit as punishment (1 per day, 1 per hour, 1 per quarter hour, etc.), as in my experience most obnoxious comments derive from a mixture of caffeine and passion.


I don't think it's possible without heavy moderation, even more so than on non-political forums.

Okay, to make it clear: I don't think sites should lose Section 230 protections for having a moderation policy. I do think that the excessive manipulation and censorship of dissenting opinions on sites such as Reddit is a bad thing, however.

You sound like you want them to do more moderation?

It amazes me that we've reached the point where there are commenters on Hacker News arguing in favour of online censorship.


I think a lot of people find it comforting. Internet forums admittedly attract bullies in the form of trolls, astroturfers and people who are just plain mean. I agree in principle with a moderation system, but a secret moderation system, with no transparency and no recourse, is susceptible to corruption and abuse, like any secret system.

At any rate, those who find it comforting that 'big brother' is keeping the comments safe for them will stay around, and those with dissenting views will be banned to oblivion and eventually leave, making the community a small selection of people who both agree with the philosophy, and find it uncomfortable to deal with views that challenge their philosophy.

This is very similar to what happened in academia over the past 30 years, which succeeded in pushing out all of the interesting, eccentric and creative people, leaving only those who knew the party line and moderated their speech accordingly. I think this is not healthy for creativity, but then, that's just my opinion, and obviously the forum owners/moderators disagree.


That is where the problems have arisen: allowing politics into discussions. There's a very good reason politics and religion were historically banned in forums and groups. It appears we need to learn the lesson again.

Also, moderators are people, and they're not perfect. Unfortunately, allowing political discussion also brings out political bias in the mods, and it gets out of hand.

Twitter, on the other hand, pretends we don't need moderation, but then arbitrarily bans people because of... something. There are no clear guidelines, no consistent behaviour, and mod-by-ML is useless in the end. It's a perfect example of 'free speech' implemented by someone who has no real idea what that means.

