The problem with topic bans is that you can sabotage topics you don't like in order to silence them. E.g. Rust evangelists could have been silenced quite easily by flame-baiting their posts.
Banning topics frequently cuts losses on trends that are dying anyway. Nobody wants to see ghost-town subreddits. It's good for the website's health to keep the place looking alive and clean.
Separating the effectiveness of bans from these trendy moments is difficult. Applying a ban on political discussions around 2015-2016 would likely have maintained the toxicity anyway.
Sometimes. As is often the case, there is evidence for both (banning/limiting, chilling effects, power pressure and scolding, etc.), depending on the community.
In your second link, the topics were literally banned from the platform, rendering the question moot in that case.
The flaw with that is that moderators and companies have to make a determination about speech they disagree with. Not all sites ban people in the same way for violent threats, and likewise sites view inflammatory or derogatory speech in different ways.
At some point in the equation a value judgment will be made about what breaks the rules. People banned by that judgment will cite censorship and demand to be heard (see: the various subreddits banned by Reddit), while people wanting that content removed will celebrate. Making the argument purely about censorship strips out the nuance and the reasoning for why someone was banned. That is why, when people talk about conservative voices being banned by Twitter, they often ignore the damage and harm that Alex Jones, for example, has caused to families affected by school shootings, or the various conspiracies he peddles.
Right, so it's a bit more coarse than banning or allowing users, but if you selectively ban or allow topics that certain kinds of users want to talk about (or don't want talked about), you can definitely shape your user base.
Of course, having a topic is all good. But what some channel owners did was use bans to prevent people from talking on the channel, block discussion, and forcefully make people move over. That, in my book, is not ok.
The censorship isn't indiscriminate; it actually discriminates against and suppresses users who trend towards flamebait. Without careful moderation, this site would quickly turn into a worse version of Reddit.
When one slightly orange-tinted guy can effectively ban discussion of any topic on all platforms just by vaguely mentioning it, the problem is not that guy; it is everybody else.
That's great but I'm not talking about subreddits banning topics, I'm talking about the enforcement of acceptable views on topics that are already being discussed.
I agree with having safeguards against things like hate speech; I disagree with how conveniently and broadly that's defined by certain people in order to push their social and political agendas.
I think the alternative is not a hellscape. The alternative is clear, specific, transparent, rule-based moderation. In particular, I think it's best to control the "temperature" of the discussion (even if subjectively) but keep the topics of discussion completely open. Why should people not be allowed to discuss certain things if those things gain enough traction? Instead, you could just control the temperature: banning the use of bad words, personal attacks, and so on.
There's nothing wrong with a sub banning/censoring people. That's a critical part of having trust value within a social network, and if the moderators of a popular sub don't think a poster is meeting the bar to post, it is good for them to ban that poster and become higher trust. If they go too far, though, there won't be any content on the sub at all.
For instance, /r/AskHistorians has a very high bar for posting and threads are commonly removed. As a result (and because they still have posters who meet that bar), the sub is considered very high trust. In some other subs anybody can say anything, and I go so far as to filter the entire sub out so I never see it again.
I agree that a ban on the posts isn't the right way. Moderation is the right way forward, but we need much more active anti-flamebait and anti-culture-war moderation.
Hmmm, this is an interesting take. I may apply it to one of my communities. I've been playing with the idea of banning politics entirely, but that's what most people want to talk about.
Did you notice any improvement in the level of discussion? Or did they just react differently to each other's posts?
Also, do you just censor the keyword or remove the post entirely?
It should be noted that this is an explicit strategy by proponents of the status quo, to make any discussion of the topic so toxic that the topic itself becomes banned.
It's possible to moderate a discussion forum to avoid this, but it requires a high level of involvement from the moderators and a willingness to aggressively recognize and prune divisive commentary regardless of the specific tone or content.
The problem with mass banning driven by whoever is loudest is that it very, very quickly devolves into witch hunts. Nobody wins.
I used to run my own BBS back in the dialup days. Even then, the problems would be recognisable to a modern forum mod.
The only workable solutions I've found are that the more focused a board is on a select number of topics, the easier it is to moderate effectively and humanely, and that it is impossible to rules-lawyer your way to success. In the end, you have to spend significant brain-time evaluating behavior on a case-by-case basis, and be willing to ban, listen, and revise in equal measure. It's hard work, and generally unappreciated.
Don't take me too literally. I'm not saying Dan keeps a list of banned topics, and checks posts one by one before approval.
However, there are groups who monitor this site for keywords and flag posts that the community would otherwise be interested in discussing. They have a number of ways to do this, ranging from subtle to heavy-handed. It's not hard to accomplish, technically speaking. You've seen what LLMs can do, and those tools have been available for years to some people. The benefits of doing so far outweigh the negligible risk for the groups involved.
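To illustrate how low the technical bar is, a keyword watch takes only a few lines. This is a purely hypothetical sketch: it assumes the public HN Algolia search API as the data source, and the watch list and the flag_post stub are made up; a real operation could just as easily feed matches to an LLM classifier or a group chat instead of doing exact string matching.

    import requests  # assumes the public HN Algolia search API as the data source

    WATCHED_KEYWORDS = ["example-topic", "another-topic"]  # hypothetical watch list

    def fetch_recent_stories():
        # Pull the newest stories from Algolia's HN search endpoint.
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={"tags": "story", "hitsPerPage": 50},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["hits"]

    def flag_post(story):
        # Hypothetical stub: a real group might notify a chat channel or
        # queue the post for coordinated flagging. Here it only prints.
        print("match:", story["objectID"], story.get("title"))

    for story in fetch_recent_stories():
        title = (story.get("title") or "").lower()
        if any(kw in title for kw in WATCHED_KEYWORDS):
            flag_post(story)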
The meta discussion around those topics is closed. Even posts asking for a discussion around them get removed, despite their immediate relevance to the tech and entrepreneur fields.
A lot of people only seem to be considering this in the context of individual bans. The real concern is the possibility of blanket bans on everyone discussing a specific topic or view. Right now, the words "ceasefire" and "genocide" come to mind.
The crux of the problem is that limiting moderation to banning only what is illegal, and allowing everything that is legal, still permits content many people find objectionable: spam, trolling, racist and hateful speech, and off-topic content.
Not every online community wants to be a 4chan or Voat, which is where that sort of hands-off moderation inevitably leads.
There are valid reasons for limiting which topics people can discuss in a space, in my opinion. If I disallow people from discussing politics on my cooking forum, that's not censorship. It's keeping the discussion focused on the actual topic.
HN seems to take a middle-ground position of allowing politics when it is relevant to the topic, but not otherwise. That is also what I promote on all platforms I moderate.
Opinions like what? The only opinions people get banned for are ones which seem openly racist, homophobic, etc. Nobody gets banned for having or encouraging alternative opinions about, say, which Java framework is best. Not even for more controversial arguments like whether God exists (at least as far as I’m aware). Therefore the only ‘rational debate’ that will take place is between what most people would call racists, homophobes, etc. Nobody else is going to want to post in, or even look at, such an environment, so it rapidly turns into a completely worthless bigoted echo chamber where everyone’s patting themselves on the back for having such enlightened alternative opinions.