Reddit's top user leaves platform after harassment (www.dailydot.com)
291 points by admiralspoo on 2020-05-21 | 482 comments




I was outraged when I first saw that list and how a few mods control everything, but this totally sheds some new light on the situation. The internet can be really shitty sometimes.

Vaguely reminiscent of MrBabyMan and Digg.

Relationship Advice lead-ish mod here. We were one of the subs "called out" as being controlled by Gallowboob with the original post and subsequent reposts.

We (buu700 and I) added Gallowboob some years back, largely to have insight into how other teams modded their subs at a time when we were growing quickly and needed to know what to do to keep up. While useful in meeting that goal, we quickly realized it was pretty meaningless for /r/relationship_advice because of the nature of the questions we were getting. The only sub like ours is /r/relationships, and we differentiated ourselves from each other by having different rules and content creation controls.

As best as we know, Gallowboob enjoys the place and is pretty decent at modding, so we're pretty happy to have him. Most mods burn out quickly because of how dark the questions get as well as because of how meaninglessly violent people become when their posts are modded.

---

Separate comment on an item in the story:

> Allam believes his time on the site has made him a more “paranoid” person and led him to develop “borderline PTSD.”

Moderating Reddit's larger subreddits is absolutely capable of resulting in PTSD-like symptoms. I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after. And Reddit has absolutely no support system for things like this.


> And Reddit has absolutely no support system for things like this.

My immediate thinking involves referring affected parties to professionals and specialists who deal with this sort of thing, but in your opinion, what should Reddit (the company) be doing?


If anything, Reddit (the company) should be working to limit the influence of a single "power" user like this particular moderator, not make it easier for him to control the site.

That's fair; I was more speaking to the question of moderators affected by PTSD. I don't necessarily disagree with what you've said here, but it doesn't quite answer my question. Or are you suggesting that Reddit admins limiting mods' ability to be mods is a solution to this particular problem?

I guess I'm having some trouble unpacking your suggestion; can you help me gain some clarity on what you're saying?


Paying for the counselling of the moderators.

> Most mods burn out quickly because of how dark the questions get

Care to give us some examples?



You don't know if that's what he's referring to.

In fact, I'd expect that the best examples got modded out.


The "?b?e?s?t?"? most representative examples often involve egregious cases of abuse against parties either without consent or without a capacity to consent.

> how meaninglessly violent people become when their posts are modded

Maybe if mods were more transparent across all the subs.

Most people get angry when someone removes their post, only to see it reposted and approved hours or even minutes later (Gallowboob is infamous for that, but regular users do it too).


> Maybe if mods were more transparent across all the subs.

Don't get me started.

Particularly "muting" posts with auto-moderator (silently hiding them for others without notification/warning/explanation). It was originally created for spam control but is regularly abused for generic moderation. It needs more controls placed on it.

To give a recent example, I wrote a long reply in /r/fitness's daily discussion, but it was muted because it contained an offhand remark about COVID-19 (vis-a-vis getting hold of fitness equipment right now). Why are they muting all comments that contain "COVID"? Who even knows, but the /way/ it was done was pretty irksome and resulted in wasted effort on my part for a comment that violated zero published rules or etiquette.

/r/politics has a huge dictionary of phrases and idioms that result in auto muting, none of which are defined in their rules.
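To make the mechanism concrete, the behavior I'm complaining about boils down to something like this (a rough Python sketch; real AutoModerator rules are YAML config, and this phrase list is invented):

  # Hypothetical silent keyword mute. The comment stays visible to its
  # author, so they get no notification, warning, or explanation.
  MUTED_PHRASES = {"covid", "some idiom"}  # invented list, not r/politics' real one

  def is_silently_removed(comment_body: str) -> bool:
      body = comment_body.lower()
      return any(phrase in body for phrase in MUTED_PHRASES)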


> /r/politics has a huge dictionary of phrases and idioms that result in auto muting, none of which are defined in their rules.

This is true here on HN too. One such word that I've seen cause comments to get auto-killed is "m*sturbation" (censored for obvious reasons), and I am sure there are others.


Plenty of people complain about overmoderation here on HN as well... I am one of those people

I firmly believe that the only way an online forum, particularly an anonymous one with free signups, can remain relatively civil is very heavy-handed moderation. This is not a free speech zone (it's a privately owned space), and the only way to prevent bad actors is to be very liberal in applying heavy moderation.

If a few "medium" actors get banned by accident, that's the price to pay for the rest of us getting to enjoy discussing tech without dealing with a toxic cesspool.


I strongly disagree. Bad actors thrive just as well if not better in places with moderation and even sign up fees.

Metafilter is a forum which used to be very diverse in opinion and is now basically captured by a small vocal minority. How did they do this? The minority produced a large amount of content for the forum and was very active. 95% of that content was high quality and on topic, but the remainder was biased and very opinionated. As a result of their interaction with the site, they were closer to the mods, regarded more highly, and given the benefit of the doubt in "bad actor" conversations.

Today Metafilter is a dying community of territorial users who eviscerate anyone who doesn't know how to play the game. The minority won and created their little clubhouse corner of the internet. Metafilter as a forum, though, is a shell of its former self. There are fundraisers to keep the site alive, and formerly paid mods and devs are now either retired or do it for free. Not only did it drive away old and new users, but the minority also seems to have become bored without constant drama and moved on (as seen by new posts and commenting activity falling off a cliff).

I know the common refrain on HN and in general is "on a private platform there is no such thing as free speech" but be careful. Don't let blatantly toxic users run your platform but beware of users who are quick to call everything toxic.


Every community has its rise and fall, and there is no one solution for all of them, but I still think without heavy moderation that fall comes much quicker and more violently.

Yes, moderation might be necessary, but what the anecdote illustrates is that simply having moderation will not be enough.

So the Linus Torvalds quote about compiler masturbation is a no-go then.

(posting from a throwaway to test your theory)


https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Though they very likely use dictionaries to find comments more frequently associated with undesired discussions since that should be more effective.


Here's the list:

  masturb circlejerk faggot nigger feminazi shitstain lmgtfy.com #maga
...plus a bunch of spam sites. Why those words and not others? Shitstained if I know. These things tend to get added when somebody does something, and then stay there.

> Maybe if mods were more transparent across all the subs.

This is a common refrain I've heard going back well before Reddit ever existed.

I helped moderate the Vault Network boards for a while back when they were a thing.

It's hard to overstate how amazingly... disturbed and vitriolic some of the individuals of a community can be. And dishonest. And spiteful.

No amount of transparency helps when all it takes to refute any evidence is to label the mods as lying or "corrupt".

Let's say a user says something bad and you mod the post and give them a warning. You tell them exactly what the offense was and why it was moderated. They might even act civil in response.

Then later you see them talk about "X mod totally censored their post for no reason and refuses to explain why. X and the rest of the mods are totally corrupt".

So, what do you do? Post a screenshot of the private messages exchanged (something, for instance, we weren't supposed to do)? Take a screenshot of a browser window with the "mod view" (aka uncensored) of the original post? (Something said user will point out can be easily edited because, well, it's a browser showing a website, not exactly hard to alter.)

And that happens every day, all the time. And no one is being paid, and there is a constant stream of other stuff you are trying to stay on top of.

And sure, some of the mods are shitty and "corrupt" in a sense. But I would say it's as much a reflection of the community itself as anything.


While I am sure that occurs, I have been personally banned from several subreddits not for rule violations or violent comments, but because the moderator simply disagreed with my comment, or because I was harsh in my commentary about the product the subreddit was about. For example, a certain web browser subreddit banned me because I talked badly about a policy of that web browser.

I have much more experience with corrupt and biased mods than with anything else.


Just to provide the other side of this.

Imagine seeing a post exactly like yours, literally word for word, with the same civil tone and all.

Except the user in question had been banned for posting a tubgirl image (it's gross) by a moderator who happened to be female, and had responded in private messages on a clear alt account (minutes after the ban, with the same IP address) calling that moderator a slut who deserved to be raped and killed.

And that style of interaction being relatively common.


Imagine treating a civil poster as if they were a jerk, just because a different civil poster once turned out to be a jerk.

It's not about treating people like a jerk; it's just that you learn quickly to never trust a person at face value when they say "I didn't do anything wrong", because that's usually the first and most common phrase a person who does something wrong uses.

I was merely pointing out that the "public" persona a user portrays doesn't have to match the truth.


Which can be equally applied to moderation staff: mods that say "we are not biased, and are always fair, only banning people that break the rules" are just as likely to be bad actors as the regular users in your narrative.

Definitely. I mean, in the vast majority of cases, the moderators are just "regular users" who are given power over other users (or took it, in the case of creating the subreddit). Even if the moderators are employees, they are still fallible, with personal motives, just like the people they moderate.

In the end, that's why it's such a difficult problem. You take the normal conflict that occurs in communities, add in the potential for malicious actors on both sides, and it's no surprise that the normal conflicts can spiral out of control. Especially in the virtual and relatively anonymous setting of online communities.

And from a person on the outside looking in, it can be impossible to actually know what the truth is.


Not sure how that invalidates the equally if not more common occurrence of banning based on philosophical, political or other disagreements with the moderation staff

That sounds like a job for public automated mod logs.

Naturally people will still complain, since it's impossible to fix people, but I'd imagine having an "authoritative" list of every moderator action and the accompanying explanation would help stave off the corruption/lies accusations. That, combined with a reputation of transparency.
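To sketch what one entry in such a log might look like (hypothetical field names; this is not an existing Reddit API):

  # One public, automated mod-log entry (invented shape).
  mod_log_entry = {
      "timestamp": "2020-05-21T14:01:14Z",
      "action": "remove_comment",
      "target": "t1_abc123",      # item ID only; content can be withheld
      "moderator": "some_mod",    # or "automoderator"
      "rule": "Rule 2: no personal attacks",
      "explanation": "free-text reason shown publicly",
  }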


> That sounds like a job for public automated mod logs.

I mean, I think that would be an interesting experiment, as I can't think of a large community that provides such data.

But again, if the underlying assumption is "the mods are corrupt", then that accusation can easily be transferred to whatever logs are provided as well.


The Something Awful forums, which have been around for a very long time and have been relatively successful in maintaining a decently sized community, have a public moderation log[0].

Something Awful also has a number of other moderation features that I think more sites should emulate:

1. Temporary bans from posting ("probation") of variable length, to allow for punishment short of a full ban. Usually 6 hours to a couple days, depending on the offense, occasionally a week or longer.

2. A nominal registration fee ($10, one time) to register an account, to cut down on bad actors just making new accounts.

3. Normal bans for being a dick are reversible by paying $10 (same cost as registering a new account), unless you get "permabanned" for either repeated bad behavior or posting illegal content. If you get permabanned, any new accounts you create get permabanned as well (assuming the admins can find them, which they do remarkably effectively using IP and I think payment info).

That last point sounds like it incentivizes the mods to ban users, so that the forums get more money. But it doesn't seem to actually have that effect, possibly because most of the mods are not paid.

There have also been a few interesting experiments in moderation that were less useful, but are definitely entertaining, such as the ability to limit an account to a single subforum (usually the one for posting deals, or one of the ones for shitposting). It's also possible to view a user's "rap sheet" of moderation actions from any of their posts.

[0] https://forums.somethingawful.com/banlist.php


As someone who used to moderate a pretty large gaming community back in the day I think there are two things to keep in mind.

The first is that transparency doesn't "fix" hostile community members. What it does is justify the actions of those in power to the rest of the community. Without this the community will very quickly lose faith and the situation just devolves into factions and hostility.

This is very indicative of your last line really. It's like any form of government. When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder. This usually leads to those in power entrenching themselves and enacting even more draconian measures. It's a vicious circle. Perhaps all governance is subject to the laws of entropy and will eventually fail, but I believe transparency, consistency, and effective communication are the only methods to slow such an eventuality.

The second point is one of size. As the population of a community grows and the proportion of those who wield power shrinks you also end up with a lot of discontent. Many community members will no longer feel they can effectuate change as they're a small voice amongst many without any real connection to the small group in power.

Also note that my experiences are in relation to fairly tight knit communities. Reddit is a little different in the sense that plenty of subreddits are far less communal. Effective communication is very difficult when the community is mostly transitory right up to the point where mob mentality takes over.


Yeah, you definitely raise some good points here.

> When the people lose faith in those in power due to perceived capricious or opaque behavior there tends to be a lot of civil disorder.

I would be really interested in figuring out how to combat "perceived opaque behavior" especially when the source of the complaints is more artificial, where the complaints are being used as a tool for manipulating the community rather than being based in an actual grievance.

That's the reality you run into sometimes, like that oft-used quote from the Batman movies "Some men just want to watch the world burn."


> how to combat "perceived opaque behavior"

As the GP said, transparency for the punished is not sufficient. Governance must be transparent to the public; otherwise it will seed distrust. The behavior you described isn't "perceived" to be opaque, it is opaque.


Mods apparently inflict their PTSD from bad posters on good posters, given the common ban-first, ask-questions-later approach.

I really don’t care how shitty another poster was to you. I only care how shitty you are treating me.


I've been thinking about what I would do if I were a forum moderator, and I came to the conclusion that I would have to implement a minutes-keeping rule like they have in actual government. In the UK, for example, you have the 30-year rule, which says that all the minutes of cabinet meetings get released after 30 years.

Maybe you could keep complete raw backups of the "minutes" (or mod logs in this case). So you would take a backup every 24 hours, encrypt it, and upload it to an independent write-only server (which proves you couldn't tamper with it). And after e.g. 1 year (let's reduce it from 30 years), you would release the encryption key to that mod log backup you made. This ensures transparency and trust.
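A minimal Python sketch of that scheme, assuming the `cryptography` package (the upload to the write-only server is left abstract):

  from cryptography.fernet import Fernet
  import hashlib
  import json

  def seal_daily_mod_log(entries):
      # Encrypt one day's mod log for timed release: publish the
      # ciphertext (and its hash) immediately, keep the key private,
      # and release the key after the embargo so anyone can decrypt
      # and verify nothing was tampered with.
      key = Fernet.generate_key()  # secret until the release date
      ciphertext = Fernet(key).encrypt(json.dumps(entries).encode())
      digest = hashlib.sha256(ciphertext).hexdigest()  # publish now
      return key, ciphertext, digest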


> Maybe if mods were more transparent across all the subs.

We're (r/relationship_advice) rarely transparent with removal reasons. Our templatized removal reasons generally look something like this:

> u/[user], please message the mods:

> 1. to find out why this post was removed, and

> 2. prior to posting any updates.

> Thanks.

or

> User was banned for this [submission/comment].

The reason is that we have a high population of people who:

1. post too much information and expose themselves to doxxing risks, and

2. post fantasies that never happened.

So in order to protect the people who inadvertently post too much information, we tend to remove these posts using the same generic removal template. However, if people know that the post was pulled for one of these two reasons, the submitter may still end up on the receiving end of harassment as a result, meaning we have to fuzz the possibility of the removal being for one of these two reasons by much more broadly withholding removal reasons.

This is specific to r/relationship_advice. Other subreddits have other procedures.


>> how meaninglessly violent people become when their posts are modded

> Maybe if mods were more transparent across all the subs.

Does the latter really justify the former?

If you disagree with a moderation decision, take it up with them politely. If you consistently disagree, maybe this community just isn't for you!

Mods are just people donating their time. Even if they're inconsistent or "corrupt", there's no reason you should respond in any way that can be described as "violent".


> If you consistently disagree, maybe this community just isn't for you!

Ah the classic "don't let others make changes but outcast them" approach.


If you are in conflict with enough other members of a community, you are de facto outcast anyway. And if enough other members agree with you, just break off and form your own community!

We’re talking about online communities — low stakes to join, low stakes to leave. There are exactly zero reasons that anyone should resort to doxxing/threats/etc in response to moderation decisions.


What if you take it up with them politely and this is their response? https://imgur.com/a/jhmGXzJ

Find or create another community.

There are basically two scenarios here. One, a lot of people agree with you — in which case you should be able to appeal the decision or splinter off successfully. Two, most people disagree with you — in which case the mod is probably right, or at the very least you're simply not welcome in the community.

Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.


The problem with the first suggestion is the fact that the moderator in the above screenshot mods 1000+ subreddits. You can be pretty sure they mod the other subreddits too with the same mindset, how many of them do you recreate? This is the whole problem that has caused the drama in the past few days.

> Let's also not forget that we're talking about violence in response to moderation decisions. So even if that's their response, it's still not okay to e.g. threaten them.

Agreed.


As a mod - how much time do you commit to this? Are you just sitting there reading comments all day? I'm fascinated by this whole mods thing especially since it's volunteer (!!) work - which strikes me as insane.

As someone who was mentally and physically abused as a child, I'm so sick of people on the internet saying they have PTSD from mean comments that they can ignore and escape from.

You cannot escape abuse from someone that is supposed to be protecting you in childhood. You cannot escape from the trauma of war when you sign up or are conscripted to be a soldier. You cannot escape the horror of death being an EMT. Seeing actual horrific death, watching ones you love die, or having someone that is supposed to care for you destroy you causes trauma.

A stranger saying, "you're a piece of shit," is not fucking traumatic to anyone but an immature child or a weakling.


Unfortunately (or fortunately, as the case may be) PTSD and trauma in general are not yours to define or gatekeep.

This is a little more serious than "you're a piece of shit", no?

> I've been dealing with some on and off after a post some years ago where somebody who requested advice followed through with the best course of action only to find that his wife killed their kids soon after.


Calling someone with PTSD “weak” is exactly how society brushed aside soldiers and child abuse survivors with PTSD for decades.

> I'm so sick of people on the internet saying they have PTSD

> Calling someone with PTSD “weak”

That's a non sequitur derail. The post you are responding to is about faux self-diagnosis.


> That's a non sequitur derail. The post you are responding to is about faux self-diagnosis.

"weakling" is right there at the end and the GP is going on about a strawman not mentioned by the OP anyways, as several of the child comments point out.


> "weakling" is right there at the end Again, this is not related to the assertion.

Name calling (as a proxy for illegitimacy) is all the same for the intent of the point. Claiming damage without evidence is not compelling, even if it makes for tasty fluff content.


> A stranger saying, "you're a piece of shit," is not fucking traumatic to anyone but an immature child or a weakling.

Think more everything from graphic descriptions of rape to child porn. Reddit generally lets comments of the "piece of shit" nature stand.


Have an upvote.

We're conditioning a culture to be hypersensitive, and at the same time, for some of them, hyperaggressive. Lack of experience has always clouded the vision of humans, but many find themselves, willingly or not, living within silos and echo chambers, which reinforce their beliefs and behaviors. What they consider trauma does not begin to approach yours.

In other times, their vocal ignorant statements would be squashed immediately upon utterance. In these times, they receive reinforcement, from some.


I've got two thoughts after reading this. One is that trauma isn't a binary thing, and it can accumulate over time. Some of these mods are dealing with serious events, like the user who was given advice which led to his wife killing their children. Others got doxxed and received death threats.

Two, if someone feels upset from getting messages like "you're a piece of shit", I wouldn't say they're an immature child or a weakling. I'd say they could be sensitive, perhaps also very considerate, and in that moment they may worry that another human being is upset and they're responsible for it. They might hurt from the pressure to moderate something that's important to them. Maybe they struggle with social interaction, even if it's online, and experiences like this can be very hurtful.

I don't care why something hurts someone, I care more that they are hurt. Chronic exposure to these things, even if they do seem benign or minor from the outside, absolutely can lead to trauma.

Trauma is a result of exposure to acute or chronic damage to your physical or mental well being. This can occur in a staggering number of ways from person to person. How each of us handles that will vary, but if it leads to a lasting impact, it's trauma.

I'm sorry you experienced what you did. It's arguably worse than what the moderators are describing, but it isn't exclusive to those experiences. It's also not a contest.


Negative social judgements, death threats, and attempts at doxxing and property damage are a heavy burden when you know (or believe) them to be coming from real people that you can't control and who are unpredictable. It is a reasonably similar tactic to torture. The increased perception of danger grows your amygdala, and it doesn't ever shrink again; it's something you have to know how to deal with now. If you didn't before, it can be traumatic.

Watching ones you love die is terrible but it can also bring about a sense of peace. Being actively predated on is no fun. I have also seen mental and physical abuse as a child, fwiw.


Why do you do free work for a for-profit social network, especially if you say yourself that the work negatively impacts your mental health and that the community you are moderating is toxic?

What are you getting out of this?


It used to be that mods ran little fiefdoms in which they advertised things to buy in the subreddit's feed. Those micro-business success stories seem far less common in the past few years. I don't know why you would do it anymore.

Because the continued existence and quality of the subreddit brings value to them. It’s no different than volunteering. The fact that Reddit is for-profit doesn’t really enter into the equation.

I don’t get it for relationship subreddits because it’s so mentally exhausting but for fandoms and niche communities the experience is super cool.


It’s no different than volunteering.

It's certainly volunteering in the synonymous sense of 'doing something for nothing because you can', but I feel this statement, left in the vacuous state of its own brevity, comes loaded with an unsaid implication that "volunteering" on Reddit is no different than "volunteering" in one's local community, and I'm not sure it's that simple.


A lot of volunteer groups meet in for-profit places like cafes, and use for-profit ISPs and for-profit email providers or IM operators to talk to each other. An online forum is basically the same thing: a way to communicate.

Meeting together, talking about %thing%? Okay that's fair, I don't disagree that there are similarities.

It does still feel like just as limited a definition of 'volunteering', though, taking the grandparent comment at face value.


That community does genuinely help real people struggling with their relationships, regardless of whatever value its existence also provides to reddit. I don't think the motivation is all that different from volunteering in one's community; both come from a place of wanting to help others.

i disagree. it is that simple. the community consists of the people that contribute to the discussion. the fact that this community is using reddit as their tool to communicate is secondary. if they as a group were unhappy with how things are done they could move. stackexchange, discord, whatever.

moderators should be volunteers from that community either way, not paid outsiders.


I don't think I've ever volunteered for a profit-maximizing private venture. Is that a thing people commonly do in real life?

I mean, I've done internships, but the quid-pro-quo there was quite clear.


(Serious question) what's the difference between an unpaid internship and volunteering?

An unpaid internship is done at a place where you expect to either get a job, or because the place and the work you do there is important to your subsequently getting a job or school entrance.

E.g., I did a quality improvement internship at a hospital to get a job in quality improvement there, once upon a time. It was "unpaid," but it was an audition for a job that I wanted.

As opposed to my time as a volunteer in a local free clinic, where no benefit accrued to me at all.


I would assume most mods don't put "reddit mod" on their resume, for one.

It's very different than volunteering - since most people don't volunteer at a FOR PROFIT company.

I've volunteered at all sorts of community centers, schools, etc... Never once at a for-profit business in town.


As far as the business of reddit is concerned it provides the tech to run communities on, reddit itself doesn't manage the communities themselves (with the exception of admins stepping in to remove illegal stuff I suppose).

In your analogy reddit is the property owner or landlord of the community center, but it does not operate the community center, so volunteers are necessary.


> What are you getting out of this?

Not the OP, but most people just want to help a community that they are a part of and take a hand in making it better and helping members of the community.

You have to remember, this issue has played out on pretty much every online community that has ever existed, in some form or another. Long before the term "social network" came into use.


Judging from most moderators that are active it would appear to be a power thing for them.

Oh for sure, for some people that's definitely true. And honestly, the community itself self-selects for people for who that's true as well, since most reasonable people realize that being a moderator is actually kind of terrible in reality. So the people who last the longest tend to be the people who aren't just in it out of altruism (users work hard to drive that motive into the ground).

But at the end of the day, you need moderators. And the job sucks and is unpaid, so you can't exactly select only the "perfect" candidates.


This is absolutely true, which is why we tend to be pretty conservative with adding new rules - to reduce the risk of people getting carried away with the hammer. Some communities are the exact opposite - they'd rather people hammer away specifically because they're extremely selective with content, so people with a "power thing" are a better fit for those communities.

You're being downvoted, but GP directly lists one of his reasons for modding as "because Reddit will grow in influence"

Being the invisible hand that shapes the narrative of an entire community to one that you see fit is surely an alluring power. Especially if you think the powers of that influence will grow over time.

It's not crazy to assume that a portion of them are in it for the power. People love positions of minor authority. See: the fragmented history and (hilarious) mod power struggle that led to Seattle now having a bunch of differently managed subreddits.


Power tripping is very addictive.

> Why do you do free work for a for-profit social network

They don't. They do the work for a community they're part of. The social network hosts that community, and tries to make a profit doing so.

I organise a meetup that's held in a pub (well, not at the moment). People coming to the meetup spend money on the pub's beer and pizza. Am I doing free work for a for-profit public house?


Yes, you are. Social organizers, advertisers, trendsetters, etc. frequently get free product, kickbacks, or some money for their 'work'. That's the work you have done.

It's not your intent to do work, and it's probably not the pub owner's desire to pay for meetup organizers, but regardless of your and the pub owner's intent, work has happened.


Is it not possible that the situation is a win-win for all parties, and that all parties feel like they are well-compensated in terms of the value they've received, and that therefore no money actually needs to change hands?

You may as well ask why people volunteer their time at all.


Hey I'm not saying money needs to change hands. I'm just pointing out it's work, like in the physics sense. Something valuable has happened, both in the business sense and social sense, but obviously the priority for parent poster is the socializing.

My opinion is that the benefit of socializing is not worth the reddit moderator role, given the psychological load moderators operate under.


Both points are valid I think.

It's fair that both parties feel like they're benefiting, so no direct transaction needs to occur.

Comparing this to volunteering generally isn't the same. I volunteer with local registered charities because I believe in their mission, and the volunteer work I do is directly impacting and improving the lives of the people I'm working with.

Which may be the crux of the GP's question. Why volunteer in the community in a way that results in a for-profit organisation monetising you for their gain, instead of finding a non-profit in your community to be volunteering your time towards? The non-profit / charity in your community's goals are more likely to be aligned with your own, than the for-profit social network.


It’s a false dichotomy.

A moderator’s work primarily benefits, directly, the users, because it allows them to communicate safely. The platform only gets help indirectly.

Charities often have salaried or reimbursed employees; do you volunteer in order to get their salary paid? Of course not, you do it because it helps actual users; the fact that this allows people to get paid, somehow, is an indirect benefit. Same here, really.


There's a difference between a charity and a regular for-profit corporation.

If the charity started stalking me to sell my data onto advertisers in order to pay the salaried staff or other stakeholders more money, I'd stop doing any work through them.

The charity's mission is not maximising revenue / profits.

Any VC/PE backed corporation's mission is to maximise revenue / profits at any cost.


Ok, so in addition to the community getting value out of it, the pub benefitted too, that's just called win-win. Why does the pub have to lose in this situation for you to be ok with the result?

I am okay with the result. The guy asked if he was doing work, and I said yes he has done work. Why would I not be okay with that? The pub & friends situation is a win-win.

A reddit moderator's free time and a toxic community is a net negative for a moderator, payment may offset the health cost.


Individuals seem to get a lot of value out of interacting with the community, so it's not obvious to me that the moderators' sacrifice (which primarily falls on a small number of individuals) is unjustified.

They didn't really claim the inverse, just that any form of community organizing inside a private for-profit business profits the business, making it free work for them.

It's one thing to organize a bake sale in a public park, but when you choose one pub over other competitors, and they profit from it, you're doing work for the pub for free.

This is why experienced community managers work with businesses to get prices for the event discounted, or set up an agreement for profits to go towards the event.


I don't disagree that incidentally involving a business in a community's activities benefits ('does work for') the business. But that doesn't mean that the business absorbs all the value a priori and the community no longer gets any value.

I'm sorry I wasn't clear, this point is not directed at friendlybus, but to the general sentiment upthread (e.g. ramphastidae) that seems to assume that because reddit is benefiting from the existence of the community therefore the community cannot benefit from the existence of reddit. The underlying reason why is because individuals benefit from the community.


Nobody is saying that a for-profit business absorbs all the value. Is your premise that if the community gains value from an action, then that action is justified no matter who else profits from it?

My premise is that if a community and individuals in it benefit from organizing themselves, then it's reasonable for the business that the community chose to host their interactions to have some amount of profit.

Also I would question if reddit actually makes much profit from individual communities like r/relationshipadvice, especially compared to the pub scenario which I find generally acceptable.


A hotel conference room costs on the order of $50-300 an hour depending on location and group size, and that's without the benefit of a bar, so the host has to front the money for all the drinks. Most other available venues are either just as expensive or forbid alcohol, loud noise, and so on. A four-hour party would cost at least $200 plus drinks before the hosts even collect any money, or if they do, that's another cost and more risk for everyone.

The options are to do "work for free" or to shell out hundreds of dollars.


AOL would use volunteers, called Community Leaders, to moderate chats/BBSes, etc. They even got perks, like free Internet. I volunteered as a Host Guide when I was 16. CLs eventually filed a complaint with the DoL claiming AOL violated the Fair Labor Standards Act by using non-employees to do unpaid work that could have been done by paid employees. In fact, it's still illegal even if the volunteer doesn't mind or doesn't want to be paid, so long as the company is for-profit. AOL paid out something like $15 million in a class action. [1]

The same will, and should, happen to Reddit. One of the largest sites on the Internet is making money hand over fist while moderators end up paying the price for no real benefit.

1. https://en.m.wikipedia.org/wiki/AOL_Community_Leader_Program


In your favored outcome, the government intervenes in a way that

a) prevents people who want to be mods, from being mods

b) breaks the way the site works

What gives you the right to do that? Seriously, if you want to live in a nanny state, please, not in my back yard.

I do not want you to be my nanny.

Reddit's moderation problems are not so big that you are justified in putting a gun to other peoples' faces to force them to do it the way you want.


> What gives you the right to do that?

The Fair Labor Standards Act: https://webapps.dol.gov/elaws/whd/flsa/docs/volunteers.asp

Reddit's preference for exploiting unpaid labor, and its failure to plan for the inevitable event where that exploited labor decides they want to be compensated for their work, is entirely on Reddit Inc. A company valued at $3 billion should have understood the potential risk.

All Reddit mods who feel burned out or are otherwise struggling because of the work they've done as mods should file a complaint with the Dept. of Labor and request financial compensation for the work performed, and coverage of any medical treatment stemming from the results of moderating toxic communities.

https://dol.gov/agencies/whd/contact/complaints


This is just a forced wealth transfer to people who have managed to partially capture the government.

> People coming to the meetup spend money on the pub's beer and pizza. Am I doing free work for a for-profit public house?

Yes. Unequivocally, yes. If I were the owner of X thing and Y person was sending lots of people my way I would be more than happy to give Y person something in return. In your example I would offer free pizza and beer at the absolute minimum. You are selling yourself short.


Then the other group members will wonder why you get stuff free while they have to pay, or they might doubt why you chose that particular pub. If you really need a physical reward for doing something you enjoy it would make more sense to agree a deal with the pub, like "second drink on the house" for the whole group.

Selling friends or peers to an establishment for pizza and beer is not really how a trusted human relationship works.

You can likely get a kickback from the pub, but you can also negotiate a deal where everyone wins. E.g. discount or freebies for meetup members, or you can run a small contest every night with prize fulfilled by pub.

> What are you getting out of this?

For a nontrivial number of people, this community is the only place to turn. If we step away, do people incur harm as a result?

Most likely.

(in other words, it's almost a psychological obligation of sorts at this point)


This will certainly be unpopular, but if they don't get any money out of it, then most likely it's an ego / power thing.

I moderate a sub because I found it useful, I want to make sure it thrives, and it's my way of giving back to the community, even if I don't participate as much as some other redditors.

If Gallowboob and friends are just helpful innocent people, why were the calling-out comments, dozens upon dozens of them, deleted, and the people posting them banned?

If you're helpfully, innocently, looking after dozens of top subs, and people mention that and wonder what's going on, you don't censor them, you have a flipping AMA about it!


I looked at some of the deleted comments, and if I were a mod, I would delete them as well.

I looked at some of the deleted comments, and there was no reason except hubris to delete them.

> If Gallowboob and friends are just helpful innocent people, why were the calling-out comments, dozens upon dozens of them, deleted, and the people posting them banned?

Because personal witchhunts are against Reddit's rules as a form of targeted harassment, and mods are de-modded by Reddit for not enforcing Reddit's rules -- or worse, subreddits that consistently see significant Anti-Evil Ops (effectively Reddit's on-payroll God-Mods) action may be quarantined.


I more or less quit Reddit when I realized what kinds of traits moderation tends to select for: https://jakeseliger.com/2015/03/16/the-moderator-problem-how.... Perhaps you're an exception, or the exception, and, based on Reddit's growth, it seems that my preferences are minority ones.

Why is that subreddit so similar to Jerry Springer? How did it become so trashy?

You shouldn't be seeking relationship advice from random strangers or a reality show in the first place. Given that no one is a verified counselor, the sub is largely redundant at best, knowingly harmful at worst. It is rife with low-effort karma farming and toxicity, as are many of the defaults.

Largely true with two caveats.

1. We're trying to keep the subreddit as accessible as possible to people with nowhere else to turn. This is hard.

2. There's at least one verified crisis counselor who frequently comments, u/ebbie45. We've verified their credentials.


I stand corrected.

drama-farming.

We're still figuring out how to deal with it short of a figurative nuke.


Why should reddit offer you any support? You volunteered; no one forced you.

> Moderating Reddit's larger subreddits is absolutely capable of resulting in PTSD-like symptoms

There was a moderator of the gaming subs who killed himself fairly recently. He said largely the same thing, that modding was not healthy, but he continued to do it.

Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of. That’s not really a soldier’s dilemma of duty and responsibility.

So that anecdote aside, I’ve worked with a special forces vet that actually has PTSD.

Respectfully, if you think moderating an online forum is any sort of analog even to be “PTSD-like” you are either mistaken or have a far more gruesome task than I think possible.


> Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of. That’s not really a soldier’s dilemma of duty and responsibility.

Speaking solely on behalf of myself: we see a notable volume of fantasy and fetish posts as well as legitimate pleas for help that veer into extremely disturbing territory. The result is a situation where mods may well find themselves feeling substantially troubled with extended exposure.

I'm not about to impose that on someone else, and as a result of inevitable scope creep from the sub gaining readers, we've now got to sustain an environment that people use as either a first- or last-resort option while at the same time turning away significant populations of people (a subset of followers from influencers such as https://twitter.com/redditships/) who appear to relish creating drama from people calling for help. Great example: https://twitter.com/eganist/status/1263534755045412870

When staying imposes a burden on myself but leaving heightens the risk that people may be harmed, it's a lose-lose, and the trauma arises from this.

I'd show you some of the stuff we've had to mod out, but it's too dark for Hackernews.


> but he continued to do it. Why do you think that is? I suspect it was because he had control over something and found that too appealing to let go of.

Why do people volunteer on anything online, like open source projects even? Sometimes people like a thing, want to keep it good, and feel that it's less likely to happen without them. Leaving is condemning the thing to possibly get worse and decay through less contribution, and interrupts their social connections formed through it and their established routine that gives the satisfaction of contributing. It's not something easily abandoned after investing years in.


So, I don't get it.

Why does it matter if a few volunteer mods manage a ton of subs?

Isn't it hard to do? So they have a hard job? And it's volunteer?

And .. what? I don't understand the outrage directed at mods I guess.

Note: I'm a light Reddit user. I follow a handful of subs. And I can't name a single mod of any of them. Maybe this is more meaningful for people who spend more time on the site and are more involved in their particular subs? What, they just bash heads with mods sometimes?


I assume it has been going on for too long, so their age matters. They apply the norms of a previous generation, while reddit's audience is consistently young. It was probably never a concern when reddit started, but things like mod-ship should have expiration dates.

The argument is that these mods are biased in their moderation, and because of how many subreddits they moderate, that bias is reflected across those subreddits.

As a moderator you have the power to control what people see - subscribers and visitors alike. To me, it's easy to see how moderators with an agenda can curate content to match their views. It also wouldn't surprise me if some moderators are compensated - under the radar - by influencers and marketers.

When I first started on Reddit like 10 years ago, it leaned pretty libertarian (pro Ron Paul, pro free speech, etc.). I noticed over time that it leaned farther and farther left, and I wondered what had happened to all the previous users. Then there came a conspiracy theory that a handful of users had gamed the moderation system to take control of the tone of the site overall - especially the one that's the topic of TFA. There seems to be more and more evidence that the conspiracy theorists were right all along.

I think they just got drowned out as it grew. Libertarianism is a very niche political position outside of the "opinionated hacker" crowd. I'd be very surprised if the actual gross metrics for /r/Libertarian have decreased rather than increased.

I think instead you would find that it has only become more partisan. Stratification of subs where people with opposing ideologies don't mix, and if they do they get hella down-voted.

I think it's more like a bunch more people from different backgrounds joined the site.

I started on Reddit about a decade ago also.

Early reddit was a lot more like current HN. You're right you'd see more pro Ron Paul and libertarian stuff. Then the site went mainstream.

Users on the internet are more liberal than the general population. Add that to the fact that Reddit's up/downvote system does a terrible job of allowing unpopular/controversial views to reach the top.

Those same Ron Paul fans are all in r/conservative or r/The_donald, or other conservative subs now. There are way more now than there ever were.

You don't really need a conspiracy theory when the very nature of the site and the internet explain what happened really effectively.


> Those same Ron Paul fans are all in r/conservative or r/The_donald, or other conservative subs now. There are way more now than there ever were.

Is this true? I hold quite a few libertarian beliefs and those conservative subs don't seem any more inviting than their left wing counterparts. It's so much about partisanship that it's off-putting to me.


The conservative subreddits have definitely experienced a tonal shift since Trump's candidacy. I would pop by occasionally for news/discussions and the change is extremely noticeable to even infrequent users like me.

Smaller political subs are probably your best bet: things like /r/moderatepolitics or /r/tuesday.


You don't need a conspiracy theory to explain why the majority won a vote.

> It also wouldn't surprise me if some moderators are compensated - under the radar - by influencers and marketers.

In GallowBoob's case, he's admitted before that posting on reddit is his job. He used to work at Unilad, getting paid by his company to promote viral shit all over reddit. I think now he's moved on from Unilad but does the same stuff at other companies.

It's pretty egregious that reddit admins allow him free rein to use reddit as his own astroturfing platform. I say good riddance to him if he truly is "done", though I have my suspicions that it's just a ploy for him to make an alt and do the same stuff with it.


To add to that, he's been accused of deleting posts, then reposting the same links/content without attribution. If you're watching closely I imagine it's pretty easy to see if a post is starting to gain a little traction and just... do that.

He's also been accused of lots of other shitty and possibly illegal behavior. I will say that evidence of this may be disturbingly easy to forge, since it's all digital and everyone can use Ctrl+Shift+I in their browser. However, one redditor did mirror a nude which GallowBoob had allegedly posted to minors. I can personally attest that the photo itself definitely exists, but the circumstances around who it was sent to, etc., are... cloudy at the very best.

(Incidentally, I went to look at his profile to check up, but it turns out I blocked him ages ago. Good riddance.)


Benefit of the doubt: I could see removing quality content by a PITA user and reposting it to keep that content alive without feeding the user, but at that point the system seems broken enough to step back and reassess entirely.

You're far too generous. We're talking dog videos and such... and as adorable as dogs can be, there's not really any reason to keep that sort of content alive.

He was caught red-handed doing it so he could get the karma. He was called out by the original poster, who was not just the original poster but the creator. He proceeded to ban that user and anyone who commented in support of that user.

I feel zero sympathy for gallowboob whatsoever.


These positions are unpaid volunteer positions. Moderating more than a handful large subs is a full-time job.

There is a good chance that a proportion of these people willing to do an unpaid full time job that can direct the attention of a billion pairs of eyeballs have ulterior motives, or are indeed paid by third parties (companies, political parties, governments) to promote or censor certain news/memes/opinions.


It's not a good chance at this point, it's the height of naivety and ignorance to assume that this isn't actively occurring. There's absolutely no incentive for it not to occur and there's significant incentive for it to occur.

> I follow a handful of subs. And I can't name a single mod of any of them.

I unfollowed all of the default subs, they're just not interesting to me. I use reddit for a few technical subs, a few local subs, and a few off-the-beaten-path funny subs. None of them even made the list of top-however-many that showed how they all had a few mods in common.

All these critical articles seem to assume that people consume the default reddit front page, and seem to have little relevance to the platform as a platform.


These people don't just control the big subreddits. r/programming is controlled by spez for example. Not that he would have a problem with just changing the Reddit database manually anyways.

> Why does it matter if a few volunteer mods manage a ton of subs?

The concern basically goes like this...

1) Mod/Subreddit does something a user doesn't like (literally anything, doesn't matter how reasonable or trivial or toxic the complaint is, that isn't important, what matters is that someone isn't happy).

2) They want to find evidence that the Mod/Subreddit in question is "bad" or "corrupt" in some way, to support that grievance from 1.

3) Users points to mod list as evidence that a "corrupt" group of mods is actually in control of everything.

4) Now that the "corruption" has been found, the Mod/Subreddit involved in 1 has been "proven wrong" and now can be safely ridiculed and labeled as such.


Mods on Reddit aren't _just_ mods; they're more like administrators for the subreddits they moderate. Not only do they enforce the rules of each sub, they also have free rein to decide what those rules are in the first place, and there are no formal mechanisms in place for community oversight or recourse for moderator actions.

If the head mod of a sub about cats decides that henceforth only posts about dogs will be allowed, that's how it'll be from now on and there's nothing anyone can do about it short of everyone leaving the sub or direct intervention from Reddit employees (pretty rare).

This isn't usually an issue; for the most part moderators are benevolent dictators. But for large subs, especially those which are automatically recommended to new users by Reddit and those which deal with more serious real-world topics like politics, you can imagine how this dynamic might create a certain level of tension between moderators and the users of the subreddits they control.


> This isn't usually an issue; for the most part moderators are benevolent dictators.

Glad you used this term, because it is descriptive of the idealized leadership of a sub.


These people are powerful and get paid to further the agenda of their employers.

> Why does it matter if a few volunteer mods manage a ton of subs?

I asked this in the last two HN threads on this topic, and was downvoted but not answered. It seems like a lot of people intuitively feel like 'power mods' are bad, but not in a way they are willing to explicitly state.

I hate to say it, but I think this might be political. There are a lot of people who feel that Reddit is too left-wing and that conservative voices are drowned out there. This (and the same complaint about Twitter, FB, etc) comes up on HN from time to time.

It seems like they (meaning, people who already thought Reddit was biased against them) saw this list and took it as evidence of their position, without really thinking through the details that detract from the idea (like the fact that these subs are mostly non-political subs full of pictures, and the users in question are mostly pretty far down the modlist and not very active).


It's not that innocent. There are numerous examples of subs being taken over in order to push political agenda. Two notable ones are r/canada, which has been taken over by white supremacists who push alt-right ideology, and long ago r/atheism was taken over by convincing the admins to oust the top mod on a technicality. In the r/atheism case the new mods had their own ideas about what sort of content should be posted and banned and silenced anyone who dissented. The majority of the people were very upset and didn't agree with these changes, but after seeing people drop like flies with the bans people either got scared into submission or left.

A bunch of these famous mods are part of a private sub where they discuss their pet theories on how to manipulate people, and on what sort of content merits being allowed to rise. They are explicitly against what they deem "low value" even if "low value" comments and posts are popular. Of course, it's pretty easy to define low value as anything you don't like, and you start to see the problem.


The argument against any real oversight in how a subreddit is run, or an appeal procedure for moderator decisions, is that if you don't like the rules, you can find a similarly-themed subreddit to participate in. When all the subreddits are run by the same couple of people, if you disagree with the mods you have no recourse other than to leave the site.

ffs the place is already a big echo chamber and controversial opinions are downvoted to oblivion; do you really want, on top of that, a single guy moderating all the top subreddits? remember most people have a political agenda and tons of biases they don't even realize

I'd say it suggests a very different kind of site from what some might have expected.

You may think it's a site where there are many thousands of diverse communities, all with their own standards, viewpoints, preferences, etc. But this sort of thing suggests that the whole site and all of its communities are much closer to being a playground for a cabal of a few dozen super-elite power users. All of the supposedly different standards just reflect slight variations on the same standards held by this elite community, and nothing really outside of their standards can ever get any traction.

In short, we thought the internet was the world's greatest exercise in democracy. Instead, it seems to have morphed into some sort of feudalism or something.


I downloaded Reddit Enhancement Suite and it has a feature that marks the cumulative amount of upvotes you've given a user when you see a post of theirs. It is absolutely eye-opening just how much content is posted by the same few dozen usernames. GallowBoob is the most infamous but to focus on him would be a mistake. It's pretty clear to me that the vast majority of content on Reddit is heavily manipulated and artificial.

Or... maybe they just post good content?

You yourself said the extension marks the cumulative amount of upvotes you've given a user. So, unless you yourself are engaged in this conspiracy to inflate GallowBoob's upvotes, then all this is evidence of is that GallowBoob seems uniquely skilled at finding and posting viral content.


I should have been more clear. RES keeps track of both upvotes and downvotes. Even having a single upvote or downvote will change the color of the submitter's username and make it clear you've voted on their submissions before.

I tend to upvote news articles and downvote reposts. I have definitely noticed that these power users recycle a lot of content.
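
(For the curious: this isn't RES's actual code, just a rough Python sketch of the bookkeeping such a feature needs, i.e. a persistent per-username tally that gets bumped on every vote and drives the username coloring.)

    import json, os

    TALLY_FILE = "vote_tally.json"  # hypothetical local store

    def record_vote(username, direction):
        """direction is +1 for an upvote, -1 for a downvote."""
        tally = {}
        if os.path.exists(TALLY_FILE):
            with open(TALLY_FILE) as f:
                tally = json.load(f)
        user = tally.setdefault(username, {"ups": 0, "downs": 0})
        user["ups" if direction > 0 else "downs"] += 1
        with open(TALLY_FILE, "w") as f:
            json.dump(tally, f)
        return user  # e.g. {"ups": 14, "downs": 2} -> tint the username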


Shortly after GB's rise to fame he was offered a job at one of those horrible viral licensing companies, which definitely suggests he's skilled at finding viral content.

He also is paid to post on reddit.

He also submits posts dozens of times, deleting them if they don't gain traction quickly.

People don't dislike gallowboob and others because the content is necessarily bad... it's just inorganic and lame. I want to see content from other regular users, not power posters working for firms.


He has also been repeatedly caught abusing his moderator status to gain more karma. This works by suppressing, or outright deleting, posts made by other users before they can gain traction, and then immediately reposting their content with his own account. And he is by far not the only power mod who is using this tactic.

Some people follow the big accounts directly, so their posts are guaranteed to get early traction then build off exponentially and make the front page all the time.

Right. No one complains when big Twitter accounts go viral more often than small accounts, because it's obvious why. So why the conspiracy-mongering about the same effect on Reddit? Is there any actual evidence that moderatorship confers an outsized likelihood of virality vs. a non-moderator with a similarly gigantic following / karma count?

I have a little over half a million karma, about 200k from posting links. I think it's incorrect to suggest that the "vast majority" of content is manipulated. You see the same names because they understand the game of karmawhoring. If you have one link which can fit in multiple subreddits and your goal is to get as many points as possible, you post it to all the relevant subreddits, and repeat for every other link you have. It's a numbers game. More posts means it's more likely for one to stick, and you get better at it with time. You eventually pick up on which times of day are most effective or which subreddits get more consistent results, things consumers of the content wouldn't ever know; so when they try to do the same and their post flops, they say that clearly it's manipulation.

There is some manipulation, especially if you consider mods who remove posts at their discretion, but it's far from a "vast majority."

I do have concerns related to some political subreddits, where there is evidence of bad actors attempting to influence the 2020 election. A subreddit (whose name escapes me) just had all of its moderators removed because it ended up being one person running three or four dozen alternate accounts.


> I think it's incorrect to suggest that the "vast majority" of content is manipulated. You see the same names because they understand the game of karmawhoring.

What is the distinction between content manipulation and the game of karmawhoring? They seem more or less the same to me. I think I would consider SEO a form of content manipulation, for example.


I'm distinguishing it between the "intended" way of using the site by making posts and then the community organically votes on it, and some unintended "manipulated" way where content is artificially voted upon or otherwise given an advantage over other posts.

And when someone posts OC, a photo, for example, to a small subreddit or at a low-trafficked time of day, and a "karmawhore" screenshots the photo and posts it to several highly trafficked subreddits, would you consider those upvotes organic or manipulated?

It's my understanding that this is a common tactic used by people who are "karma farming" on an account. I can certainly understand how the karmawhore feels the upvotes are organic but I can also see how the owner of the original content would very much consider those upvotes manipulated.

And when a karmawhore lifts content and uses a title that insinuates it's their content ("My Dog Spike Doing Funny Thing"), I think the fuzzy line between organic and manipulated feels a little less fuzzy and lot more manipulated. Not just for the content owner, but for the content consumers as well. I think there's some expectation, at least some of the time, that an "organic" OP will interact with commenters and provide more detail in response to questions and it can be disappointing when you come across an interesting post only to realize that isn't the case because OP is just looking to maximize eyeballs & upvotes.


The _votes_ aren't manipulated though. The content is exactly the same even if the person "stole" it. It's immoral and scummy, certainly, but I wouldn't consider it vote manipulation by my definition.

Depending on specifics it could also be outright copyright infringement. (Just reusing a link wouldn't be, but e.g. reuploading a video would be.)

I wouldn't call all of the votes on a stolen post titled "My Dog Spike" organic though either. Users have been manipulated in some, perhaps small, way because they've upvoted a post they otherwise might not have. How many of those people would have upvoted the same post titled "Some Random Guy's Dog Spike"? I suspect fewer, or karmawhores wouldn't use that tactic so frequently.

> I wouldn't consider it vote manipulation by my definition.

I suppose that's kind of my point. Your definition and perspective come from your experience on reddit as someone who posts that way. But there are other people who don't play the game of karmawhoring who might see things differently.

It seems to me it's a very blurry line that is difficult to define and largely a matter of perspective. The people who have mastered the system feel it's a matter of merit while the rest feel it's a matter of manipulation. Reality is probably somewhere in the middle.


Is the value of Reddit karma purely psychological? I know that you can sell accounts with high point values for significant sums of money, but I'm confused what the actual value is aside from the little dopamine rush you get when something is upvoted.

Purely psychological. The most I've gotten out of it is that I've been active in a community for people with 100k+ karma for over 6 years, now, so I have some digital social ties that I'm grateful for.

More entrepreneurial Redditors can use karma to signal that they know how to get posts to the front page of Reddit. Marketing companies might hire them to do promotions.

At least this is how it was years ago. Maybe marketing companies have this all automated anyway already and don't need people like that anymore.


I've never been approached about marketing once, but I am not prolific like other users. I imagine you have to be in the top 100 or 1000 users in order to get those sorts of offers. The closest I've experienced was when I organically made a post on the speedcubing subreddit asking for opinions on a new cube by a fairly unknown company, and I was contacted by an employee of the company thanking me for the post and offering to give me one for free.

Do not underestimate the value people will place on a little dopamine rush.

I have never used reddit, and just know that "karma" is a type of points system like here on HN. Could you explain why you personally are interested in accumulating karma by "karmawhoring"?

I ignore the points system on HN too, but I think its main function is to ban bad actors. So, on the other hand, I don't understand why someone would accumulate points unless they mean to later spend them on "bad actions".


It's just a game to try and get a higher score. Like HN points, karma doesn't actually get you any special benefits.

Pretty sure there are some functionality benefits tied to karma here on HN...

Beyond reaching the threshold to be able to downvote comments, there is no significant functionality that you unlock as your karma increases.

I believe certain site features are gated off to prevent manipulation such as up/downvoting until users reach X points

Karma is a social proof for being able to astroturf and control the narrative. And depending on what group you're a mod in, can provide a great deal of influence and control.

I'm not saying that everyone who seeks high karma scores is like this, but astroturfing PR/advertising firms will pay real hard cash for accounts with an established history of trust from the community. I've read many stories from people who have gotten private messages from such places with an offer to buy their account. Or perhaps, like gallowboob, you want to parlay it into an actual social media gig (UNILAD for him).

I do believe for most people though it's just a sort of fame-seeking thing, for whatever reddit 'fame' is worth.


Someone should start a competing service then. My recommendation would be to have a "maximum karma" limit, at which point the account (mod or not) and all of its activity are deleted, and the owner has to sign up again and work their way back up from zero.

The idea being that you'd punish those users who generate most of the content for your platform? How does that make sense?

Same way term limits make sense. Hit the ceiling, then you're done. Make room for someone else. Avoid stagnation.

You get paid in power, money, fame, access, etc. for being president. Providing content for free to make someone else money doesn't have the same appeal that would make term limits tolerable.

If you can simply register another account and start over, how would this help avoid stagnation?

You'd lose any privileges associated with a high number of karma points, also all of your previous content would be removed, so people could cover the same territory in possibly new ways with less fear of being called redundant.

It basically compensates for first-mover advantage in forums.


If upvoting a post/comment (which gives the author karma points) would risk triggering its deletion (by making the author cross the karma threshold), then is that not effectively turning the karma system on its head?

Under such a system, if I think a post is valuable, I would then try to "protect" it and its author's account by downvoting it. Similarly, if I think a post is not valuable, I can help trigger its eventual deletion by upvoting it.


That would only really matter if the poster were really close to the threshold. Moreover the gaming of the system would be unlikely to work forever, requiring too much coordination of effort.

But to answer your question, yes, it is turning the karma system on its head for people near the top of the standings; the idea being that having "titans of karma" in the community becomes detrimental at some point.


As a thought experiment, your idea is interesting!

I see a few problems though. This wouldn't work for just any karma-based site. For example, on StackOverflow where there are good "canonical" answers to tech problems, and these get heavily upvoted (because they are useful), having them removed and later rephrased (possibly incorrectly) by a new user would be detrimental. So any system where "good" posts are archived and referenced wouldn't benefit from this.

Even when there are no canonical/permanent answers, the thought that every "good" post you make takes you nearer to account deletion is not exactly enticing. Why not simply get rid of points and make all posts ephemeral, kind of like Instagram stories? What good do points even do in a system where your account and posts will get inevitably deleted?

Also, wouldn't mediocre/bad accounts stick longer than "good" accounts, making the whole system worse? I don't mean terrible or troll accounts -- I understand the point system would still be used to ban or hide them -- but mediocre accounts which post mediocre/bad posts. Wouldn't they dominate in this kind of system?


People do that in games all the time. Finish the game -> restart

As long as the ride is good, people don't mind having to step down and queue again


I think deletion would be counterproductive, but maybe a karma limit after which you no longer accrue any points.

Say it's arbitrarily 1000 points (or whatever makes sense, as long as it's low enough it can be earned after a medium term of active contributing, but high enough very new users will have to learn the ropes of the community in order to reach it). After you reach this, people can tell you're a good contributor, but you can't earn any more points. Anything you do from that point cannot be done solely to earn points, removing that part of the "game".
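
The cap itself is barely any code; a minimal sketch (1000 is just the arbitrary figure from above, and I'm assuming downvotes still count in full):

    KARMA_CAP = 1000

    def apply_vote(current_karma, delta):
        """Add a vote's karma change, but never accrue past the cap."""
        if delta > 0:
            return min(current_karma + delta, KARMA_CAP)
        return current_karma + delta  # losses still register

    assert apply_vote(998, 5) == 1000
    assert apply_vote(1000, 1) == 1000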


Yes, a lot of social media relies on cycling news and has the potential to slide in rumours that come out the 'other side' as fact. It's a flawed system, but it's worth watching the washing machine spin because a lot of great clean clothes come out from between all the suds and dirt.

I'm not sure if that's what it shows. That's the same pattern you see on all kinds of sites: most Wikipedia edits are done by a handful of users. There's a small core of ridiculously-active people answering questions on Stack Overflow. Of the millions of people who try streaming games, a few are wildly successful and account for a significant share of all watched streams. Reddit, with its super-submitters, seems to fit right into that pattern.

Top subreddits are pretty much all like that. The smaller subs are really what makes reddit a great place. Often as the smaller subs grow bigger, I tend to find myself unsubbing from them.

It used to be like that. Now almost every sub is toxic.

Most of the discussion about any products such as coffee makers, bikes, etc are manipulated and placed there by the companies selling those products. The front page is also heavily manipulated, political posts often end up there artificially.

The user in question is GallowBoob.

He earned so many imaginary points on a platform that other people target his status and content. It's not like he couldn't make new accounts. But hey, is it the platform's responsibility to support him for the content he posts and the moderation he performs? I cannot have sympathy for his own reindeer games.

Can someone please explain how doxing works at a technical level? Suppose I post something on Twitter or Youtube that someone does not like; how would they dox me? For example, CNN outed a guy who made a video where Trump body-slams a figure with the CNN logo superimposed over its face in a wrestling ring. Wouldn't such attempts at doxing people also put pressure on their freedom of speech and expression?

It's very difficult to not leave breadcrumbs. Maybe your personal blog had your name, address, and phone number on it back in the 1990s and there's a record of that. Maybe you made one post six months back on your main account instead of your throwaway. Maybe you've inadvertently given enough details for someone to narrow it down.

One mistake is all that's needed.

For example, Mitt Romney's secret Twitter account was found from just a few bits of info. https://slate.com/news-and-politics/2019/10/mitt-romney-has-...

> “That’s kind of what he does,” Romney said with a shrug, and then got up to retrieve an iPad from his desk. He explained that he uses a secret Twitter account—“What do they call me, a lurker?”—to keep tabs on the political conversation. “I won’t give you the name of it,” he said, but “I’m following 668 people.” Swiping at his tablet, he recited some of the accounts he follows, including journalists, late-night comedians (“What’s his name, the big redhead from Boston?”), and athletes. Trump was not among them. “He tweets so much,” Romney said, comparing the president to one of his nieces who overshares on Instagram. “I love her, but it’s like, Ah, it’s too much.”


Reminds me of a favourite gwern post about how many bits of information there are that make you unique:

https://www.gwern.net/Death-Note-Anonymity

Even analysing what times of day you post can narrow you down a lot.
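
To make the posting-time attack concrete, here's a rough sketch in Python (the timestamps are made up, and the eight-hour "sleep window" heuristic is an assumption): bucket posts by hour and find the quietest stretch, which hints at the poster's timezone.

    from collections import Counter
    from datetime import datetime, timezone

    def quietest_window(timestamps, window=8):
        """Start hour (UTC) of the quietest `window`-hour span."""
        by_hour = Counter(datetime.fromtimestamp(t, tz=timezone.utc).hour
                          for t in timestamps)
        return min(range(24),
                   key=lambda start: sum(by_hour[(start + h) % 24]
                                         for h in range(window)))

    # Feed in Unix timestamps of someone's posts; if the quietest stretch
    # starts around 07:00 UTC, nights on the US West Coast are a decent guess.
    print(quietest_window([1589990400, 1590012000, 1590058800]))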


> Suppose I post something on Twitter or Youtube that someone does not like how would they dox me?

Basically it's all about linking internet presences to, eventually, real-world information. Lots of people use the same username across different sites. Suppose I get mad about a video you post on Youtube, so I google "hi41" and find your Twitter. I don't find anything personally identifiable on there, but then I reverse image-search a vacation picture you posted and I find your instagram, which is under your real name. Now I've got your real name, which is a powerful thing to drop on someone who thinks they're anonymous.


If you want to look for techniques, there are lots under the umbrella of OSINT. It stands for Open Source Intelligence, but it doesn't refer to open-source as in software. It just means the sources of information are public instead of being secret. Sometimes you can just Google the person's username, or check the WHOIS records on their website.
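
A toy version of the username-reuse check might look like this (the site list and URL patterns are assumptions; real OSINT tools maintain hundreds of them, plus smarter existence checks):

    import requests

    # Hypothetical sample; a 200 response is a crude "profile exists" signal.
    SITES = {
        "twitter": "https://twitter.com/{}",
        "github": "https://github.com/{}",
        "instagram": "https://www.instagram.com/{}/",
    }

    def find_profiles(username):
        hits = []
        for site, pattern in SITES.items():
            url = pattern.format(username)
            try:
                r = requests.get(url, timeout=10,
                                 headers={"User-Agent": "osint-demo/0.1"})
                if r.status_code == 200:
                    hits.append((site, url))
            except requests.RequestException:
                pass  # site down or blocking us; move on
        return hits

    print(find_profiles("some_username"))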

The title is misleading. He was not doxed. People have known who he is for a long time.

The “outed” is referring to the fact that someone compiled a list of the small number of moderators that moderate a large number of subreddits, and gallowboob was on the list.


There's not a whole lot of technicality to it imo. Say I take to google and start looking for your username. Will other websites turn up? Will one of them have a link to an email address of yours? Can I take that email address and find something else you've done? Perhaps you've used your real name and I can Google that to find info about you.

Maybe you've said something in comments that hints at your identity. Who is your employer? What is your line of work, and what city do you reside in?

Basically... how much effort does it take from me, scouring your public profile, before I can narrow down who you are with that info? People have built tools on top of reddit that do this automatically, by pulling all of a user's comments and then looking for info the person has shared about themselves.
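
The skeleton of such a scan is short. A toy sketch (the regexes are deliberately naive placeholders; Reddit's public JSON listing is real but paginated and rate-limited, so a real tool would walk all pages):

    import re, requests

    def scan_user(username):
        url = f"https://www.reddit.com/user/{username}/comments.json?limit=100"
        r = requests.get(url, timeout=10,
                         headers={"User-Agent": "profile-scan-demo/0.1"})
        bodies = [c["data"]["body"] for c in r.json()["data"]["children"]]
        # Toy patterns; real tools look for employers, schools, ages, etc.
        patterns = {
            "city": re.compile(r"\bI live in ([A-Z][a-z]+(?: [A-Z][a-z]+)?)"),
            "job": re.compile(r"\bI work (?:as|at) ([^.,\n]+)"),
        }
        findings = {}
        for body in bodies:
            for label, pat in patterns.items():
                for m in pat.finditer(body):
                    findings.setdefault(label, set()).add(m.group(1))
        return findings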

The technical side that GallowBoob talks a bit about in the article is people engaging them in conversation and trying to get them to click a link to some site. If he were to click that link, they'd have the IP he connected from and would probably start running hacker tools to try and find a vulnerability in his network.


It doesn't have to be that technical. People search for references that connect your username with a real name and possibly a location. If there is only one person with your name in your town, then your home address and likely phone number can be derived from there. No hacking skills required there, just researching.

If you send the target a link and they click on it and you control the server, you can find the IP address used to click on it, assuming they aren't using a VPN.

Home IP addresses don't change often, so with the IP address you have another bit of information that can be searched on. This might lead to a second username associated with the IP address.
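
Capturing that IP takes almost nothing; here's a bare-bones sketch using only the Python standard library (the port and page content are arbitrary):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class LoggingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # client_address is the (ip, port) of whoever clicked the link
            print(f"hit on {self.path} from {self.client_address[0]}")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>nothing to see here</body></html>")

    HTTPServer(("", 8080), LoggingHandler).serve_forever()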

Sometimes simply paying attention to the content they post reveals a lot.

Once on Reddit someone posted a "Guess what state this photo is from" post. It could have been from many places, but I checked out his profile and all of his posts were from the same place in Pennsylvania, so I guessed that. He seemed amazed that anyone guessed so fast, but also unaware of how the other content he posted made it very easy to fill in the blank.

Also, many people aren't trying that hard to hide, because they don't expect anyone to put any effort into "outing" them.


> Also, many people aren't trying that hard to hide, because they don't expect anyone to put any effort into "outing" them.

On top of that, people who are trying to hide now may have a decade or two of not trying to hide in their Internet history.


There are tons of examples of how 4chan routinely tracked down individuals based on little evidence.

For more recent examples see bellingcat.com; it's amazing how far you can go with just open sources and a lot of determination. It also helps to have multiple people do the searching: if a hundred people trawled through your posting history here, they could probably create a pretty detailed profile of you.

For example, on my hackernews profile I link to my mastodon handle. If you would go through my mastodon posting history you would see that I occasionally post links to YouTube music videos I like, I also occasionally post comments under YouTube videos under my real name. If you took all videos I've ever posted to mastodon in like a 2-year period, and compared the comments you could probably find out what my real name is. It would take some hours of menial work, but it's pretty straightforward.
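
The menial part is collecting the comment lists; the cross-referencing itself is mechanical. A sketch with entirely made-up data:

    from collections import Counter

    # Hypothetical: commenter display names scraped from each linked video
    commenters_per_video = [
        {"MusicFan99", "Jane Doe", "xXshredderXx"},
        {"Jane Doe", "bassline_bob", "quietlurker"},
        {"Jane Doe", "MusicFan99", "drumsolo"},
    ]

    counts = Counter(name for names in commenters_per_video for name in names)
    # Anyone commenting on most of the videos is a candidate identity.
    threshold = 2 * len(commenters_per_video) // 3
    print([name for name, c in counts.items() if c >= threshold])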


I wish TV show writers knew these things.

It would be educational and more entertaining than the hacking stuff they write.


Mr. Robot's writers knew this.

Such a great show.

The show Mr Robot kind of does this. While there is "hacking", they make a point to have the main character explain most of what people think of as "hacking" accounts is just social engineering.

I am a freshman moderator of /r/portland.

I had thought working as a moderator would be a great way to help grow the subreddit's base, offer more useful real-world interaction points, and basically have a big impact on Portland.

Instead what I've found is that 99% of the time the moderators are dealing with bad actors and that has to be the primary focus to keep the sub from falling apart.

It isn't just dealing with trolls or stepping in to stop out-of-control discussions.

It is actively performing anti-ban-evasion work against people who are targeting the sub for disruption, and who then go after the moderators that get in the way of these attempts.

There is at least one active case number with the Portland Police Department of a person that has both attempted to doxx our moderators and has gone to the home of a moderator and vandalized property repeatedly.

In this example, the person will not stop and creates new accounts every day.

While there are things Reddit could be doing to help (such as improving tools to counter ban evasion), I think this problem is bigger than Reddit and comes down to a lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.


Hacker News suffers from the same problem. We have folks like dang (God bless him) manually dealing with assholes. Seems like we could solve these problems with technology, but here we are two decades into the 21st century and we still have humans cleaning the pools.

I'd be interested to know if actual forum moderators think that we could solve the bad actor problem with algorithms...


As I was typing this, I was thinking about moderation of HN, but was cautious about commenting on meta.

Speaking generally, without real-world consequences for violating community TOS on a service there are no teeth in tracking bad actors, banning them, etc.

For a long time, I loathed Facebook's real-name policy. However, to some extent I suspect the amount of identity validation, and the attribution of comments to real identities, does have a limiting impact on casual trolling and harassment on that service.


I would like to hear what LinkedIn moderators deal with. I wonder if it is easier/harder/same as Facebook or Reddit.

There might be some interesting data here supporting real-name policies. I don't say anything on LinkedIn because I'm scared anything I say can and will be used against me when job hunting.

Also: thanks for modding /r/portland. I love PDX. Just wish it was sunnier: it would be the perfect city.


Facebook's real name policy is a complete failure. And professional PR, "influencing", and astroturfing are endemic, especially on political groups.

It seems to be trivial for bad actors to hack/farm unused accounts and impersonate people almost at will. Meanwhile nation state-level and/or corporate bad actors have the resources to bypass or subvert the id validation and create fake identities and accounts at industrial scale.

I don't think real ID is a solution. Not even if it's linked to some kind of robust physical key - which is obviously going to have privacy implications anyway.

AI or automated searching for troll-like activity, most likely followed by manual oversight, is more likely to be successful.

But there's the lingering question of whether FB is really interested in pursuing this with any effectiveness or enthusiasm. Given FB's resources and its lack of success to date, the answer seems to be "no."


Don't think the real-name policy helps that much, to be honest.

Just in the last couple of days of me commenting on FB I've been called a "retard" (sorry for the word, but that's what the lady used to describe me) and a "troll" (presumably Russian), this latter description accompanied by said commenter saying how I only write down dumb stuff which shows that I don't think before writing (but not saying where and especially why he thought I was wrong in my statements).

Compared to all that, commenting almost anonymously on HN seems like a breath of fresh air. Of course, in the many years I have been commenting here there have been ups and downs (a couple of very low points for me were the reactions after the Boston marathon and the endless banana references after the Fukushima disaster), but other than that, when one is told that he/she is wrong, the person saying it usually comes with his/her own reasons, which helps move the conversation forward.


This was my same thought. We should understand the size of the problem and whether technology can help. If tech can’t help moderating, maybe tech can help in other ways. For example, establishing an anonymous unique internet ID that allows you to be online: if you get three strikes, your ID is suspended and you can’t spend time online for the duration of the suspension. Of course the system should be built so that you can’t bypass the one person-one ID and so that it’s completely anonymous. Also, the ID should not be transferable.

I can't think of a world where I would trust such a system (or trust anyone to create/run such a system). Is this just for websites? Or does it encompass all use of the Internet?

At least as far as websites or individual platforms go, this is exactly what accounts, moderation, and "banning" already attempt to do. We've seen how difficult and expensive this is to scale when your site gets as big as Facebook or Reddit. I can't fathom a sufficiently "benevolent dictator" that I would trust to act as a gatekeeper for the entire Internet at large.


Perhaps a potential solution is PoW-based registration, where registration is slow and CPU-heavy. I don't think there is likely to be a perfect solution to this; it might be that it's better to slow down bad users and make it expensive to evade bans.
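
Hashcash-style challenges are the classic version of this; a minimal sketch (the 20-bit difficulty is an arbitrary knob, tune it to make signup as slow as you like):

    import hashlib
    from itertools import count

    DIFFICULTY = 20  # required leading zero bits; higher = slower signup

    def solve(challenge):
        """Find a nonce so sha256(challenge:nonce) clears the difficulty bar."""
        target = 1 << (256 - DIFFICULTY)
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge, nonce):
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

    # Server issues `challenge` at signup; the client burns CPU on solve(),
    # and the server verifies the result in microseconds.
    nonce = solve("signup:newuser:1590000000")
    assert verify("signup:newuser:1590000000", nonce)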

This will just end up promoting the interests of already wealthy bad actors, and fomenting user cartelification behind a screen of anonymity. See also: Bitcoin

That's a very interesting idea, though it wouldn't significantly affect people with vast resources such as botnet owners, corporations, and governments. It would certainly slow down the garden-variety troll, however the determined troll would just keep a core burning to create a steady stream of sock puppet accounts.

Thankfully the HN moderators are paid employees--I can't imagine doing that kind of work for free.

Dan has better things to be doing than this; he runs the entire site. It's not like individually dealing with jerks is meant as his whole job.

Hello, bad actor here.

I only use Hacker News and Reddit. And I always manage to get banned every few months, so I guess I'm a "bad actor".

Usually I start off OK. I can rack up 1000 points on HN or 20k karma on Reddit quickly because some of the discussions are interesting and non-inflammatory and I have interesting things to say.

But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.

It is hard to detect because I think anyone has the potential to become a bad actor in the eyes of a mod. There is no such thing as an unbiased mod. They will rub some people the wrong way with their comments or actions.

Some personality types are just not compatible with what mods want to see in their well-tended gardens. I've never stalked or hounded anyone, and I've disagreed with dang's assessment of my posts once or twice (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him), but this appears to be a problem techies cannot solve. Why do I say this? Because it's been going on since the '80s with Usenet. Almost 40 years of trolls, and tech hasn't solved it, but it has created some systems that have worked to keep the weeds out of the gardens. But gardens still need weeding.


> But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.

Kurt Tucholsky gave great advice on how to write a letter to a government agency that is applicable to all potentially heated discussions:

* write letter.

* put letter in drawer.

* wait three days. don't look at letter in drawer, write a new letter and send that one.


Hey, that's a great idea!

Instead of getting warned or banned, a user is placed in a timeout box where all of their posts get a three day lag and remain editable, and when the user logs in, they must re-read their posts before proceeding to the site. Kind of an 'in-your-face' reminder not to be a troll.

It wouldn't stop the insane trolls, but maybe it would give the borderline trolls time to think.
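
Server-side, the timeout box is not much machinery; a minimal in-memory sketch (the three-day lag comes from the suggestion above, everything else is assumed):

    import time

    LAG_SECONDS = 3 * 24 * 3600  # the three-day "drawer"
    pending = []                 # list of (release_time, author, text)

    def submit(author, text):
        pending.append((time.time() + LAG_SECONDS, author, text))

    def must_reread(author):
        """Posts the author has to re-read (and may still edit or delete)
        before being allowed back onto the site."""
        now = time.time()
        return [(t, txt) for t, a, txt in pending if a == author and t > now]

    def releasable():
        """Posts whose cooling-off period has elapsed."""
        now = time.time()
        return [(a, txt) for t, a, txt in pending if t <= now]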


I like this idea too. Also temporary bans are good for offering a cool-down period for users who generally are constructive but tend to lose their tempers from time to time.

This is absolutely true, and it's unfortunate that the current format of sites like Reddit and HN discourage this kind of behavior. The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them. In most large subreddits, it's basically not worth your time to comment on anything more than 12 hours old or so: no one but the original poster (and sometimes not even them!) will see or read your comments.

I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.


> I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.

Isn't that a standard subreddit setting? I don't think that actually affects the problem you were describing. I think it's only meant to keep the vote tallies from influencing user behavior (e.g. bemoaning how many up/down votes some comment got, or being extra motivated to karmawhore).

In my experience, quick-feedback scores seem to have a negative influence on people's behavior and emotional experience. IIRC, HN used to show comments scores, but stopped some years ago. I personally try to disable such "features" as much as possible.


> I'm not sure how best to fix that.

At a certain scale, I think threaded conversations are incredibly difficult to follow without a voting system to sort it out, and once you introduce a voting system you end up with voting system problems as you mentioned.

I honestly think that flat forums are more fit for purpose. Multiple concurrent conversations end up being a bit messy, but there was at least a reasonable chance that someone would reply to any given post, since everybody in the thread was on the same page.

Heck, you can even gamify engagement with a flat forum - one of those that I still frequent allows you to "react" to any given post. It doesn't actually do anything except have a number beside the post tick up, but people still use it.


> The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them.

Reddit's default "best" sorting algorithm is designed to mitigate that effect [1]. It does a good job of not biasing old comments in terms of sort order, but it doesn't help the related problem that older comments tend to accumulate the most replies so newer stuff can still get drowned out in terms of quantity of other content.

[1]: https://redditblog.com/2009/10/15/reddits-new-comment-sortin...
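
([1] is the Wilson score writeup: comments are ranked by the lower bound of a confidence interval on the upvote ratio rather than by raw score. Roughly, the computation looks like this, with z = 1.96 for a 95% interval:)

    from math import sqrt

    def wilson_lower_bound(ups, downs, z=1.96):
        """Pessimistic estimate of a comment's 'true' upvote ratio."""
        n = ups + downs
        if n == 0:
            return 0.0
        phat = ups / n
        return ((phat + z*z/(2*n)
                 - z * sqrt((phat*(1 - phat) + z*z/(4*n)) / n))
                / (1 + z*z/n))

    # A small unanimously-liked comment outranks a big contested one:
    print(wilson_lower_bound(60, 0))     # ~0.94
    print(wilson_lower_bound(600, 200))  # ~0.72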


I guess one way to combat this is to just not worry so much about whether a comment gets lots of upvotes or whether or not it is even read. They’re fake internet points, don’t mean anything, and are not worth anything. If you feel you can contribute to the conversation, do so. Don’t worry about whether you’re commenting early enough or whether you are betting on the right “lucrative” thread. Your comment could be one out of a thousand—who cares? HN karma is not some kind of competition.

I've long argued that the downvote is terrible for civil discourse. I've had to tread carefully and have been warned many times when commenting on it, including when writing in-depth constructive criticism and not simply whining.

It's strange to me that, especially on a technology forum, discussing the negatives or pitfalls of certain mechanisms is suppressed and censored, just like with politics; politics is inherently intertwined with everything, including non-action.


The upvote/downvote thing was way, way worse when vote counts were shown per-post. That went away years ago and I think the current system seems to work pretty well when viewed through a macro lens. (In the sense of "is this a good community overall?" and "would it likely be worse if the up/down voting were entirely removed?")

I'm on another forum that opted for an up/down vote system and showed counts per-post much like news.yc used to. It created so much drama and anxiety that people were openly antagonistic towards each other over what amounted to fake-internet-points.

The solution the admins came up with over there was "keep the counts shown publicly" but also "make public the specific votes that were up/down on any given post" (with an anonymization of all historical downvotes before the policy change, but not afterwards). Within days, the community adapted and it was a huge net benefit, IMO.


What benefits do downvotes provide over just an upvote sorting the best content to the top and ability to report a link or comment for some greater infraction than difference of understanding?

Downvoting for being blatantly inaccurate or blatantly anti-community seems a worthwhile community-sourced signal for the posts of bad actors (whether temporary or permanently bad actors). This should absolutely factor into the sorting and eventual hiding of sufficiently negative posts.

For people who think of themselves as good actors, I will reflect on posts that attract downvotes and try to figure out if I should have been more constructive on a given post. (I don't "care" about the score per se, but I do care about the community that I'm part of, and if people are telling me that I'm being an asshole on a given post, I should reflect on that, decide if I agree with them, and if I do, change next time. Often, I think my content was fine and someone just disagreed and used a downvote to express that. That's not how I use downvotes, but if they do, so be it...)


If you look at Hacker News however, a lot of nonsense makes it to the front page. One-word titles, clickbait, old links to Wikipedia articles, submarine advertising, something no one has heard of announcing a new version, something few care about announcing it's shutting down, a point release of Rust, a program that is only relevant because it was written in a certain language, pointless trivial side projects that only seem relevant to people unaware of the same idea having been done before, etc.

I think a lot of that wouldn't make up such a giant part of the site if there were downvotes on stories.


I like the Stack Exchange approach, where upvotes count 10x toward karma: no matter how obviously wrong and indecent the worst content is, we still care about the share of good answers, and more fine-grained controls (sometimes manual) can take care of the rest. This isn't perfect; 1:10 is an arbitrary proportion, it might give a wrong impression of being successful, and it might take too long to register a downward slope. But, very importantly, this detaches voting on content, which is still immediately visible, from karma, which is immediately personal and maybe not that important to other users.

I had a funny incident where I misunderstood a comment, thinking that "all your answers are negative" was very true, because I often disagree and try to be contrarian. SE has a policy that discourages discussion, so if users find it uncomfortable having their views challenged when they really wanted an easy answer, however simplistic, that may be legitimate.

The situation is different when discussion is the aim of the game. And it's a bit disingenuous to say that SE doesn't have "discussion"; rather, it avoids controversy, heated arguments, and open-ended debate.

Anyhow, regarding your comment that downvoting is terrible for civil discourse, I agree inasmuch as it's often hard to tell what a downvote signals, as it might lump many different opinions into one false agreement. Say five different opinions compete, but only one takes on all of the others and accrues downvotes for the attacks, whereas the rest get basically ignored by competitors for being irrelevant but still yield feedback, so that the vote is really one of popularity, as opposed to ...

Really says something about diplomacy. And I hate it so much.


The downvote/upvote counts should be informative, and the display preferences should be based on the user's choice. I find the HN practice of dimming downvoted comments downright Orwellian.

You can always click on a faded comment's timestamp to go to its page, where the comment should be in a readable font. I'd love to know what Orwell would say about requiring an extra click, but oh well.

> I'd love to know what Orwell would say about requiring an extra click

He would say probably something along these lines: "The chief danger to freedom of thought and speech at this moment is not the direct interference of … any official body. If publishers and editors exert themselves to keep certain topics out of print, it is not because they are frightened of prosecution but because they are frightened of public opinion."

Your "click to read" today is the "buried in page 12 in pt-6 font" of the past. Same train of thought.


Another "bad actor" here. I think that it's a combination of two phenomena:

* I have accounts not just here, but also at places like Lobsters and Something Awful. In those places, because accounts are rare and can be banned so easily, discourse is constantly trying to stay much more civil than here or Reddit.

* As a former community moderator, I don't respect moderation actions on sites where anonymous signup is allowed. You asked for hoi polloi to wander in off the street and give their opinions; you can't then wonder why discourse is trash. Here, it's even worse; the moderators are paid for their work, which lends a clear bias to every moderation action. Similar happenings on Reddit led directly to user protests and revolts, and it's amazing that the community tolerates paid moderation here.

The idea of the well-tended garden is a potent one. I have had to tolerate obviously toxic but helpful people before and it is always irritating to not ban them, despite knowing that they are good for the garden.


Banning is more like pruning weeds than pulling out weeds.

> I have had to tolerate obviously toxic but helpful people before

I understand where you are coming from here. I struggle with this. I think there is a legit theory for it, usually given in the context of how to reconcile shitty behavior of geniuses (Picasso comes to mind: legendary artist, shitty human.)

Even if toxic people have something good to say once in a while, do the ends justify the means if they stomp all over the roses in the process?

> You asked for hoi polloi to wander in off the street

The garden analogy is potent because where I live there is a huge rose garden that anyone can wander in off the street and visit. Some people come in and do stamp on the roses. And it sucks for everyone else. Which is why I can understand the desire to keep those people out.

However, shouldn't the gardeners KNOW that there are and always will be shitty humans?

I'm truly ambivalent on this one: I want to participate, but I lack impulse control, so I'm excluded. That's not fair. And if I was tending a garden, I'd want to keep the "me"s out.


> I want to participate, but I lack impulse control, so I'm excluded. That's not fair.

Yes, it is, because the problem is not the garden, it's you. You want to participate, but you don't have a basic skill (impulse control) that is required for participation. It's like saying you want to be a concert pianist, but you don't know how to play the piano, so you're excluded and that's not fair.


Stop making this personal.

> so you are excluded

from what, playing the piano? Do you maybe see a connection here to why somebody might not know "how to play the piano"?

Or in other words: a garden without "you" is not really a garden, except in theory, the way the proverbial tree makes a sound when nobody can hear it fall. That's a slippery slope argument.

Many people may lack impulse control, but preemptive judgement can't weed them all out. That's one reason why it's "not fair". It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same. It is however futile to assume that life is just not fair, because then "you" have already lost.

If entry is at the gatekeeper's discretion, it is not an open garden anymore, open to the public. At least not if the admission requirements are arbitrary to an uncertain degree. Maybe it's the wrong approach to say that internet discussion is not important and that impulse control is therefore let down too easily. But then again, the impulse to post or visit at all might be the problem to begin with, as in this post.

Really, who's aspiring to become a concert pianist in this day and age? That's a weak comparison, unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument.


> Stop making this personal.

I didn't; the person I responded to did, by using the word "I". They were specifically talking about themselves.

> from what, playing the piano?

From being a concert pianist. Read what I actually wrote.

> It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same.

My statement that impulse control is a basic requirement for participation applies just as much to moderators as to any other participants.

Who gets to decide what the forum rules and norms are is whoever owns the forum. That's as fair as it gets.

There are some forums where lack of impulse control isn't much of a problem, because nobody else on that forum has it either. So strictly speaking, I should have restricted my comments to forums where that is not the case. I don't think that makes much difference in practice for this discussion, since as far as I can tell the forums where lack of impulse control is the norm don't have moderation problems since they don't have moderation at all.

> who's aspiring to become a concert pianist in this day and age?

Googling "how to become a concert pianist" gets plenty of hits, so it looks like plenty of people are trying to help aspiring concert pianists. Perhaps they're all speaking to an audience of zero, but I doubt it.

> unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument

You're going way off into left field here.


That is why I said I'm ambivalent to the previous comment's statement about benefits from toxic personalities.

I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity); if the latter could be controlled, it wouldn't be impulsive.

And I admit that is a big gray area. There's a continuum of toxicity online, and there are going to be some moderation rules that are subjective.

Unlike a pianist, I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people. A visitor can't get better at not being blind. Of course, the analogy breaks down because blind people aren't adding noncritical discourse (aka what one mod may consider "flamebait"), but now we are back to subjectivity and affordance as to what is noncritical. We clearly know how to make the web accessible to blind people, but we don't have a universally clear way to make discourse available to people who sometimes suck at it.

However, I can create as many accounts as I want, so I got that going for me.


> I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity), if the latter could be controlled it wouldn't be impulsive.

First, we're not talking about a binary distinction; things aren't either "can control" or "can't control". It's a continuum.

Second, if it's really true that you can't control your impulsive behavior, that still doesn't change the fact that that behavior will make it virtually impossible for other people to deal with you in certain contexts. It's still up to you to recognize the impact that your behavior has on others, and to make choices about what you can realistically do or not do--or about how much work you are willing to do or how much risk you are willing to take to be able to participate in certain activities (for example, if it turned out there was a drug that enabled you to control your impulsive behavior, would you take it in order to enable you to do something you wanted to do?).

> I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people.

Ok, so what "alternate or semantic constructs" could the programmers of HN, for example, put into their code so it won't exclude people who can't control their impulsive behavior?

> we don't have a universally clear way to make discourse available to people who sometimes suck at it.

It's not that we don't have a "universally clear way" to do this. We don't have a way at all. "Sucking at discourse" is simply not something we know how to accommodate for. The only way we know of to deal with it is for the person who sucks at discourse to learn how to not suck at it.

Perhaps at some point we'll have an AI or something similar that can mediate such discussions so all parties can participate. But we don't have anything now.


Let's finish the analogy. Rose gardens and other community parks are usually community-funded; my local gardens are funded with taxes. There are not only paid moderators (police), but paid curators (gardeners and arborists) who deliberately build up and cultivate an appearance for the garden. Some of the more expensive gardens, like the local zoo, also have an entrance fee, because taxes alone would not fund the garden at its given size and occupancy.

There are communities like this; Something Awful is the first which comes to mind. These communities deliberately acknowledge that money is required to fund community spaces, and use the money to improve the space.

There are also extensions to the analogy. A local park has a bulletin board. Postings to this board are generally made by community consent; anything that any community member feels strongly enough about can be removed immediately. This is also how postings on telephone poles work. Sometimes a community will lock up their bulletin board after a wave of abusive listings. This is analogous to primitive message board moderation, as seen here on HN.

Are we here to advertise to each other, like on a bulletin board? Are we here to produce a great knowledge base, like in a garden? What should the shape of conversation be?


> I don't respect moderation actions on sites where anonymous signup is allowed.

We don't put barriers to signup because we want it to be easy for authors, experts, and people with firsthand knowledge of a situation to step into a thread. Those are some of the best comments HN receives. If you put up barriers to keep out hoi polloi, you end up keeping out the likes of Alan Kay and Peter Norvig too, and plenty of lesser known people who have made first-rate contributions.

Besides that, there are legitimate cases when throwaway accounts are needed in order for a person to post on a topic, often when they have first-hand knowledge of a situation as well. How do you allow that while keeping out trash?

Obviously, if there were a way to allow the above good stuff while keeping out trolls, toxic comments, etc., that'd be grand. But as long as there's a tradeoff, I'd rather have the long tail at both ends—I think the forum would be more mediocre and stale without it.

p.s. I'm puzzled by your comment about paid moderation. It seems to me that unpaid moderation would be more likely to be biased, since people are going to extract compensation for the work in some form or other. If it isn't money, it's probably going to be power or an ideological or personal agenda, or something else that manifests as bias. In any case I'd be curious to hear what sort of bias you think is showing up in mod actions on HN.


> (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him)

If anyone else is interested, here it is:

https://www.newyorker.com/news/letter-from-silicon-valley/th...

Some interesting stuff in there, though I would love to read more about the behind-the-scenes tools they use to search and keep track of everything.


I was banned from r/hotsauce because I posted a coupon code for a brand I was interested in trying. They were trying to sell stock to help mitigate the downturn from COVID-19. The mod said I was an advertising shill and needed to be purged... as if people posting their collections all day didn't count as advertising.

A lot of subreddits end up being a snake telling its tail that it's the best tail. A lot of them can turn into ads as people start pushing whatever brands to get in on the circle jerk. Some subreddits I suspect go further and have companies working with moderators to do very potent marketing without anyone drawing attention to it. /r/mechanicalkeyboards, /r/gadgets, etc, the list goes on.

And my desperate attempts to become unbanned were met with paranoid skepticism and unfounded rebuttals. I had literally only posted one other time, showing off a meager 10-bottle collection. I was trying to help the community by sharing a discount code, to start a discussion about the aforementioned sauces, and to help the struggling company; I didn't really see a problem with it. The community liked the post and engaged, but somehow I was a first-offence marketing shill worth banning for life.

Not sure what I'm missing here. You clearly did not read the subreddit rules before posting. However, the fact that you have no recourse after being banned other than debating a mod, that's a clear issue.

Google, Facebook and Twitter are all trying moderation by algorithm.

See regular posts on this very forum about algorithms running amok, banning people without clear reasons, and accounts only becoming reinstated by being famous enough to get enough traction to get a human to review your false positive.


The problem is the “assholes” never get told they are assholes, just mysteriously rate limited for some offense never explained to them, such as having an unpopular opinion.

That sense of powerlessness doesn’t mitigate things, it sometimes escalates them.


This problem is really easy to solve. Make your forum invite-only, and have a ban on a user revoke the inviting user's ability to invite anyone new. Do not make the links between who invited whom public. It's for moderation only.

You can still have throwaway accounts for commenting on things and the overall community will be increasingly more civil.


The fact that so many people have the need for throwaway counts entirely disputes the concept that this problem is easy to solve.

The upvote/downvote system is designed to reinforce echo chambers and create a chilling effect for your “real” opinion if you know it doesn’t agree with the hive.

That system is already trying to solve the problem and instead creating a different one.

All you can do with an invite system is to make sure the echo chamber is even more structured. That “those people” aren’t even allowed to join. Hello country club.


In this system you could still have throwaways, but the throwaway has to be invited/created from another account. This way you can create as many throwaways as needed but if you behave badly with your throwaway and it gets banned, then your main account is prevented from inviting new users and creating throwaways. Trolls could create new, primary, accounts but someone would still need to invite them back in. I definitely don't think that this would eliminate them, but it could potentially slow them down.

And because the moderators have the tree of invites it's pretty easy to determine which branches are yielding troll accounts and prune further up if necessary.
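
The bookkeeping for that scheme is small; a minimal in-memory sketch (names and methods are made up for illustration):

    class InviteTree:
        def __init__(self, founders):
            self.inviter = {}                # username -> who invited them
            self.can_invite = set(founders)  # accounts in good standing
            self.banned = set()

        def invite(self, inviter, new_user):
            if inviter not in self.can_invite:
                raise PermissionError(f"{inviter} may not invite")
            self.inviter[new_user] = inviter
            self.can_invite.add(new_user)

        def ban(self, username):
            self.banned.add(username)
            self.can_invite.discard(username)
            # Whoever vouched for them loses invite privileges too
            sponsor = self.inviter.get(username)
            if sponsor is not None:
                self.can_invite.discard(sponsor)

        def troll_yield(self, username):
            """Count of this account's direct invitees who got banned;
            useful for deciding whether to prune further up the branch."""
            return sum(1 for user, inv in self.inviter.items()
                       if inv == username and user in self.banned)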

> Make your forum invite-only

> You can still have throwaway accounts

How can both of these things be simultaneously true?


Say you or a friend has an account in good standing. Get an invite link with a unique token. Sign up with the link.

Done. The inviter doesn't know the username of the invitee. It's only visible to the moderation team who invited whom.
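
A rough sketch of that token flow (hypothetical names and URL; the point is that the inviter never learns which username the invitee picks):

    import secrets

    invites = {}     # token -> inviter; server-side state
    invited_by = {}  # new username -> inviter; readable by mods only

    def create_invite_link(inviter):
        token = secrets.token_urlsafe(16)  # unique, unguessable token
        invites[token] = inviter
        return f"https://forum.example/signup?token={token}"

    def sign_up(token, username):
        inviter = invites.pop(token, None)  # single use
        if inviter is None:
            raise ValueError("invalid or already-used invite token")
        invited_by[username] = inviter  # never exposed to the inviter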


Assertion: A throwaway account is probably more likely to be banned than an identified account -- why else would anyone bother with throwaways?

If the throwaway account is banned, the original link-giver would lose their good standing. (sorry, I should have highlighted this to the original reply)

We can argue about whether the assertion is actually true, but even the perception that it might be true will make people reluctant to give out an invitation to someone wanting to create a throwaway account.

If I had my own named account in good standing, I suppose I might be willing to use it to create a throwaway account for myself, provided that I was careful to only use that throwaway for... I don't know... "lightly" controversial content that is only likely to be downvoted rather than abusive content that is likely to be banned. (Not that I would ever actually create a throwaway in order to be abusive! Just trying to think like a troll).

Actually, I guess that's what we wanted to encourage anyway, right? Controversial content should be fine; abusive content is not. Maybe this would work after all...


Case 1: Your assertion is correct. The system works as intended and bans decrease.

Case 2: Your assertion is incorrect. There is some other reason for making a throwaway (like talking about a former employer, for example) and the system works as intended. The throwaway doesn't get banned and nobody gets their account demoted.

Also these demotions could have a time element to them, where you can't invite someone new for, say, a year.


That would work. However it could also make the community hard to grow.

I surfed HN for years before I decided to make an account, and even now I don't really know anyone else using it in my circle. I guess I might have found someone if I asked around, but it would definitely raise the bar.


Making communities hard to grow is a feature not a bug.

Let us read our Shirky:

https://web.archive.org/web/20050615082335/http://shirky.com...

A well-tended community is constrained by things like Dunbar's Number and signal-to-noise ratios.

Soft forking is a common response - IMO reddit is the closest social platform to "getting it right", although it should do a better job of pushing low-value content (/r/politics, etc) down into the "minor leagues" of subreddits.


Yup. The key is restricting access. Another good option is to charge for an account. The Something Awful forums are the best large community I'm aware of on the Internet, and I believe it's mostly because of the $10 entry fee and the excellent moderators. When you have to pony up hundreds of dollars a year to keep shitposting, you tend to get fewer shitposters.

How would you account for people being invited, then immediately creating a bunch of alternate accounts to hand out to people/use for nefarious purposes later?

To belabor your "cleaning the pools" analogy: it is not as simple as just cleaning the pool of grime brought in through normal use.

The shit-peddlers outside the building are constantly convincing pool-goers to join a shit-peddling pyramid scheme so that they bring boxes of shit in to the pool.

And besides that, some people just want to trash the pool for their own enjoyment.


Sadly, I think you guys are at the intersection of wielding influence and being a target, but not really having any opsec/personal security (or at least you didn't think you'd need it when you took the job). Idk if GallowBoob tied his identity to that nickname himself at any point, but there's only so much reddit can do if he did.

Aside from the likely fact that “his” account is run by multiple people considering it’s his actual job to repost on Reddit... he definitely “outed” himself for news articles. He enjoys the fame and not the consequence.

This is a really interesting insight, thanks.

There are roughly two views on moderators. Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening. Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.

Both of these things can be true at once!

This reminds me a lot of the situation with the police, who are to an extent the moderators of the big room [1]. 90% of a policeman's time is spent dealing with a small number of really nasty people. 90% of a member of the public's interaction with the police is a policeman exceeding their authority, not being interested in helping with a problem, etc [2].

If you spend 90% of your time dealing with really awful people, you will get messed up. You will develop an itchy trigger finger. Most of the time - most of your time - that's appropriate. But when you're dealing with someone who's not awful, you will get it wrong.

[1] http://www.catb.org/jargon/html/B/Big-Room.html

[2] Maybe that's not true of you. Maybe that's not true in general where you live. It is true in general where I live.


You're welcome.

> Both of these things can be true at once!

I've been on both sides of this. Last year, /r/portland began using the word 'criddler' to describe people in various states of meth-addiction and bicycle thievery.

The word was beginning to be used very often to disparage many different types of people. The moderators made a decision to ban it, and the community backlash was supreme. I let the mods have it as well.

After a few months, I came to terms with the label of 'criddler' and realized how it would come up in my mind as a form of negative judgement on a person that was not healthy for me. I realized their decision was a good one, and I described this turn around in my nomination thread running for moderator.

Your description of the situation with the police is apt and I have thought similarly many times. Most people have no idea what day-to-day police work is.

If it was possible to see the challenge of dealing with keeping the peace there would likely be more compassion for people when mistakes are made or there's anger about moderator decisions that affect the community.


The 'criddler' word seems humorous. Is there any particular need to ban it outright? These kinds of neologisms happen all the time in real life, and we ask people to 'tone it down' or give some other social signal that the joke has gone too far. The social feedback is immediate, with breathing room; nobody is ejected from social areas, but you know they might be eventually. The joke naturally deflates as the last dregs of fun come out of it, and we all move on.

Someone famous once said (100 years ago) that when people write to you they want to test their mettle. It is often best not to reply until a week after the letter was sent, and to find that the fire in the original message has since left the person who sent it.

These instantaneous and unpredictable word bans seem to be almost the perfect design for getting the maximum amount of burn out of a given burst of new flame. Is the lone moderator too far removed from the pulse of the crowd to give effective feedback?


You'll find plenty of debate on this specific subject in the sub's post about it.

The word was humorous to me at one point as well.

However, it was being applied in general to homeless people.

Portland, like many cities, has a serious and growing problem with homelessness.

The word was being used to demonize and unfairly group drug addicted folks, people with mental health issues and those who are not those things but still homeless.

I do not know if there was a need to ban it outright, however I think there were reasons to want to.

To some extent, sub-wide decisions like this are taste-profile questions and with taste you will not get agreement.[2]

For example, Craig Newmark had to walk a fine line with adult services on Craigslist.[1] Many craigslist users wanted those forums. Many did not. Ultimately, I think decisions like this give character to the community.

So with /r/portland, banning 'criddler' wasn't just about banning a word; it presented a posture of the sub toward homeless people in general.

[1] https://en.wikipedia.org/wiki/Craigslist#Adult_services_cont...

[2] One way to solve this is to allow the sub to elect moderators, which is what /r/portland did. In this case, I did say that I was for the ban on the word and was still elected by the community that was angered by the ban.


Is there any other tool you can use? A warning that the word will be banned, then leaning into enforcement a week later? Marking individual words automatically inside posts with a symbolic warning?

That's what I'm getting at, I should probably do research on my own time. Thanks for doing the good & dirty work of moderating a subreddit.


Yeah! Shoosh already!

Don't drag the debate in here!

You might as well wave a red flag in front of a bull! Or pour gasoline on a tire fire!

(Meant humorously with no disrespect.)


Just interested in ideas. I'm not going to tell you where I got that old quote from; it has an incendiary speaker! :P

Portland (and Seattle) have had such a terrible homelessness (and drug addiction) problem for decades that it's been reported on in various PBS shows such as Frontline several times over the years. It even spawned an entire rock subgenre that gave birth to the likes of Nirvana, Hole, Mudhoney, etc.

Heh, the dressing like a homeless logger/fisherman trend wasn't a show or a fashion statement.

In my area, people complain that there are homeless people 'now'. When I was a kid, fishermen would come home from Alaska with thousands of dollars of cash in their pockets, buy a tent and a sleeping bag, and just camp right next to the bar. Lots of couch surfing. People had bars on their windows.

These PNW cities have always had a lot of drugs and poverty and depression. Cities built on logging, fishing, lumber mills, paper plants, shipping ports and airplane factory work.

The tech-hub vibe and the people who moved here from the suburbs see this as new. And in a way it's been exacerbated by skyrocketing rent, but part of what we're seeing is a lack of places where these people can hide in plain sight.

They've always been here. This is a subculture that has always existed.


You're only partly right; the problem is now exceptionally worse than when you were a kid.

I'm 4th-generation Southern Oregon, and when I was a kid (20 years ago) I rarely saw homeless people in Medford; they were either much more reclusive or didn't exist. Ashland is its own case, however, and we would play the game "Hiker or Homeless."

There's a bike path that runs from Ashland all the way to... well, at least Central Point, or maybe Gold Hill? It's about 25 miles. The Medford sections are now overflowing with tents and homeless camps, enough to make it dangerous for families and children. This was unthinkable twenty years ago.


So what, dumbfuck. You don’t need to protect GROWN ADULTS from the fee-fees they get when they see a word, Jesus Christ. Fuck you.

> The 'criddler' word seems humorous. Is there any particular need to ban it outright?

From my experience in the past, moderating doesn't scale at all. So you basically have to pre-empt issues that tend to lead towards requiring mod actions. An example of this would be the auto-mod stuff on Reddit (though I don't have any experience with the specifics here, just assumptions).

Another approach is to root out the "trigger" words that tend to escalate arguments quickly, and are generally a sign that a discussion has just become a battle of insults instead of a useful discussion.

Again, it's not just one situation between users; it's potentially dozens of arguments across a site between dozens of users. You can't wait until after things get out of control, because there isn't enough time in the day to clean things up in that manner. And people want clear rules for what they can and can't say, and rules like "Don't be disrespectful" aren't usually super helpful, especially against people who are more malicious in their interactions.

So, calling out specific words that are banned allows for a simple, straightforward rule that can be pointed to without leaving lots of wiggle room for arguments (which again, you don't have time to have with every user who complains).
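
On Reddit specifically, this is the kind of thing AutoModerator handles. A hedged sketch of such a rule (the word list and removal message are placeholders, not anything from an actual sub's config):

    body+title: ["criddler"]
    action: remove
    comment: "Your post was removed: that term isn't allowed here. See the sidebar rules."

The straightforwardness is the point: the rule either matched or it didn't, which leaves little room for argument.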


Excellent points. You also get at a key issue around volume of moderation tasks.

Usually the problem is that you don't want to ban the user but the behavior. And, Reddit being mostly a textual thing, that behavior is represented by words. Banning the user is unfair, plus they can just create a new account anyway.

So you try to be alert for people who use certain phrases. Racism, xenophobia, conspiracy theories, paranoia, extreme pettiness, severely abrasive attitude, choosing beggars, and so on.

And flame wars, and people jumping on others. And of course if there is a group of people showing such unwanted behavior it takes no time for the powder keg to go off.

I think having sort of simple but a bit ambiguous rules is okay if there are very broad but exact rules too. (E.g. Rule 1: don't be a dick. Rule 2: no harassment, no xenophobia, no posting of personal information, etc.)

Moderators and the community should be proactive. (I find that the best tool is reports, because they signal what people find problematic. Sometimes users are just annoyed that some newcomer posted that again. That's okay, but most of the time users report spammers, crazy serious racists with too much time, and the occasional lost redditor's posts.)


This is almost off topic... but can you explain criddler?

I think I’ve spent enough time around people on meth and bicycles to probably understand it - but I just don’t get it yet.


It's an insulting term for scruffy looking people who are presumed to be homeless or drug-using.

The reason it's contentious is that one person uses it, someone else objects to it, people take sides along left-vs-right lines, and the thread devolves in to a shouting match.


I guess I meant the etymology of the word.

It seems it’s pretty well established though, from 2010:

https://www.urbandictionary.com/define.php?term=criddler


r/portland is one of the worst cesspool subreddits I've seen, and has had a history of really bad moderators over many years. I likely haven't read anything in r/portland since you became a mod, so I can't comment on how it is at the moment, but something about a liberal city draws out the worst in people on that subreddit.

but, "criddler" is much older on the sub than "last year", it's been a mainstay for at least 10 years. and Portland doesn't have a police department, they have a police bureau (sorry, that one's a pet peeve).


Jerry, I'd welcome you to come back and see how things are now. I understand there was some moderation in the past that was difficult but that a change went through some time ago.

The group of mods running the show now seem to me good people and having seen how they operate I'd be impressed if other city subs of this size are doing a far better job.

I realized after writing the above that the word is likely older than that, but I hadn't noticed it becoming a growing, memey word in the sub before last year. Maybe the year before? Time flies.

And thanks for the note on the Portland Police Bureau. Too late to update my post, but I'll try to remember that one.


Just went and looked: many of the most popular posts are still a cesspool once you dig in, and the moderator list looks little changed from 10 years ago. I'm glad you're enjoying yourself and have drunk the Kool-Aid, but this just reminded me why even r/politics is a nicer place to visit.

Subreddits for any location go completely against the grain of reddit. The point of a subreddit is to group people with a similar interest, but the people who live in a city don't have that shared interest, they're just in the same place. Even on the bigger side, you end up with this cross section of the entire site, shrunk down and fragmented so that it's too big to ignore the inevitable arguments between the different factions, but also too small for the moderation to be any good.

Since the concept of a city subreddit is so obviously pointless, the only people who stay are the ones who like the conflict, exacerbating the problem; or in worse cases like the Canada subreddit, there is a hostile takeover by one side. If you want to talk to people who happen to live nearby, I think you should do so on a different website; reddit doesn't work at all for that use case.


Are you saying that people who live in a similar location don't have a shared interest in that location?

I'm not sure what point in time you're referring to, but I've been subbed there for years and I find the content to be above average in quality (although not great) relative to other subs. If I had to rank it, a very rough ranking would be somewhere between r/diy and r/technology in terms of quality. Shit-posting is on just about every sub, so I don't think it's fair to base it entirely off of the handful of trolls r/portland has.

EDIT: just noticed you mentioned "liberal city", so if it's political stances you're referring to then yes a more conservative-leaning person may not feel welcome on r/portland


> just noticed you mentioned "liberal city", so if it's political stances you're referring to then yes a more conservative-leaning person may not feel welcome on r/portland

funnily enough, I'm referring to the sub often times having a more conservative audience, often coming in from the suburbs (Vancouver being a big one there).


Ah ok. Then that goes back to my last sentence - there's a handful of trolls and I think always will be, but the majority of what I see is pretty decent.

> After a few months, I came to terms with the label of 'criddler' and realized how it would come up in my mind as a form of negative judgement on a person that was not healthy for me. I realized their decision [to ban everyone's use] was a good one

Note how your conclusion doesn't actually follow: just because something isn't healthy for you, doesn't make it unhealthy for everyone else.

Trying to police too much is likely why moderators get such a backlash when they employ heavy-handed tactics like outright bans.

Moderators also very likely operate in a like-minded bubble: the sort of people who would volunteer to be moderators are far more likely to have more in common with each other, than with the average community member. As such, mods will likely always get backlash.

For instance, I'd bet it's far more likely that your fellow mods debate to what extent language should be policed, and there are probably almost no mods questioning whether language should be policed at all.

Some kind of moderation system with strictly limited mod terms and random promotions to mod status would likely improve relations, but achieving the right balance would be challenging.


Like moderator jury duty?

Yes, there are many possible options here for random mod status, limited terms, etc. I review a couple here:

https://news.ycombinator.com/item?id=23263181


> Trying to police too much is likely why moderators get such a backlash when they employ heavy-handed tactics like outright bans.

This logic doesn't follow. If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.

If you moderate too much, then you get accused of being heavy-handed, fascistic or trampling over their free speech rights. When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.

Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames. Trust me, there are people that will pretend to be in good faith for months, years until it gets them a position of power just so they can burn it all down.


> If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.

You experience a backlash from some users, sure. If most users consider it a positive change, then no big deal.

> When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.

This seems to be assuming ban-like tactics. They're a poor option and I don't favour them.

> Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames.

You're making a lot of assumptions:

1. There's an innate equilibrium between the cardinality of the mod set, the probability that it contains a bad actor, and the probability that that actor can cause appreciable disruption. The larger the mod set, the less likely a bad actor can do anything meaningful; you can likely make this probability arbitrarily small if that's a likely threat model (see the sketch after this list). Random appointments work quite well to increase efficiency in various consensus-driven systems [1].

2. Mods shouldn't have absolute power. There is simply no way that a single mod should be able to destroy a whole community, any more than a bad judge in the legal system, or a single bad politician could destroy a city, county or state.

3. A transparent appeals process is always needed, in which other mods and community members review mod decisions. It took us millennia to develop our robust legal systems. Technology can eliminate some of the bureaucratic inefficiencies of the legal system in this setting, but it still contains robust patterns that should be copied.

4. You're assuming an open signup process which is vulnerable to DoS/brigading tactics. Maybe there's a way to allow open signups too (with reputation systems), but it's not strictly necessary.

5. Various reputation systems can be overlaid on this, and this interacts well with transparent judgments/appeals process, ie. someone with a long record of violating conditions and losing appeals would be less likely to be given mod power (but never 0%).
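
As a back-of-the-envelope illustration of point 1 (a toy model that assumes mod sets are drawn independently at random from the candidate pool):

    from math import comb

    def p_bad_majority(bad_fraction, set_size):
        # P(a randomly drawn mod set contains a bad-actor majority),
        # treating each seat as an independent draw from the pool
        return sum(comb(set_size, i)
                   * bad_fraction**i * (1 - bad_fraction)**(set_size - i)
                   for i in range(set_size // 2 + 1, set_size + 1))

    # With 10% bad actors in the pool, a 3-mod set is captured ~2.8%
    # of the time; a 15-mod set, roughly 0.003% of the time.
    print(p_bad_majority(0.10, 3), p_bad_majority(0.10, 15))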

In general, today's moderation systems are intentionally vulnerable to a number of problems, because sites optimize for growing a user base rather than fostering community, because that's how they raise money.

Consider something akin to stack overflow, which randomly shows you messages to review or triage. Every now and again you get 5 messages or mod decisions to review, and you vote your approval/disapproval. This narrows the gap between traditional mod status and users status, where true mods would be relegated to reviewing illegal content that places the whole community in jeopardy.

Of course, there might also be considerations for avoiding the tyranny of the majority, but my point is only that the space of possible moderation strategies is considerably wider than most seem to think.

[1] https://www.sciencedirect.com/science/article/pii/S037843711...


I mean, what you're arguing is to essentially turn forum administration into miniature governments. And as reality has proven, government is very easily gamed by people seeking power. You seem to make a lot of assumptions which, as someone who has acted as a moderator in the past, I can say do not bear out.

Have you acted as a moderator before? What are some communities which you believe have the idealized form of moderation? Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.


> I mean what you're arguing is to essentially turn forum administration into miniature governments.

Forum administration already is a limited government, typically authoritarian in current incarnations. Mods are the police, judges and juries. Works fine if you have the resources and mods are fair, and maybe that's typical for minor infractions, but the conflict of interest is clear.

Authoritarian moderation doesn't scale, though, and you disenfranchise a lot of people with every mistake, particularly since a) there's rarely a transparent appeals process, and b) people don't typically like owning their mistakes. Doubly so when "it's my platform, so I can do what I want with it". Maybe that's not something you care about, but given the increasing importance of social media to democratic government, it's a problem that will likely worsen.

> And I would hope that as reality has proven, government is very easily gamed by people seeking power.

Government is a system of rules for governing people's interactions. A moderation system is a system of rules for governing people's interactions on a specific site. You can't speak of these things as if they're that different. Either a system of rules is vulnerable to exploits, or it's not.

> Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.

I mentioned stack overflow specifically for the unintrusive and random review process and nothing else. SO doesn't feature any of the other ideas I listed.

Finally, I have no "idealized form of moderation" in mind, I have knowledge of where existing systems fail, and how other systems have already addressed very similar problems. Designing a secure and scalable moderation system is a big task, so if you want to hire me to research and build such a system, then I will be happy to address all of your questions in as much detail as you like.


Also politicians. They too are trying to fight the good fight and help their community. Pretty much any position of authority attracts a certain mindset yes, but also changes the person who assumes that authority.

> There are roughly two views on moderators. Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening.

You can also split moderation along another axis, dividing on whether moderators are curators or janitors.

The janitorial view would be that moderators should be generally hands-off, acting only in clear-cut cases of abuse or spam. The community does or should mostly run itself through social norms, and heavy-handed moderation is unnecessarily and unfairly restrictive.

The curation view is that moderators serve an active role in building the community, and they should generally act strongly to define and preserve community norms. The community is the moderators' space first, and a free space second if at all.


The analogy is really insightful, but also makes me slightly uneasy. When a moderator gets it wrong, you can't comment on an internet forum. When a police officer gets it wrong, you're dead.

The latter is why policing often fails in the absence of serious, dedicated, time-intensive protocol development and (even more so) training dedicated to countering this effect.

> When a police officer gets it wrong, you're dead.

Or, as often happens, they're dead. I don't envy members of a profession that have to get every life-and-death decision right in real time on a daily basis.


That's a fair point, although statistically the job of police officer doesn't crack the top 10 in most dangerous careers.

Similarly, a vanishingly small proportion of citizens (as opposed to bad guys) have a life-altering bad experience with the police.

It'd be interesting to see real numbers, but I suspect officers on the beat get the short end of that stick.


Which is why we, in principle, regulate the police much more than moderators. The classical ideal of policing is that the most the police can do is apprehend you and put you in front of a judge and jury. The jury are specifically not people who spend all their time with criminals, so they should be able to act as an effective check on the policeman's instincts. Then we have all sorts of procedures and laws and rules of evidence and oversight and so on, to try and guide the police into doing their job well, and protect people who are caught up by mistake.

Moderators operate purely on the Judge Dredd model.

But then, in reality, the police operate in a 90% classical, 10% Dredd sort of way (for values of 90% and 10% that vary by location). They can mete out small punishments without going to court, oversight is not very effective, etc.


Having been a moderator of a large, 60k-ish person subreddit for a couple years, I can assure you that that's not the case: removing spam and other obvious rule breaking (sending users death threats, racial slurs, etc) took no more than a few minutes a day. Well over 90% of my fellow moderators' time was spent exceeding their authority: harassing people they just disliked, removing posts for personal biases, and so on.

It's a job that attracts that sort of person, people who just want things to function smoothly will get fed up with it quickly, the only people who stay at it will be the ones who want to mold the community to their vision (whether it wants to be molded or not).


>If you spend 90% of your time dealing with really awful people, you will get messed up.

I try to remember this when interacting with law enforcement. If you have to call the police, you're having a bad day. If you're a police officer, your job is dealing with everyone's bad day.


> 90% of a member of the public's interaction with the police is a policeman exceeding their authority, not being interested in helping with a problem, etc

What complicates matters even more is that the interaction usually involves fear at least on one side. In a peaceful society, a regular citizen doesn't interact with police officers day to day. If you capture the attention of an officer, it usually means either something bad happened to you, or you're suspected of doing something wrong - both of which put you in a "fight or flight" state. I suspect that the most common interaction between a westerner and a police officer involves said citizen breaking traffic rules, which is a pretty antagonistic situation from the start.


> Moderators think that moderators are the thin green line protecting the ordinary users from a handful of bad actors, letting them live in blissful ignorance of the awful things that are happening.

As a moderator on another forum (not reddit), I disagree. Many of the moderation actions I take are visible to ordinary users, and that's on purpose. For community norms to be maintained, they have to be visibly enforced: enforcement is not just for the particular bad actor, but also for the ordinary users who genuinely want to respect community norms, so they know what the boundaries are (and therefore know how to respect them), and can see that people who violate those boundaries are dealt with (so they have confidence that the norms are meaningful).

> Ordinary users think that moderators are power-mad bullies who ban and delete as they see fit, with no accountability.

Again, I disagree. If moderation actions are often visible, as above, ordinary users who genuinely want to respect community norms can see what is actually being enforced, and over time, if the moderators are doing a reasonable job, ordinary users will see that the pattern of enforcement reasonably matches the stated norms that are supposed to be enforced. Most ordinary users are reasonable and don't expect perfection, but they do expect consistency and reasonable judgment.

The biggest problem I see facing good moderation is that moderators can't be everywhere at once. One way to address that is to give ordinary users a way to report problematic posts, to bring them to the attention of the moderators. That also gives ordinary users another way to see how moderation is being done, because they can see what is or is not done in response to their reports.

There will always be some people who are never satisfied, and who will find something to complain about no matter what moderators do. But I don't think most ordinary users fall into that category.

Also, about "no accountability": ultimately, as a moderator, I'm accountable to the owner of the site. Similarly, the HN mods are ultimately accountable to the owners of HN. So the corresponding question for reddit would be, who owns reddit?


This assumes good faith tested to the point of cynicism, on the part of the police, when it's been shown time and again that police are indoctrinated into a regime of institutional bias from the moment they enter the force.

I see the logic in your statement, but it's an imperfect model.


I used to mod /r/answers. It took over my life, in a bad way. There is (or was) next to no support from paid staff on serious problems.

Why do you keep the post?

I've only just begun and I don't walk away from challenging circumstances quickly.

The senior moderators are faster and better than me, so my actual workload is relatively light.

I believe Reddit will grow in its influence and that tools can and will be developed to improve the situation for community moderators.


There is no way to get unbanned on Reddit. Even if your first post is innocuously misplaced, there is simply no mechanism to get unbanned. So what do you suggest people do?

Not stalk and harass moderators.

How will this result in an unban? Nobody ever gets unbanned. What you said is emblematic of the problem.

If you believe stalking and harassing moderators is a valid response to anything moderators do, you shouldn't be unbanned.

Not at all what I said, I asked how will passively sitting by result in an unban? You are simply confirming the fact that there is no way to get unbanned, even if the moderators made a mistake, or if the person wants to actively participate. Reddit basically says if you get banned you must make a new account if you want to participate in that board still. So it's natural people make lots of accounts. Is there a mechanism to get unbanned? I'll ask you again.

Stalking and harassing moderators won't get you unbanned; instead, it confirms that there should be no place for you anywhere on Reddit.

Hi, quick question - do you think stalking and harassing moderators will get me unbanned?

You keep responding with completely unrelated text.

Anyway, you're clearly projecting. I've never bothered a moderator on Reddit; they are all on power trips and think that America doesn't apply there.

Create a new account?

It's against the TOS! What are you, a savage?!

"Reminder from the Reddit staff: If you use another account to circumvent this subreddit ban, that will be considered a violation of the Content Policy and can result in your account being suspended from the site as a whole."

Message the mods politely. If you made a mistake by posting something against the rules, apologize for that. Usually, mods are happy to unban you in that case.

But some mods become really jaded over time and are mean to users by default. I was moderator of a few mid-sized subreddits for a few years and saw that in action. A common thing I saw is a sort of purity mindset, where a moderator feels like the more people they ban, the better the subreddit becomes. It’s really easy to dehumanize people online - users do it to mods, and mods do it to users. But those bad moderators make users hate mods in general, and the cycle continues.


>this problem is bigger than Reddit and focuses on lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.

Part of the problem is that online communities rely on the fallacy that, because most people are good, such communities can thrive with some reasonable amount of guidance. But it only takes a relative few people acting in bad faith to cause great damage.

As such, I've long considered that the online world amplifies the sociopaths, bad actors, and worst among us to an untenable level, giving them outsized power and voice. I think it's driving our "real" society and culture in a negative direction to a degree that people vastly underappreciate.

Layer on top of that adversarial nations that actively use our online communities to divide and propagandize, and there's a very real question of whether we're better off without these communities.

More succinctly, many of the largest platforms that enable online communities seem consistently unable or unwilling to rein them in. And when you look at experiences like those you convey, it's little wonder. Attempts to moderate bad actors will invariably add to the problem, as they devolve the discourse further and generally seek to be heard, else burn the place down.


You make a good point. All online posts are equal, and that's a terrible thing because on popular sites a shouty bad actor looks just as credible as a real expert - especially if the former relies on dogwhistle rhetoric, deliberate confusion, and superficial point-scoring, and the latter is trying to respond with facts and reason.

Good moderation can keep out bad actors and promote an adult and civil tone, but it's insanely time-consuming and expensive.

There really needs to be a new legal concept - something like poisoning of free speech. It's one thing to have unpopular and unusual opinions and to argue them, but another to set out to knowingly and deliberately subvert and poison communities with calibrated lies and aggression.

There is no upside to the latter, and the face-to-face equivalent would usually have consequences. Free speech advocates online seem to believe that online communities are somehow magically strong enough to handle these threats automatically - but in reality they can be more fragile than face to face communities. Some formal appreciation of that might not be a bad thing.


> but it's insanely time-consuming and expensive.

One of the few cases where online and offline worlds converge on a common modality to deal with common fundamental inputs: you don't know the valence of a new group member until they choose to reveal it, at the moment that most favors their leverage. This (among other things) tilts the odds of successful group influence in favor of the newcomers. Freedom isn't free, and all that.


In person, people value civility to keep the shared benefit of social groups. In the digital world, you can easily find another group so lesser value is placed on civility. And that turns easily into the division you see in today’s digital world and society. And if you threaten the integrity of communities and societies you threaten the state and country upon which those are built.

Thanks for sharing. Kinda sux, especially for all the work moderators do (and thanks to dang for the hard work on HN!)

Free speech is hard. Really hard. Allowing communities to express themselves in person (aka pre-internet) was much easier, because there was a cost to appear in person (time, society's perceptions, etc).

With the internet, truly anonymous speech has flourished, and much of it really is a waste of time or even damaging.

I hope as a society that we will be able to figure out this conundrum without eliminating free speech. I think that just like credit cards accept some bad debts, we have to be able to accept some bad actors in speech - the trick is limiting it without killing free speech.

To me, communities like reddit, HN, FB, twitter, etc are all huge experiments in free speech and how to manage that problem. Hopefully it turns out right - I don't want the future to look like East Germany.


I’m baffled what could motivate someone to get so worked up or angry as to go find strangers from the internet in real life and I’m sorry that’s happening.

As a side question: As a moderator, you basically create the tone and feel of a subreddit. If a subreddit is successful, it's because of the moderators.

When reddit goes public, moderators get nothing. The employees of reddit will become de facto IPO millionaires, but the moderators who create the success of the site get nothing. Has anything been discussed along these lines about how unfair this is?


Some mods have already openly discussed taking their subs entirely private and invite only in the event of an IPO. Reddit gets no ad revenue from private subs.

What in the world is wrong with people? Why do people expend so much effort doing these things?

It'd be awesome if we had anonymous reputation scores, like PGP or something, that other people could vouch for. Make bad actors pay to build up good reputation, then burn it down when they misbehave.

I really want a system like that to help filter the deluge of comments anyway. Though there's a danger of forming a filter bubble, I want to see commentary that is vouched for by those I respect. Not just on Twitter, but everywhere. As a protocol or data exchange format.
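
A toy model of what that vouching could look like (nothing here is an existing protocol; scores are relative to each viewer, so there's no single global number to game):

    # Made-up data: who vouches for whom, and how strongly.
    vouches = {
        "alice": {"carol": 1.0},                # alice vouches for carol
        "bob":   {"carol": 1.0, "dave": 1.0},
        "carol": {"dave": -1.0},                # vouches can be negative/burned
    }

    def score(viewer_trust, target):
        # Sum every voucher's opinion of `target`, weighted by how much
        # this viewer trusts that voucher (unknown vouchers count as 0).
        total = 0.0
        for voucher, opinions in vouches.items():
            if target in opinions:
                total += viewer_trust.get(voucher, 0.0) * opinions[target]
        return total

    my_trust = {"alice": 1.0, "bob": 0.5}
    print(score(my_trust, "carol"))  # 1.5: vouched for by people I trust
    print(score(my_trust, "dave"))   # 0.5: carol's burn is ignored, I don't trust her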


> It'd be awesome if we had anonymous reputation scores, like PGP or something, that other people could vouch for. Make bad actors pay to build up good reputation, then burn it down when they misbehave.

The whuffie concept relies on strong online identities. Without that, declaring bankruptcy is too easy, and it allows malicious actors to simultaneously harm others and boost their own rep with bots.


> performing anti-ban-evasion

Honestly, this is something Metafilter did pretty well: the five dollar account fee makes a ton of abuse tactics more costly to deploy, while simultaneously funding efforts to combat it.

It would go a long way, I think, to at least allow mods to restrict subreddits to read-only for unpaid accounts.


100% agree. Metafilter has other major flaws that prevent it from having a meaningful community size, but the barrier to entry is a crucial mechanic that keeps out bad actors and mentally juvenile people.

>While there are things Reddit can be doing to help

One thing reddit could be doing to help is to stop relying on community moderators for so much. Volunteer community moderators shouldn't be dealing with people abusing the reddit platform; that should be the job of reddit's staff. Instead of just building tools, reddit should be actively involved in enforcing platform rules to handle these cases, leaving community moderators to focus on creating and maintaining their communities: ensuring content fits the theme of the subreddit, that people are communicating with each other in a tone that fits the intended tone of the subreddit, etc.

Reddit needs people like gallowboob who are willing to do the drudgery of sifting through all the platform abusers, but from an end-user perspective it's easy to see somebody moderating 80% of the big subreddits as a problem, because it's not clear whether a moderator is actually influencing the community or just sifting through obvious abuse.


Nobody needs a karma-whoring, reposting, content thieving, paid shill like gallowboob anywhere. He's probably the worst example you could have chosen.

I know what you're talking about. I was also a mod myself on /r/Suomi (Finland) and quit after a rather serious incident. Other mods and users on far-right subs started to compile a list of European subreddit moderators whom they deemed 'SJWs' and tried to gather as much personal information as they could on everyone. Someone from that group leaked the documents to us, and it was quite alarming. The scale, effort and hate that went into those documents were the most surprising thing.

Anyways, when our main mod got approached on the street and asked if he was /u/nickname on reddit and he replied yes, he got stabbed. That was the day I quit.


Horrifying. Is there a news story on the stabbing?

I’m sorry you and your friend dealt with that.

It’s not just geo subs that have this problem. A friend of mine who mods a number of comedy subs (of all things) showed me his mod mail that appeared to indicate systematic brigading from far right discord servers.

They post extremist content in a bid to get the sub banned, and if it fails, they escalate harassment campaigns on the moderators


This is the best argument I've seen for keeping it anonymous to the public which moderator took a specific action, and for maintaining a large pool of moderators.

Bad actors can harass a community as a whole (and they do) but it's much harder for them to target specific moderators. The flipside is that moderators have to hold each other accountable for impartial enforcement of the rules.


We are solving these problems at https://district.so. Dealing with trolls can be draining, and it's a plague for a good community. We're working on solving that.

@bredren Would you want to lead /Portland District?


You don't describe your product or service on the site.

As such, this is spam for a vague mailing list.


I think one of the major failures of social businesses on the Internet is that their business need for massive user growth forces them to upend basics of social interaction that we have taken for granted in the real world for eons.

If you create a new subreddit, anyone can show up and participate. Good actor or bad.

Can you imagine if you threw a physical party and let every single stranger in the world literally walk in your front door, wander around your house, and do what they want? You'd be a honeypot for thieves and vandals.

On the Internet, they don't take your physical stuff, but they harm the online equivalent: your information and attention. Every popular online forum is likewise a honeypot for scammers, shady advertisers, griefers, and other malcontents. Why wouldn't it be?

The typical suggested option is massive policing, but that's really hard in a world where a bad actor can don a near-perfect disguise (create a new account) in an instant.

I would love to see sites like Reddit and Twitter adopt a model closer to real-world social interactions: something like a web of trust where you need to be invited to participate in a space and where there is some level of vouching for you. But those kinds of businesses don't scale well so we almost never see them.

Another option would be to force something like real identities. The reason a bar can get away with just a couple of bouncers is because a bad actor can't as easily escape the consequences of their actions by shedding their identity. But, of course, requiring real identity gets in the way of good actors who use anonymity for good reasons (generally avoiding other bad actors).

It's a hard problem, but it makes me sad that almost every business just ends up doing "let everyone in and accept that there's going to be shitbags all over the place fucking it up for everyone".


Yikes. I am the moderator of r/Metric among other medium sized reddit subs. It turns out there are people who really hate the Metric system and see it as part of a globalist conspiracy. All of the behaviour you describe is familiar, though thankfully hasn't risen to the level where police needed to be involved. Our worst offender has thankfully recently focused on Brexit and left us alone.

I totally agree that lack of response to ban-evasion has been a problem. Even Wikipedia does better than Reddit at managing user bans.

The amount of damage that bad actors can do on reddit is disappointing. It also doesn't seem to be getting any better. In the 9 years I have been modding the same problems are still present.

The poweruser situation is also frustrating. The reddit moderation system, with moderation power ranked by seniority, really encourages cabals. On some of the subs I moderate, it has been tough to resist their being taken over by power users. You invite one mod to help with spam or problem users, and all of a sudden you have 5 more mods, all of whom moderate dozens of subs. Slowly the power users move toward the top mod position, and they never bring in fresh mods, only other power users. They will also try to remove any existing top mods to entrench their control. The system is killing the democratic nature of reddit by concentrating moderation into a narrow group. Given reddit's position that moderators effectively own the community, it is very difficult to resist this kind of takeover. My advice: never invite anybody who is already modding more than 100K users in other subs to be a mod on your sub.


You sound like a proud tyrant gloating about his high position in the hierarchy, whereas he sounds very upset about something very bad you and your ilk did. In a healthy world where might makes right and sheer unbridled ambition is rewarded with success, that “troll” would be able to kill all of you and usurp your role as moderator.

I can second this type of experience. When I (more actively) moderated the subreddit for my university, there were a few repeat offenders who took most of our time, and one who went out of his way to try and dox and threaten me specifically (which wasn't hard since I ran the account under my real name), and who I finally dealt with by involving the police.

Politically motivated reddit people are, or at least can be, scary.


Does enforcing a minimum account history + minimum karma to comment help?

It's way too easy for bad actors to create throwaway accounts on Reddit. If I remember correctly, there's no captcha and no email confirmation required. I suspect a large swath of Reddit activity is bots and paid political actors.

Reddit allows bots because if it weren't for the product placement, the site would fall into obscurity.

One of my posts to r/Portland was incorrectly removed with the excuse "not related to Portland." In reality it was related; the post was just contentious, and I was prevented from re-posting. The moderator didn't even pretend to cite a genuine rule that was broken, just told me not to post it. After I complained, I was muted from contacting the mods.

Not to blame you but this is the kind of BS that makes me uninterested in the Reddit community, and uninterested in Portland in general, rife with cancel culture.


>cancel culture

The only people I see using this phrase the way you're using it are people who have consistently behaved in such a way as to be "canceled" from multiple social groups after being asked to modify their behavior to adhere to a low bar of social norms and etiquette. Do you fall into this category?


No.

Do you mind sharing the screenshots of the conversation with the /r/portland mod team as well as the article you tried to post?

And if you haven't been cancelled from different social groups in portland, how do you know about how rife the cancel culture is? Can you explain a little bit about how it has affected people you know?


I'm not sure what benefit sharing the screenshots would have, I'm not trying to out the mod or get vengeance.

The video I posted was about a Lake Oswego woman going nuts and verbally abusing some local PD who were handling the situation with elegance. My title was something like "this is why I don't trust when people senselessly dog on the police." It was removed because there supposedly wasn't anything specific to Portland or the surrounding area (not true, but OK). I told the mod I would just repost with the title "this is why I don't trust when Portlanders senselessly dog on local police" and was told not to post. No rules broken, just intimidation and censorship.

I don't subscribe to the progressive/liberal/left/socialist ideology and I'm genuinely afraid to publicly share that fact in Portland. There are many people in PDX who will violently attack you if they catch wind of that fact. This isn't a secret. It's also not a secret that Oregon outside of PDX and Eugene are generally Red communities.

For instance, look at the protests in Portland last year. The counter-protesters were far more disruptive and violent than the visiting protesters, but the typical Portlander wouldn't see it that way. In my own discussions with them, they see Trump supporters et al. as being fascist and as causing violence simply by existing, thus justifying their pre-emptive violence.


>I'm not trying to out the mod or get vengeance.

Getting the actual conversation and actual article you tried to post gives a lot of context into why you believe you were cancelled. I don't want you to out the mods, I want to understand their side of the story.

I've heard your side, and it sounds like the content didn't fit the vibe of the subreddit, as per the mods. They told you that and you threatened to break their rules again. Yes, you were threatening to break their rules.

I'd say they gave you a chance, and given this particular story re: mod harassment, didn't want to engage with an aggressive user.

> No rules broken, just intimidation and censorship.

A moderator of a forum asking you to not do something is not intimidation. It's asking you to adhere to the vibe of the subreddit. The description of the article you tried to post was obviously an attempt at rabble rousing on your part. There are helpful and constructive ways to voice your opinion. If you have consistent trouble doing that, I would consider therapy to try and understand why people see you as aggressive when you try to share your opinion.

There's nothing wrong with asking for help when you're struggling to fit in socially, especially if your opinions diverge from the norm. Being different is HARD.


Which rule is "vibe" again?

Thanks for looking into my Reddit posts, creep. I feel totally weird about you now.


>Which rule is "vibe" again?

The one where they said "Don't do this. It doesn't fit here" and you said "I'm going to do it anyways!"

It's very clear why they didn't want you to participate in their community now. You're aggressive, stand offish, and resort to name calling. I wouldn't want you around either.

Go to therapy. It will help.


"It doesn't fit here" and "vibe" are in your head, none of that was ever discussed. The mod said "it doesn't belong here because it's not related to Portland" which is objectively false. As in, factually wrong. And I never said I would "do it anyway", I was trying to find a title that satisfied the mod.

I'm tired of this conversation anyway. Just another corrupt person in power bending rules in their own favor, covering up injustice, lying to affirm their position, and creeping on people across social media. I would even bet you and your mod pals have put me on some kind of watch list. It's disgusting and shameful, really, and I'll be calling out moments like this as long as the law lets me.

My post objectively didn't break any rules on /r/portland.


/r/portland is definitely a challenging place to moderate. Frankly I've never seen a local city sub quite like that one.

> stepping in to stop out-of-control discussions.

I mean, if your goal is to "control" conversations, yeah, that's going to be a lot of work. What's the point of that anyway?


Maintaining a high signal-to-noise ratio so that the platform continues to be useful and interesting.

> freshman moderator of /r/portland

You poor soul. I stopped visiting that sub a few years ago. There was a pervasive negativity about it that was just depressing. I hope it's gotten better.

> [...] the Portland Police Department

BTW, it's Portland Police Bureau (PPB). Calling it PPD will get you labeled a transplant. ;-)


are you actually banning people or using automod to automatically remove their posts?

There's no benefit to banning the bad folks, since, like your guy, they just create a new account.

    # AutoModerator rule: silently remove anything posted by the known
    # ban evaders' alt accounts, matched by username.
    author:
        name: [baddude1, baddude2]
    action: remove
    action_reason: "Troll / Spammer"
In my subs, this one condition does the most work. I also filter off low karma and new accounts to prevent spammers, trolls, etc.
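
The low-karma/new-account filter looks roughly like this (the thresholds are made up; the author checks and the `filter` action, which sends the item to the modqueue instead of silently removing it, come from AutoModerator's documentation):

    author:
        account_age: "< 7 days"
        combined_karma: "< 10"
        satisfy_any_threshold: true
    action: filter
    action_reason: "New or low-karma account"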

I became a mod for one of the larger subreddit communities (10M+ subscribers) for similar reasons: I enjoyed the community and wanted to help out.

Same platform, but the experience couldn't be more different: we spend almost all our time dealing with "shallow" problems at a high scale. For us that means flamewars, brigading and (usually political) astroturfing, and to a lesser extent commercial spam. We don't even remotely have time to go looking for deeper problems such as ban evasion, because the volume is so high. We know it's probably happening but it's almost impossible to police because it disappears against the everyday noise.

One thing that I think helps us a lot is heavy use of automoderator rules and the spam filter to remove the most obvious problems automatically.
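
For example, a report-threshold rule is one of the cheapest ways to keep up at that volume. 'reports' is a standard AutoModerator check; the threshold of 2 is just an assumption:

    ---
    # Anything reported twice gets pulled into the modqueue right away
    # instead of waiting for a mod to stumble across it.
    reports: 2
    action: filter
    action_reason: "Multiple user reports"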

I think smaller and more local communities breed a very different class of problems than big ones -- it's much more personal for the participants, and their retaliation is more personal as well. Also, a single abusive user can do so much more damage in a small community because they don't just blend into the background noise.

One constant though is that we do see some really troubling things as well.


Experienced the same problems with aggressive individuals or groups trying to engineer or buy their way into digital properties. Found the whole experience very odd, since while I had no interest in making a deal, the amount of effort they were putting forth made no sense to me. Luckily, after they realized I would not agree or be tricked, they moved on.

Vandalism is wrong.

"Doxxing" public figures who try to anonymously control public discourse is not.


4chan seems to have a superior system wherein there are a small number of mods who take recommendations from a large, churning group of janitors. These janitors can only suggest bans and must follow a template for each of their ban recommendations. However, the janitors can delete posts/images at will (reversible by mods).

Thus, the system is both evenly applied (most of the time) and highly scalable.


Hello fellow Oregonian, have you tried shadowbanning this/these disruptive individuals? How do you catch repeat offenders behind a VPN? Browser fingerprinting? (Anyone else know?)


Thanks for keeping r/Portland such a fun place to hang around in, really enjoy the creative community/people who post there!


> I had thought working as a moderator would be a great way to help grow the subreddit's base.

You became a moderator because you have an agenda to push. Nobody wastes that much time modding a sub without an agenda. And your fellow mods allowed you to be mod because you share their ideology. This is the case for most subs and it's why subs tend towards ideological/propaganda shitholes.

> In this example, the person will not stop and creates new accounts every day.

You could always turn the sub private. But you won't because you want to use the sub to push an agenda. Right?

> While there are things Reddit can be doing to help, (such as improving tools to counter ban-evasion,) I think this problem is bigger than Reddit

Actually what reddit should be doing is declawing mods like you and going back to being a user centered platform. Like it used to be when reddit was fun and interesting.

> and focuses on lack of enforcement for digital actions that would qualify as genuine crimes of harassment if translated into the physical realm.

If there are genuine crimes in your sub, then call the cops. Simple.

For every shitty user, there is a shitty mod. As a matter of fact, it is shitty users that tend to become mods. You have a really easy solution to all your problems - privatize the sub. Make it invite only. For every complaint you listed, there is a solution provided by reddit - private subs.


Shut your subreddit down and detail your actions. Use a tool that doesn't put your moderators in danger.

A subreddit isn't worth your safety.

There are online places that use a third party to verify identity. It may not have the most adoption, but at least you're not putting yourselves at risk.


I would take an aggressive stance against these assholes who are vandalizing someone's property. It's petty, immature, and crosses a line. Physical mentorship is the only way to get through to pieces of shit like that.

There are subs dedicated to calling out his bs. One sub has figured out how he games reddit in his favor for karma and front page placement.

Yeah, I can't think of any one single action that would make Reddit a better site overall than him leaving it.

I understand why mods of smaller subreddits do it. They are passionate people looking for a community. The volunteer efforts they put into it help them and the community.

I do not understand why mods work for free on the huge subreddits. Abuse, spam and more would make the position horrible.

Reddit needs to start paying folks to do this job.


People love the idea of decentralized control, but that is not how the people willing to make the effort are allocated. We see this on Reddit, on Facebook, and we see it in open source.

On Reddit (I am not a mod but friends are) mods burn out quickly. Many people have never seen the horrible behavior that goes on in the background.

Same thing on Facebook. I moderate a small 800-person group, one which is private and has mostly pre-screened people. We have burned through so many mods over the years, and the worst things we encounter are insult wars, as nobody is going to post porn or graphic violence with their real-life account.

In open source, the heavy lifting on a lot of projects is done by just 1-2 people. Sure the project might have 30 contributors, but it is those 1 or 2 who dedicate their evening when a major bug is found to fixing it.

In all these cases, the work is mostly done by whoever is willing to do it. Over time, people who find the work stressful quit and those who don't end up taking on an enormous amount.


The issue is that we know reddit moderators play politics. Subs are taken over or subverted. Having control over a sub gives you an opportunity to control what people see and read. If a small group of moderators then moderates a large number of the big subs, the nominally decentralized control becomes centralized control.

The way I see it, someone who has enjoyed a "privileged" position without transparency or accountability for a long time is surprised that people find issues with the concentration of moderation powers (especially across subreddits).

Makes me wonder what it is that people find so appealing about moderating subreddits as an almost full-time job.


Concentration of influence in the media is a real issue, yes. However, doxing him isn't a reasonable or helpful response to those concerns.

Completely agree.

And while there are advantages to those positions being "anonymous", the impact of an eventual doxxing is heightened by it.

Maybe an initial greater transparency would have been better in the long run. Or something like rotation of positions, etc.


I think you're underselling the other side of the problem. The guy had real, significant power, overseeing many of the largest discussion forums on one of the most popular websites in the country. It's unreasonable to expect anonymity when you wield that much power to control the national discourse.

You answered your own question already: power.

It's almost like they enjoy doing it for free.

I stopped being an active user of Reddit about 5 years ago.

I felt that it wasn't a forum that was conducive to interesting discussion.

But also, I know it's a cliche to talk about the eternal September effect, but somehow it feels like the discourse on the site is even more stupid today.

At least to me it seems like it's just useful as a question-and-answer site you land on from Google.

But now it seems like some users feel the website is the definer and trendsetter of Western culture.

Instead of a website where people post memes and shit post.

Like what am I reading? That there's a shadowy cabal of moderators?

Who cares if they're running the big subreddits?

Even who cares if they're being paid by companies to post things.

Reddit is a website that does little moderation itself (except when it gets it in the news) and relies on volunteers to actually manage the site.

What do you expect? That people are going to spend tens of hours a week for the joy of community? So they can make Reddit's employees and shareholders money?

When it comes to Facebook people say "if you're not paying for it, you're the product"

But it's the same for Reddit. Except this time it's the users who are taking advantage?

Oh no.

But perhaps I don't understand the seriousness of Reddit.


I think most people take issue with the fact that reddit, one of the most trafficked websites in the world, has some of its biggest subs controlled by a few people. You can easily use this power to shape/influence public opinion of a large number of users.

More worrying to me are the outsized effects that botnets can have on reddit posts. If you can get a post off and running early, it will appear on hot, then get to the front page, and grow exponentially. I mean, the Internet Research Agency is no big secret, and this sort of work is their bread and butter [1].

[1] https://en.wikipedia.org/wiki/Internet_Research_Agency


Reddit's top user in the sense that he reposts to farm karma and moderates over 1000 subreddits. He is an opportunist seeking power. Generally, he is disliked by a large portion of reddit. Spamming reddit at that scale to become the self-appointed "top user" has a major impact on the site, and it's for the worse when it dilutes the content on the site. Unsurprisingly, there are several accusations of him selling his influence to other companies.

> Generally, he is disliked by a large portion of reddit.

Huh? 98.1% of users don't post [0]. The user mentioned in the article has the most points of anyone, meaning their posts have the most votes. That doesn't indicate dislike. Quite the opposite, actually.

Was there a poll about this user to gauge sentiment?

[0]https://www.reddit.com/r/dataisbeautiful/comments/b5f9wi/let...


That's an incredibly naive perspective on media to take.

If national news stations get high ratings does that mean people necessarily like the people generating or publishing the content? Is it impossible for a medium to generate interest and attention while actively attempting to exploit or manipulate its viewers?

Most users do not notice the poster at all when upvoting a thread. Notice the top level thread [0] here talking about Reddit Enhancement Suite and the epiphany that comes with monitoring which accounts one is passively upvoting.

[0]: https://news.ycombinator.com/reply?id=23258721&goto=item%3Fi...


>If national news stations get high ratings does that mean people necessarily like the people generating or publishing the content?

I think national news stations are not an apt comparison, since they have faces. This user, for all intents and purposes, exists only in text and non-personal image posts. For that reason -- the lack of "personal connection" that voice and face provide -- upvotes are a reasonable proxy for "like" and "approval".

I contend that people don't "dislike" this user specifically, but actually "dislike" the idea that the content they are consuming comes not from many original minds, but a few minds copying the same content over and over. As I pointed out in my original comment, this is the entire platform! What people dislike is not the user, but the entire system.

In that regard, it is possible for a medium to generate interest and attention while actively attempting to exploit or manipulate its viewers, because that's what Reddit does. It's a classic attention merchant.


My interpretation of that poster's argument is that someone who posts as much as this reddit user can't be putting much effort into finding these links. That means more lowest-common-denominator content that people look at very quickly, upvote, and move on from. That type of thing pushes out interesting discussion-style posts, which is something that happens in every subreddit that gets large enough. Not sure what the solution is.

His point about “most” users disliking it is questionable, and might be misconstruing the thoughts of a vocal minority, but I think there’s some merit to an anti-hedonistic mindset when it comes to reddit style content posting.


I was once banned from a travel sub for suggesting to a 19 year old woman asking about safety in Egypt that it might not be the best place to travel by herself. I had been there myself and know women who have traveled Egypt on their own who also recommended against it.

My comment was seized upon by mods of the sub as:

1: Mansplaining (She didn't specifically ask for male opinions so I should shut up).

2: Racist Hate Speech (Because saying Egypt might not be perfectly safe - something I would also say about many cities in the USA - is somehow racist to Egyptians).

3: Sexist (Women can do anything that men can do and I was apparently suggesting otherwise).

I was summarily banned from that sub plus a bunch of other travel subs by the same few mods. I don't know what the answer is for moderation on Reddit but whatever they are doing right now is definitely not working.


The issue he was "outed" over (concerning alleged control of a significant portion of reddit by a small number of users) was discussed here previously:

https://news.ycombinator.com/item?id=23173018


This is your signal to build the thing you were thinking about.

So much money on the table for anyone who can reinvent it while allowing easy migration from that once-great site.


If someone's going after this, my killer feature would be user-driven moderation/filtering. That is, what I see should be determined by me, not some moderator or the hivemind. Almost certainly this would need an AI element to learn what I want to see. Potentially there could be a "people like you" assist to that.

Reddit just hasn't scaled well, and it looks like the operators of Reddit have no interest in scaling it properly.

Case in point: the moderation system. Now, if you're passionate about some niche topic, and create a sub for it and become the moderator, then fine; you rock. Things work well for you and your subscribers.

But what if you're modding a sub for a large interest area (say, the country of India)? How do you moderate there? How do you accept the diversity of thoughts and opinions, many of which run contrary to your own, but are perfectly valid in a diverse society?

Currently, /r/india has a bunch of moderators with a severe political stance (I won't go into specifics here, but I've been banned from it several times). They will warn you and ultimately ban you if you don't toe their "moderation" line. So on the one hand you have the "in" group of people threatening violence and harassing people; and on the other, you have the complement set of people getting banned for using words like "naive" (happened to me). The current moderation system (whoever started the subreddit, and whoever they blessed, gets to be a moderator; sort of like a monarchy and a feudal system) is horribly out of date in today's world.

Here's how I would fix it: every year, have an election of moderators. People get votes proportional to their karma (or upvotes or some function thereof, which could heavily penalize bad posting behavior) earned since the last election. And the top N vote getters get to be moderators for a year.
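
As a rough sketch of the tallying mechanics (nothing like this exists on reddit today; the function, data shapes, and usernames below are invented purely for illustration):

    from collections import Counter

    def elect_moderators(votes, karma_earned, num_seats):
        """Tally a karma-weighted moderator election.

        votes: dict of voter -> the candidate they voted for
        karma_earned: dict of voter -> karma earned since the last
            election (clamped at zero to penalize bad posting behavior)
        num_seats: number of moderator positions to fill
        """
        tally = Counter()
        for voter, candidate in votes.items():
            tally[candidate] += max(karma_earned.get(voter, 0), 0)
        return [c for c, _ in tally.most_common(num_seats)]

    # Example: bob's karma outweighs two lower-karma voters combined.
    votes = {"alice": "mod_x", "bob": "mod_y", "carol": "mod_x"}
    karma = {"alice": 120, "bob": 3000, "carol": 45}
    print(elect_moderators(votes, karma, num_seats=1))  # ['mod_y']

The hard part, of course, is everything around the tally: vote farming, karma bought from bot rings, and so on.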


> Reddit just hasn't scaled well, and it looks like the operators of Reddit have no interest in scaling it properly.

Has anyone scaled online moderation well? Facebook, Twitter, Stack Overflow, etc. all struggle with moderating their communities.


People blame FB for social media woes, but I feel that FB is different from the other social networks in that it’s the world you fully curate and manage. If FB is giving you grief, that’s people on your green list giving you grief. If you need FB to come in to manage that, then you’re basically saying you can’t even handle the agency involved in freedom of association.

I half agree with you. If the "grief" that Facebook is giving you is from you not liking the posts you're seeing, then yeah, unfriend some people and move on.

Censoring private chats (among plenty of other sketchy practices) is something entirely different though. I never foresee myself wanting to send "joebiden.info" to a friend on Messenger, but the fact that Facebook has decided I'm not allowed to seems problematic.


This was my thought as well. There is no efficient AI-driven modding; even Google/YT can't get it to work well enough to leave alone, at least not yet. A responsible human tasked with modding can only scale so far before more humans must be found, and the more you need, the lower the quality you are likely to drop to, given limited resources.

When your site or system generates lifetimes of content each day the scale becomes insane.


Any community with unvetted and anonymous users ends up being gamed and filled with various bad actors. I have always thought that a community where people were vetted through multiple forms of verification (e.g. government ID, driver's license) and posted using their real names would instantly improve the quality of discourse to some degree. In this way, people would self-moderate through shame and real-life reputation. Unfortunately, with this type of system, you wouldn't get a critical mass of users, since most people don't want to associate their controversial views/comments with their real identity. On the other hand, the discussions that do take place might be more valuable, since they could be trusted and you could get some degree of guarantee that there wouldn't be astroturfing or bot activity.

Hacker News?

If anyone wants an example of the kind of moderation that goes in r/India, any comment just referencing the moderators is removed automatically by the automod. Any criticism of the subreddit is not tolerated at all. (This was told to me by a moderator of the sub, as I know them IRL)

That's the irony: the mods in /r/india decry Modi, but follow the same tactics that they accuse him of in their own moderation!

The naming system on reddit is very misleading. For example, /r/india is about as related to India as /r/politics is to US politics. The toxicity is such that they could give 4chan a run for its money.

It's funny that 4chan has fallen victim to bad growth as well. The gamergate controversies and the 2016 election really brought out the worst of the people there. I remember years ago it wasn't nearly as toxic and you could sometimes have a good time there. Some boards are trying to cling on to those times

>The current moderation system (whoever started the subreddit, and whoever they blessed, gets to be a moderator; sort of like a monarchy and a feudal system) is horribly out of date in today's world.

This is the worst part for me, though truly unfixable in my opinion. Reddit has taken over forums for many topics, but the entire mod team is chosen by the person who thought up the name first in 2011, which hardly has the same barrier to entry as running and hosting a forum (generally you can assume someone who goes to that much effort is invested in the hobby).

>Here's how I would fix it: every year, have an election of moderators. People get votes proportional to their karma (or upvotes or some function thereof, which could heavily penalize bad posting behavior) earned since the last election. And the top N vote getters get to be moderators for a year.

We both know this wouldn't fix the /r/india issue: they would simply vote for the mods that lean the way they agree with, and since the current mod team has pushed out those who don't, it'll likely remain that way. I also think this could even hurt many subreddits, as the lowest-common-denominator user would likely vote to allow things they really shouldn't, such as meme posts (many a sub has banned those, though generally to the dismay of much of the community).


This article doesn't actually address any of the questions (e.g. how does he have all this time?). Note that it's on the public record that Gallowboob has worked for social media marketing companies, e.g. https://www.forbes.com/sites/fernandoalfonso/2016/06/01/cash...

> The Upvoted story never materialized; Reddit administrators were still digging out from under a deluge of drama including revelations about online harassment from its former CEO Yishan Wong and the sudden departure of current CEO Ellen Pao. With no job, Allam spent the next six months crafting the following cover letter and applying to jobs. In February, Ohanian’s advice paid off. Allam, better known on Reddit as gallowboob, had landed a full-time executive gig with the United Kingdom-based media company UNILAD. He got the job thanks to his Reddit prowess.

> ...Allam is a social media executive at UNILAD, an online media organization that “amassed a Facebook following of over 12 million fans, 30 million monthly unique visitors to the website and over 1 billion video views a month,” states the company on its site.


The article is also borderline misleading, as initially I thought their real names had been posted. All that was posted was a list of which users are common moderators of large subreddits. If they are getting abuse to their accounts, the problem is the abuse, not the list.

Agree - "Outed" implies he was doxxed, but per a link in the article his name was known in 2016

From what I read, the guy that posted the list then got banned from plenty of those large subreddits. That probably fuelled the rage.

It seems one of the fundamental problems leading to this situation is that reddit moderation is unpaid yet requires large amounts of time. So anyone who becomes a 'big' moderator has to monetize their efforts externally, leading communities to be suspicious of the motives of any big moderator.

I'm not sure "how does he have all this time" is relevant. This is a harassment story; Gallowboob gave in under enormous pressure. It is a sad day for Reddit, and I am bearish on them while I don't see any strong ideas for how they fix it.

> because I don’t know if they’re trying, you know, [to] get my IP.

Use a VPN, double-hop if you really care.


Many people see this as moderators controlling too much, but it's what happens in a mostly free system. Some people will over-achieve and 99% will underachieve or achieve just enough to get by.

This has been seen time and time again in any system involving humans.

A big issue with Reddit is that it has subs dedicated to encouraging mentally ill and anti-social behavior.

These things tend to eventually escalate into the real-world where someone eventually gets hurt or killed.


Can you provide some examples of subreddits that encourage such anti-social/mentally ill behavior?

"Five people moderate the 92 largest subreddits."

If that's true, why? How could an individual possibly have enough time and attention to moderate more than one subreddit with millions of users?


Node-RED or Apache NiFi can help a great deal with that.

Both are great at posting and reposting (on and off Reddit, or between FB, Reddit, Twitter, etc.), along with controlling botfarms.

edit: Seriously, why the downvotes? We had a NiFi post a week ago. And those of us who do OSINT and investigative work also use deanonymizing techniques. We automate our defenses as well using similar tooling.


Probably because it kind of ignored my point. For example, r/pics has thirty-six moderators. Some of them are undoubtedly inactive, but it's still obvious that moderating a single massive subreddit is an order of magnitude more work than can be done by one person, and therefore giving one person moderator rights over multiple subreddits does not increase the amount or quality of moderation. One person cannot use that power to do more good for the site, so the question is: what are they using that power for, and why do they deserve it?

I said in another comment on this article that being a mod allows you to astroturf heavily and control the narrative. From there, having loads of bots allows you to manufacture consent or dissent. And the moderation power allows you to remove whatever you don't wish to address.

It's sheer power. It's not about the money, per se... But those with power get money, and those with money seek power.

It has nothing to do with good, in most cases.

(And yes, I'm a moderator of small groups. I just remove spam and malware.)


It is much worse than that. Some of these mods are on the moderator list for 1000+ subreddits, many with hundreds of thousands or millions of users.

They obviously don't have time to moderate all of these subs.


Moderating a medium to large sub is a thankless task.

I moderated a sub with 100,000 subscribers for a couple of years.

Most of the work was removing obviously bad content. There was a user who would make a new account every few weeks, post various comments to make it look legit, and then start posting innocuous-looking comments with links to hardcore pornography disguised as normal links. Users once rebelled because I removed 100% illegal content.

Users and mods on the subreddit started to get hit with SLAPP lawsuits at one point. I was always very careful to hide my real identity but that was the final straw. I stepped down from modding shortly after.

Modding is a near impossible task and as a subreddit gets bigger it gets exponentially harder.


It seems to me like the main purpose of moderation is to make sure posts conform to the theme of the subreddit... to keep things on topic. Almost everything else can be taken care of by the downvotes of the community members.

A subreddit is like a community where every citizen has a gun. They can police themselves by downvoting each other... the only thing mods need to do is make sure the theme of the community stays on track.

I'm the sole mod of a subreddit with over 600k subscribers and I basically do nothing. My case is admittedly special because the subreddit is a weird kind of "game" with very specific, well-defined rules about how to reply to posts (and a bot that deletes replies that violate this rule), but in general I feel like people don't have enough faith in the ability of a community to handle itself. Occasionally I've had to ban people who are spamming /new with disturbing content, and I did end up setting some useful automoderator rules, but otherwise I find that the community runs itself.

I've gotten a few messages from people over the years saying they're unsubscribing because they wanted me to take a more involved role in moderating the content posted to the subreddit, removing posts they thought were "low quality". That's fair, but I don't think it should be up to me what counts as "low quality"... I'm just one person. The beauty of a community is watching it develop organically from the actions of thousands of people, not molding it into some shape you happen to prefer. It's like an organism, and it has a mechanism to achieve homeostasis: upvotes and downvotes.


AskOuija? I appreciate your take on moderation. I think what happened on WorldPolitics with IAmanAnonymousCoward shows that at least a small bit of work is needed to keep things on topic, but overall I think free speech and self-moderation is better.

What seems unacceptable for moderators to do, which seems to happen a lot, is banning users from multiple subreddits due to comments or behavior in a different subreddit. That's really a power trip and it sounds like something that a lot of moderators on 'the list' have done.


This was my experience on a couple of subreddits which were a bit smaller than yours. One of the subreddits even covered a topic that often gets a lot of drama and high-strung people. But outside of the occasional obvious troll, the communities managed themselves. “Low-quality” posts like memes were effectively disallowed by the community itself, i.e. they would always be downvoted - but if they became popular, so be it. We never got abusive modmail, no one was ever mad at us. If two users started arguing, we’d stay out unless there were actual threats or slurs, which only happened a few times.

I do think the situation is different for subreddits that aren’t really “communities” such as /r/pics or a meme subreddit, where most traffic comes from the front page. But the same general philosophy probably still applies to some extent.


Is he done "after being outed", or done after finding a comfy job in social media management?

While this whole situation is a mess and nowhere near black and white I'm sure, painting Allam in particular as a victim is leaving out a lot of detail.

He is not only known for being a reddit power user, he's also infamous for abusing his mod powers and for being provocative and elitist towards people calling him out, mocking, trolling, or banning them. He took over and wiped out a sub that tracked his abuse of mod powers. On the big subs he controls, people got banned frequently for calling him out, long before this more recent drama. These theatrics have been going on for years. I've been targeted by him in the past; sadly I can't dig it up because reddit's comment history only goes back so far.


I've been a member of centuryclub for a long time, which is for Reddit users with over 100k karma. Gallowboob has been posting photos of his ass on that sub for years and was open about the city he lived in and what he did for work. It's hard to feel bad for him here; he should have known what he was getting himself into.

Yup. A long time ago I was mouthing off in a sub about gallowboob. He threw a shit fit and had the admins ban me.

The whole site is rotten, and completely opaque. They say one thing, and do another. Fresh blood is something reddit sorely needs.


Additionally, much of that "precious" karma he has is from gaming reddit. He will (re)post something and if it doesn't gain traction in a short amount of time, he'll delete it and repost it. He does this over and over until the post takes off. In the past, it would be fun to watch his profile and see posts jump from "posted 30 min ago" to "posted just now" over and over again.

Case in point: I was digging around a bit after my initial comment and found this post [1].

It's crazy how little I could find about well-known stuff. When trolling and provoking, he also deletes his own comments once they get downvoted too much, and I could hardly find anything documenting it, even though it's far from a secret.

[1] https://www.reddit.com/r/Against_Astroturfing/comments/ans1h...


The 'spam filter' enabled on most large subreddits greatly adds to the 'echo chamber' nature of Reddit and leads to a tyranny of the majority. If you get downvotes in a major sub for not agreeing with the prevailing opinion, you are rate-limited and can only post once every 10 minutes.

Imagine being in a debate with dozens of people and being rate-limited to posting once every 10 minutes because the people voting didn't agree with your opinion.


Jim Leff, who founded and ran Chowhound long ago, has a lot to say about the perils of moderating an online community:

https://jimleff.blogspot.com/2008/08/always-talk-to-mask.htm...


GallowBoob is a scourge. I'm sad to see a human being harassed and I hope he'll see justice, but he's also everything that's wrong with reddit moderation, and two wrongs don't cancel out to make one a saint.

I encourage you to dig into the threads of the alleged post that made him 'flip' https://www.reddit.com/r/therewasanattempt/comments/gk0w4c/t...

Some are straight-up harassment, which is sad. Others, however, describe the many things he did to ruin other people's enjoyment of the site, the most common of which was banning users in order to steal good content and repost it for karma, and to fuel his own agenda, which allegedly included being paid by Netflix to run advertisements as normal posts through the subs he mods.

He's not the good, loving moderator that the article describes.


Reddit has turned into a cesspool. I’m sure the majority of users are great, honest and open minded, but you can’t keep the internet outside the walled garden forever. And it only takes a handful of bad actors to ruin it for everyone.

As frustrated as I’ve found myself on Reddit, to the point I’ve all but quit using it save for some video game stuff, I still can’t imagine how mods can be bothered to put up with bad actors, trolls, whiners, and fools. Everyone checks out at some point, and as a normal user I can, and I feel no compunction about letting the conversation devolve; but when you rely on volunteers to do the policing, it’s going to happen more often and with harsher consequences. And you can see on many subs that absent strong moderation, discussion on anything remotely contentious devolves into an ugly free-for-all. Even if the thread is locked or certain comments get deleted, it’s too little too late, and the damage has been done to civil discourse.

This article has me thinking more broadly about the internet and the future of online communities. Large platforms can’t thoroughly patrol every nook and cranny without unreasonable amounts of employees. And yet it’s never been more important to have fair and timely moderation on serious issues. Machine learning might have a role to play here, but it’ll require some sort of human oversight to provide a “starter” or some context within which a system can determine what can be censored. What else is left? Unmasking everyone isn’t going to happen for good reason. The internet is in dire need of a police force, and volunteerism isn’t sufficient.


I feel sad seeing this gaining traction, when Reddit's appalling history of enabling and supporting abuse has gone unchecked for so long. Why care about it now? You didn't care before. This is a real question. Why is now the time you choose to engage?

I was a moderator on /r/blackpeopletwitter and the amount of racism and outright vitriol we received was fucking insane. There are people out there who really have it out for BPT, and we banned dozens of users daily who were legit there only to drop the n-word with a hard r and make jokes about monkeys or something.

I deleted my reddit account in a panic after someone sent me a DM saying they knew where I lived, and I saw a guy wandering around my apartment complex with his phone out like he was recording. I never want to be highly visible on reddit again.


Wow... this and the stabbing story are making me happy that I only mod relatively tame subreddits.

BPT also did that "April Fools joke" where only verified black people were allowed to post there, so it's not really a surprise that an overtly racist subreddit attracts racists.

I feel the safest way to use Reddit is to create a new account for every subreddit/community/subculture that is in any way tied to real life, and absolutely minimize the amount of information you give out in each one by keeping it relevant to that particular topic.

If you have some niche gaming/entertainment/etc interests, use an account specifically for those without ever delving into personal details.

Have a separate account for BPT, where again, you minimize sharing personal details.

If you have a hobby that has more real life interaction (for instance is location based, or is small and you use personally identifiable information), use a separate account and minimize sharing anything not related to the topics.

Obviously if reddit gets hacked/info is leaked good luck, but controlling available information about yourself seems increasingly mandatory to protect yourself from the crazies if you want to participate in online discussions.


My solution was just to not use reddit interactively anymore. I used the site for about 10 years before I got tired of wasting afternoons browsing or arguing past other users. Stuff I want to discuss usually gets linked in a discord channel and I talk about it with my friends.

When I browse now I just skim a few subreddits I like and refrain from interacting.


I think the biggest problem with reddit is that the simplest subreddit name is going to attract the largest audience, and if you don't like the moderators or community there, then there's nothing you can do about it. There's only one /r/politics, and if you don't like it then you'll have to move to, say, /r/notpolitics, which is going to have less visibility and a smaller community simply because of its name. What would help would be, for lack of a better term, "namespaced" subreddits. There wouldn't be a /r/politics, there'd be a /r/namespace1/politics and a /r/namespace2/politics, and there'd effectively be no problem with visibility: if someone's looking for a subreddit to discuss politics, then there isn't a default, and you'll have to explicitly choose which community and moderation style you want. The best way to implement this would be a federated reddit, with each server being its own namespace. If you don't like federated-reddit1.com/r/politics, then move to federated-reddit2.com/r/politics. There'd be a problem with bubbles, but that wouldn't be any worse than what already exists.

GallowBoob tells a sad story, but the fact remains that they and a few others had far too much power. They were arguably also spread far too thin. So altogether, I'm not sad to see them go.

Also, their behavior deleting posts and banning users over discussion of this issue has been egregious.


Big/default subs are fast becoming outright mainstream trash on reddit. Fortunately, like small businesses in the economy, the smaller subs are holding the whole show together.

This is a positive change -- that one user account had too much power over too many subs. I refuse to believe money wasn't becoming involved.

Size breeds skepticism and the GallowBoob account (whomever was using it) just got too big.


Good riddance, IMO

There is definitely an anomalous uptick in reddit harassment these days. I posted a positive article to /r/Coronavirus and was immediately harassed in the dumbest way possible (someone tried to counter the article I posted, but their argument actually restated the article's thesis). He ended up getting a ton of traction, probably because he sounded confident. It left me confused and saddened. I owe a lot of what I learned in the last few years to great discussions on reddit and HN; hopefully reddit's deterioration by bored/malicious actors is stopped.

Well, the coronavirus really does bring out the worst part of Reddit. If your article at all suggested that cases are decreasing or that it's not going to lead to the end of the world, that sub is going to attack you.

Just five months ago users were getting permanently suspended from Reddit for mildly criticizing GallowBoob.

You can see it here: https://old.reddit.com/r/AgainstKarmaWhores/comments/eb146l/...

I doubt that he would walk away from the website even though a number of users would be happier.

There are hundreds of fantastic moderators on Reddit that genuinely help people and there are also a few people who want to feel powerful by manipulating others.


He's probably just going to make a new account.

Oh good. Blocking him made reddit much more enjoyable.

'top user' - who gives a fuck?

Kind of like me claiming I'm the 'top user' in GTA V?


I'm sure there's an article in there somewhere: https://i.imgur.com/GqQCnVG.jpg

For some reason it even crashed my mobile browser.

As I was scrolling, more ads loaded and kept moving the text around.


Moderation is akin to public service. Taking on the duty means putting yourself in a place of visibility and accountability. It is no surprise that the pressure is too much for some.

Oddly enough there is nothing to be seen about this on the reddit frontpage.

Thank god he left. He’s a pathetic excuse for a human being.

That being said, a great number of the active mods on that platform seem to be a couple steps away from becoming power-tripping maniacs. It’s one thing to stop spam and illegal content. It’s another to censor discussions because whatever content was posted can be interpreted as against rule 4.1B §2. How hard is it to leave the content up and let users downvote it if it’s not relevant?


The worst spammer on reddit leaves the site? Good riddance. Though I'm sure he has a few spare accounts to post his stuff.

I'm sorry but I don't know what to make of this. Of course harassment is terrible, but what the hell is going on?

Last week a reddit thread discussing this list was linked on HN, and someone linked to a site where you could see deleted comments. There were a lot. But it was mainly people asking very reasonable questions and wondering out loud whether these "power mods" got paid, perhaps by advertisers or influencer agencies, because doing this kind of moderation is clearly way more than a full-time job. Other deleted comments wondered if those accounts were perhaps shared (again, a super sensible question given the amount of time it would cost). At the very worst, some of the deletions devolved into wild speculation (as reddit is wont to do).

Point is, obviously they do get protection from the site. And apparently sometimes unreasonably so.

Some stinky stuff is going on below the surface, is my feeling. Doesn't mean that person gets to be harassed, of course.

Also, I don't really buy the mental health comparison to FB moderators. The latter is an absolute shit job, and it's done by anonymous teams of paid people. That doesn't really square with being the most upvoted Reddit celebrity of all time, with their seemingly superhuman sense for posting the juiciest meme at the most opportune time.

