I'm sure what they have in mind is pretty well considered; however, the pitfalls I outlined above still remain. For instance, different people have different ideas of what a "reasonable person" is. Generally, one that is pretty close to themselves! "Systemic and/or continued" also means different things to different people. For instance, the opponents of Gamer-Gate claimed (and perhaps genuinely believed) themselves to be targets of a "harassment campaign", when in fact they were receiving disagreement and mockery from many different sources, each only communicating once or a handful of times. [1]
So, that wording of the policy still leaves plenty of leeway for the sort of abuse I described above.
It doesn't have to apply everywhere but it's still a good policy in a lot of contexts. I think a massive general audience platform is a good example. If this were, let's say, an online community of survivors of abuse, maybe that sort of prudence could reasonably take a back seat.
Interesting. Do you have experience moderating a comparable community, or what leads you to believe that it's not as much of a problem as they make it out to be?
I'm sure it will always be possible to find a way to harass someone. But do you believe putting barriers in place has no effect on the amount of harassment that targets have to endure? What do you think of the computer security concept of "defense in depth"?
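To make the "defense in depth" analogy concrete, here is a toy sketch. All layer names and catch rates are hypothetical and purely illustrative; the point is only that several imperfect barriers compound, even though none of them stops everything.

```python
# Hypothetical "defense in depth" model for moderation barriers.
# No single layer is perfect, but each removes a fraction of the
# abusive traffic that reaches it, so the layers compound.

def surviving_abuse(volume, layer_catch_rates):
    """Return how much abusive traffic gets past every layer."""
    for rate in layer_catch_rates:
        volume *= (1.0 - rate)  # this layer lets (1 - rate) through
    return volume

# Three imperfect, illustrative layers: a keyword filter, a rate
# limiter, and a human review queue, catching 50%, 40%, and 60%.
layers = [0.50, 0.40, 0.60]
remaining = surviving_abuse(1000, layers)
# 1000 -> 500 -> 300 -> 120: barriers don't eliminate abuse,
# but they sharply reduce what targets actually see.
```

None of these layers would "fix" harassment on its own, which is exactly the security argument: the question isn't whether a barrier can be bypassed, but how much gets through the whole stack.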
I think this sort of thinking is far too fatalistic and basically throws people who cannot deal with abuse and hate under the bus to boot.
The platforms that hate and abuse are delivered on are currently under human control. The idea that we have no control over whether or not we receive hate and abuse is simply not true. The solution, moderation, isn't even new; it's a core feature of all polite online spaces. HN is a good example.
This is kind of what I mean. This seems like shifting the goalposts to me. They had the whole harassment patrol thing on Twitter, which I found sort of silly, personally, but I don't know how that would be described other than as an effort of some sort. I certainly wouldn't call it an abject refusal to even try.
Like I said, I think it's a larger problem of shitty behavior on the internet in general, and I don't think anyone has a real solution at this point. I find it unfortunate that people use that behavior as an excuse to talk past each other.
The one thing I have always found is that usually, when there is an actual example of one individual who is directly hurt by someone else's prejudices, there aren't two sides to the issue: it's pretty clear to neutral observers that the prejudiced person is in the wrong.
But in almost every case of people being chased up online for this sort of un-PC behavior, there are no examples of individual people who claim to have been directly hurt by them. There's just the group saying that they're "problematic" in some vague sense.
If we could de-legitimize any conversation about injustices that isn't couched in concrete terms of pain caused to non-hypothetical people by non-hypothetical actions, I think there wouldn't be any "tempest" left.
I never claimed that the ratios are the same. I even pointed out in my OP that they will be different for different groups.
I'm claiming it doesn't matter - if you put yourself out there, you're opening yourself up to be attacked. And the only way to avoid that is to be anonymous, and let your opinions and arguments stand on their own merits.
Anything else has been historically shown to fail. Blizzard's real name policy? Failed to stop the trolls. Community Guidelines? Trolls don't care. A sternly worded letter? Just lets the trolls revel in their success.
"If someone is behaving badly in a company or even in a casual group, you don't just ignore them."
I think I agree, but it goes against the grain of what people previously thought you should do on the internet.
>harassment campaigns or spreading misinformation
I am genuinely curious about the flora of the NPC movement. I mean, this territory between meme and organized campaign is really interesting, and it's sad that it comes at society's expense.
You might want to read what I actually said instead.
I know abuse happens. In fact, it happens on all sides, but gets downplayed when it happens to politically unpopular targets. Even some of the people who've profiled themselves as anti-abuse specialists are more than happy to direct their own followers to targets, share their phone numbers, addresses, and so on. Even without that hypocrisy, the line between legitimate use and harassment is fuzzy.
Just recently for instance, there was that BuzzFeed article about Milo Yiannopoulos, and you could see the morality police mobbing and hounding anyone who was remotely in contact with the guy as being evil nazis. The same people who cry for Twitter to stop abuse. It's only bad when the out group does it, when the in group does it you're just sharing important information and leveraging the network effects of digital media. Guilty until proven super guilty.
There was also the famous case of Steph Guthrie and Gregory Allen Elliot in Canada, where simply denouncing a feminist witch hunt was enough to land the guy in a long and arduous harassment case he eventually won, during which his accuser claimed with a straight face that he didn't have the right to defend himself against her. And he wasn't even the guy she was originally after.
Closer to home there was recently drama in the node community where, after failing to vote out a community leader for having shared the wrong opinion on codes of conduct, the persecutors themselves were found to be harassing people and advocating violence shamelessly... Which suddenly was not grounds for a reprimand.
To my knowledge, nobody is receiving hundreds of credible death threats. Rather, offhand comments like "I hope you die" or offensive memes are deliberately being spun into a moral panic because it's useful as tribal ammo. When the network effects and witch hunts go the other way, it's waved away.
The truth is somewhere in the middle, and not everything is what it seems. But I know I'm not going to get an objective assessment from the people who think it's ok to fire others for disagreement, especially not when they don't just summon social media followers, but instead recruit a cabal of journalists with much larger platforms to conduct their smears.
PS: Milo is an edgelord try-hard, and I don't like him. But all he is is the Shanley Kane of the right, with more talent.
We agree on the principle. I’m just pointing out that much smarter people than you and me have tried and failed to solve this for decades.
My best understanding is that this is human nature; it cannot be solved, only mitigated. Smaller-scale and niche games do this with human intervention (server admins), and if the community is small enough, shame can work.
At a certain scale, it breaks apart. Toxicity unpunished leads us to the “toxic gaming culture” we have today.
Like you, I am horrified by the surveillance society being built around us. I genuinely wish there were a better way. Your OP gave me the impression that maybe you had an idea for a solution, hence my follow-ups.
Barring that, in the context of gaming — where kids and teens spend a significant part of their lives these days — I'd rather see a non-optimal, opt-out solution that removes the toxic elements, so we don't raise yet another generation of casual racists and sexists.
This article sounds woefully out of touch with the realities of the modern web.
So many of the arguments surrounding Damore talk as if this happened in a vacuum, when frankly, it didn't. Much of the message it carried (and did little to discourage) came off as insulting to a group of people who are commonly harassed every day, just for who they are, any time they go online.
The article talks a lot about the fear of an online mob shaming people for doing something, yet completely ignores that many of those in said mob are harassed daily based on their mere existence, and often their "shaming" comes from a place of just not wanting to be hurt again.
Is it necessarily the most effective means of accomplishing this? Certainly not, and I'm not about to claim to support what it becomes. But to call that into question, and to speak from such a "nice problem to have" place to people suffering a much worse version of what you're talking about, reeks of "let them eat cake".
We've just been through it. Yes it exists and yes it is.
What's more, you are the ones derailing MY point about harassment online. So if you care about derailing a conversation I started with Jerf, please don't simply restate what we've already been over.
I understand, and I agree with you for the most part. Most of what we're discussing exists in a grey area to me, one that can only be clarified if we deferred to real-world norms at the expense of forfeiting some of our natural inclinations in a digital space. As much as it may be in a person's best interest to provide an explanation for their exit and expungement from an online community, I'd offer a similar argument: it would be in the best interest of certain people to defer the responsibility of checking in on that person's well-being to those known to have an actual relationship with them (i.e. one that isn't predicated on the environment the person is tearing themselves away from in drama-inducing fashion). I think this is where scale becomes an issue, like you said.
> Your last point would hold if we only had the mental capacity to pity one person at a time, in which case would Rebecca Black really win over, say, victims of child sex trafficking?
My point is that the irrationality and hostility of the Internet and its users is by no means a new thing, and - as much as I sympathize with Ms. Wu - nobody is exempt from online dickery.
> Are these obscenities received every waking moment, or only during the game?
Depends on whether or not you choose to connect a real-world identity to your in-game identity. This is often the case for, say, screencasters, "Let's Play"ers, etc.
> Do they know your real name, your home phone number, your address, or just your avatar name?
Depends on how motivated they are, and - again - whether or not you've made any connection between the real and unreal worlds for these sorts of trolls to latch onto.
> Are they threatening your online personality, or threatening you?
The latter in many (if not most) cases. That's how it works in a lot of these situations; the other players are real people, and therefore not subject to Fourth-Wall-induced restrictions.
> Please don't trivialize these threats by comparing them to in-game banter.
You'd be surprised how frequently "in-game banter" is the understatement of the century.
Really, given the tenor of some of the online communities I've been involved in, what passes for unacceptable behavior here is still practically civil. I understand the desire to 'engineer' the problem away, but I don't think that's really possible, apart from the community itself being more tolerant and the moderators being more strict but fair. It is a public forum, with a practically nonexistent bar for entry. And I've seen older, established posters act with vicious condescension while newer posters are polite and considerate. You have to accept a certain ground level of chaos, bigotry, backbiting, trolling, and noise as part of the system.
It sure is, which I believe is why they decided to implement this policy. They realized even they can get dragged down into toxic behavior (name calling and accusing others of bad behavior publicly). It's better to eliminate the possibility altogether.
I'm struggling to get past OP's central conceit here: they want to share views that are generally considered unreasonable, but are then surprised when others in turn behave unreasonably toward them. It really seems like two peas in a pod. (Worth noting we are getting the most positive read on OP's proposals via self-disclosure, while we assume the most negative responses via the same.)
It's also worth noting that the OP has not claimed they have suffered verbal abuse, harassment, or threats. It's as likely that if they were speaking in person, the typical response would be someone disengaging from the conversation with haste. I imagine many HN readers are very familiar with how the nature of online discourse fundamentally shifts the responses available.
>Trying to remove anonymity will not 'fix' trolling and harassment. After all, a lot of people simply don't care what others think of them, know they're too far away to be affected (in most cases) or just have nothing to lose to begin with.
While that is true, many times what person A considers to be harassment, the person harassing them does not consider harassment at all.

This is not to say the person is not being harassed, although I have seen many examples of people claiming to have been "harassed" where, when I view the Twitter history, for example, I do not consider it harassment. That said, abusers often do not self-identify as abusers; a wife beater, for example, does not believe they are abusing their victim.
>>Facebook and Twitter struggle with this, partly because it's impractical to manually moderate a site with so many users and partly because their model doesn't really allow it.
Facebook and Twitter struggle with this because they refuse to create the clear rules for participation you stated other sites have. They proclaim to be "free speech" platforms welcome to "all ideas and discussion" while creating mushy and loose rules that are highly subjective and make enforcing those rules problematic.
Whereas the sites you describe are normally subject-matter sites, so it is somewhat easier to define a more rigid rule set.
Facebook's, Twitter's, and to a lesser extent Reddit's problem is that it is impossible to be both a "free speech" platform and a censor... These are mutually exclusive concepts.
I did read that in your prior comment [0]. And I fully agree.
But I will say, despite my agreement with it, that post of yours does not actually address this current subject of outrage. It is true that one component of harassment is the pervasive social attitudes held by the majority, and thus can be changed through social integration. But concentrated harassment also comes from a small minority that are either deliberately trolling, or so socially mistuned as to use a comments section as if it were a bar. All of the outreach and awareness is not going to change this latter group - think of Pascal's Wager for a date.
It's not that I don't "want" a solution to the latter. It's that I don't see how one, as envisioned, can exist. Education and awareness can shrink the quantity. Centralized sites with heavy moderation can shrink the quantity [1] [2]. Maybe even personal filtering-assistants could shrink the quantity. But at some level of fan-in it is still going to be present and require that people build their own mental defense.
[0] FWIW, I clicked the up arrow on that comment, and this one as well. But my votes don't actually count because it was determined that I upvote "low quality" comments - a year or two ago, anti-circlejerk comments were forced to really snipe.
[1] You have a post about rejecting offers for dates on HN. I don't believe I have ever seen such a proposal on HN (and I'm showdead=yes). I'm not refuting your experience, just contrasting - an event can be objectively extremely rare, yet still quite personal, intense, and resentment-forming.
[2] Furthermore, this ignores alternative avenues of contact - I don't think Medium passed along all this harassment itself. Which is why an ambiguous "something" can't help but imply some greater control over the wider Internet.
[1] https://en.wikipedia.org/wiki/Out-group_homogeneity