Twitter is built for harassment. The medium makes it inevitable: you write a tweet and shoot it out into the ether, and it ends up on a couple million Twitter feeds whether those people asked for it or not. It angers some of them, and they retweet it to show their followers how stupid you are. Now all their like-minded friends see that tweet in their feeds; a bunch of them are suddenly aware of your existence, are mad, and tweet back at you. This is far worse when you participate in a contentious issue on a hashtag, and it's all before you even get into hate-following and active, deliberate harassers. Then there is scale: there are a couple hundred million people on Twitter, and if even 0.01% of them are sociopaths, that is a near-unmanageable problem.
Finally, a significant portion of Twitter is quite literally trolls trolling trolls. They can both dish it out and take it, and don't see the problem. Good for them, but it's not compatible with everybody else.
I think Reddit is really interesting in this respect - you have pro-skub and anti-skub subreddits sitting side by side. Yes, they get caught up in downvote brigading and whatnot, but battles aren't the status quo like they are on Twitter.
On Reddit, the subreddits you subscribe to are your choice, but if the groups of users on Twitter are homogeneous enough, I wonder if you could automate the siloing of users. People with correlated behavior get moved into their own silos, and then if you or other users in your silo block a person, they don't show up in your feed.
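A toy sketch of what that siloing might look like, assuming each user can be reduced to a behaviour vector (e.g. who they follow and retweet); `user_vectors` and `blocks` are hypothetical inputs, not anything Twitter actually exposes:

```python
# Hypothetical sketch: cluster users by behaviour, then share blocks
# within a cluster. Structures and thresholds are invented for illustration.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

def build_silos(user_vectors: dict, n_silos: int = 50) -> dict:
    """Group users with correlated behaviour into silos."""
    users = list(user_vectors)
    features = np.array([user_vectors[u] for u in users])
    labels = KMeans(n_clusters=n_silos, n_init=10).fit_predict(features)
    silos = defaultdict(set)
    for user, label in zip(users, labels):
        silos[label].add(user)
    return silos

def effective_blocklist(user, silos: dict, blocks: dict) -> set:
    """A user inherits every block made by anyone in their silo."""
    my_silo = next(members for members in silos.values() if user in members)
    inherited = set()
    for peer in my_silo:
        inherited |= blocks.get(peer, set())
    return inherited
```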
Who defines harassment? I've seen someone be banned from Twitter for criticising someone they've never even contacted. The Verge, in one article, listed a person who did not threaten but simply insulted another person as a harasser, alongside actual threats. People in arguments publish other people's phone numbers to a few thousand followers, then drop offline and come back later. AFAICT the argument goes that if you insult someone, it's 'calling them out', but if someone insults you, it's 'harassment'.
I _can_ comment on all the other statements cited on that page. But others already did, so why should I repeat more of the same?
"This is [harassment]" meant all of it, incl. the one I referred to (and a bunch of others that may be angry disagreement, but are not violent, and not necessarily harassment). Comes to show that the definition of harassment isn't as easy as "here's a list".
Since you asked so nicely, to give you a comment on all those other posts, let me stick to those that are unquestionably harassment - the ones containing threats of violence and death (I just don't feel like arguing about a more precise definition with a random internet stranger right now):
It's kind of amazing how limited the vocabulary and breadth of those statements is. It's essentially the same 4 or 5 insults and threats reiterated again and again, with the most variability in the order of the words - any semblance of grammar is obviously optional in the first place.
So when I first saw that collection (a couple of days ago, so I'm not rage-struck anymore - maybe that explains why I'm not joining the chorus), I wondered whether some trolls had created tons of throwaway accounts, and dug deeper. Those are accounts that are primarily used for other purposes, with a wide range of profiles: old accounts, new accounts, students, adults across all professions (as per their bios). All of them with "normal" posts before the one listed there, and after.
So trolls creating throwaway accounts are out. Since the accounts are still in regular use, it also doesn't look like a massive takeover.
So there are really two options:
1. these accounts are taken over in some non-intrusive way by aggressors (e.g. cross-site scripting, reuse of simple-to-guess passwords)
2. people across the board (all ages, all professions, all sexes and genders) exhibit this kind of conduct[0], and it's likely that in your/my/anyone's neighbourhood there are a couple of such people otherwise living normal lives.
So, people are monsters, and there's no clear marker to pinpoint them except their conduct.
What else is new?
[0] I do assume that, except for two or three deranged people, they leave it at verbal threats like these - which doesn't mean that this is okay, individually or collectively.
Besides revealing some crass personal flaws, it's just as frightening for the target, since they can't know what's real and what's "for show". There's also a real risk that the truly deranged feel encouraged by the many people who seem, on the surface, to be just like them.
That article is the one mentioned in the post you're replying to. There are actual, horrible threats there, but there are also many people who simply insulted someone else. The Verge seems to think they're equivalent. They're not.
The Verge also ignores death threats and postal threats this person's followers have made - as the parent said, when it's the side you like, it's fine, when it's the side you don't, it's harassment.
You said I 'pulled in an anecdote and don't give a shit'. Harassment normally involves contacting someone, rather than simply insulting them, so that case seems quite relevant. Re: 'don't give a shit' - I do care about this issue; I can simply see the obvious inconsistencies.
I think there's a problem, but not with Twitter the company. I think it's with people who discuss nuanced topics in 140-character sound bites, and the egos at play on Twitter.
It's not. I mentioned it because The Verge said insulting someone was 'harassment'. It isn't, either by the common definition or by Twitter's terms of service.
A simple test: if you think insults should be considered harassment, would you support your own Twitter account being blocked for insulting someone?
People argue over seats on a train. Someone calls the other person a 'selfish prick'. Does the insulted person have the right to remove their 'harasser' from the train? Do you think they should?
If you or a member of your side insulted someone, would you support yourself/them being banned from Twitter?
> The train conductor reserves the right to eject passengers who are being disruptive or abusive of other passengers in that scenario.
That's correct. So is the person who took the seat disruptive, or the person who insulted the person who took the seat disruptive?
Where only insults, and not threats, are involved, this is not a clear matter of right vs wrong. Both sides are saying things about the other that the other side doesn't like.
And repeating the question:
If you or a member of your side insulted someone, would you support yourself/them being banned from Twitter?
The "relentless" part is critical for whether insults (or even just continued asking of questions or repeating points) are harassment. Especially if the insult isn't profane - e.g. "idiot". To what extent is it OK to tell someone they are being an "idiot" or "are an idiot" when they say something idiotic. Some people have even taken to being insulted by being called "sexist" or "racist".
The difficulty comes when someone is relentlessly harassed by a large group such that each individual's insults wouldn't be harassment on their own. The recipient is being harassed, but I'm not sure what the correct response is in this case. It is also difficult to see the size of the group, and who is actually involved, if there are many throwaway anonymous accounts. There is also a hard-to-make distinction between a large group responding to something and calling it out after it gets retweeted (which might briefly feel like harassment), and a group organising to deliberately troll people or jump into every conversation they have.
These are tricky questions and I don't have answers to everything. Threats of violence, while horrible, are at least a simple and clear-cut case that can be responded to (ideally with at least a police visit/warning rather than just a ban from the service). That does not make them the only sort of real harassment, but it is in a way easy to deal with.
The "relentless" part is critical for whether insults (or even just continued asking of questions or repeating points) are harassment. Especially if the insult isn't profane - e.g. "idiot". To what extent is it OK to tell someone they are being an "idiot" or "are an idiot" when they say something idiotic. Some people have even taken to being insulted by being called "sexist" or "racist".
This is another sort of claim that pops up a lot in freeze peach[1] debates, where we pretend that reasonable people do not exist and treat all claims as somehow equal, and thus intractable. This is complete nonsense, and something sane society largely doesn't have as much of a problem with as the internet does - at least as argued by those still desperate to defend the right to an audience for invective.
I do agree with you that it can be picked apart, but it is very hard to do with simple rules.
Society does have problems with it in many areas, most noticeably to me around the intersections of race, religion, and politics. Can criticising the Israeli government be anti-semitic? (It probably can be, but the accusation is also levelled as a defence against normal criticism.) At what point does criticism of Islam, or of particular practices, become problematic, Islamophobic, or racist? There isn't a clear line that everyone can agree on.
I think we probably agree on the answers we would like for these things, and especially in our views about "those still desperate to defend the right to an audience for invective", but it isn't always easy to set rules and operate systems that keep it working.
Coming from experiencing extreme verbal abuse offline, I'd counter by pointing out how even just talking about and repeating forms of verbal abuse is dangerous.
Maturing moderation through systems such as Aether (http://www.getaether.com) is going to need free speech in order to evolve systems that filter freely.
Social media support-group systems struggle to talk about trouble directly, because we're waging war on words when we still need to start addressing the underlying battles.
I am totally on board with better self-moderation tools; I'm a proud user of BlockTogether and theBlockBot - they've made Twitter almost 100% more pleasant to use than it was before.
I've even had some ideas myself for apps that might provide better platforms for some; I just haven't got the chops to pull it off yet. In particular, I think we need more platforms that allow public address without the implicit expectation that doing so makes you open season for every anonymous troll on the internet.
I also think that services can do a better job of turning all this 'big data science' to problems like this; it doesn't take a genius to realize that if a Twitter account pops up with no followers and its first few dozen messages are all loaded with invective and little else, maybe that account wasn't made in good faith.
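A minimal sketch of that heuristic, with invented field names and guessed thresholds - nothing here reflects Twitter's actual scoring:

```python
# Flag brand-new, followerless accounts whose early tweets are almost
# entirely @-directed invective. Thresholds are illustrative guesses.
from dataclasses import dataclass

@dataclass
class Account:
    followers_count: int
    age_days: int

def looks_bad_faith(account: Account, tweets: list[str],
                    abusive_terms: set[str]) -> bool:
    if account.followers_count > 10 or account.age_days > 7:
        return False
    if len(tweets) < 10:
        return False  # too little signal either way
    abusive = [t for t in tweets
               if t.startswith("@")
               and any(term in t.lower() for term in abusive_terms)]
    # Don't auto-ban on this alone; queue the account for human review.
    return len(abusive) / len(tweets) > 0.8
```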
Committing ignorance can be read as trolling, and as dangerous, so problem-solving by forcing a win-lose memory process ends up repressing asking to account for any deeper empathic accuracy of a real, stable log.
What would be genius is if we could design a system that lets users filter, and share filters, without categorically/deleteriously removing the parts of people's real-life data we have trouble with.
If we want to wholly deal with safety, sanity, abuse, pain, and the words we equate with the power to record, we need to be able to read and write by sorting with enough due diligence to keep every end together - every bit of speech a turn worthy of an ID - and then let users build tools to filter, tag, label, and identify problems openly.
Yes, you just wish to valiantly defend the right to berate someone in a public place, to the extent of dithering over which forms of base harassment you consider perfectly within rights, apparently.[1]
This is the "freeze peach" argument all over again, and does nothing to dispute my point; it rather demonstrates it quite effectively.
So I ask again, since you're so fond of leading questions: why should the internet be 'special' as regards the consequences of being a twat to other people, be it in public or in private?
Twitter is under no moral, ethical, or legal obligation to allow you to call someone a "despicable whore" on their website.
> Yes, you just wish to valiantly defend the right to berate someone in a public place,
No. You're making up a straw man again. The 'freeze peach' thing is childish.
In the post of mine you link to - https://news.ycombinator.com/item?id=9002479 - I said that insults are not harassment according to the Twitter ToS. That is different from condoning harassment. You're probably smart enough to know that, but are trolling.
I never said the internet should be special: this is another straw man, and seems to be projecting a little about your own position. You don't have a right not to be insulted in real life, and you don't have one on the internet.
"I think there's a problem, but not with Twitter the company. I think it's with people who talk about nuanced topics over 140 character sound bytes, and the egos at play on Twitter."
Wait, the problem is with someone trying to discuss or broadcast the nuanced topic, and not those making violent/death threats, or the branded conduit doing virtually nothing to discourage such behaviour?
Communication is a rather complex matter and having any single party decide what a given message is supposed to mean misses the point (see for example https://en.wikipedia.org/wiki/Four-sides_model).
Even simple societies end up appointing judges (impartial or not) to provide an outside view on a conflict, instead of letting the participants decide on any claim's merit.
That's totally fair. However, I feel that considering the current state of harassment on Twitter, a simple definition is appropriate and will be more effective in practice than a more nuanced approach. I certainly hope Twitter and its trolls clean up their act to the point where we have the luxury of using impartial judges.
I get your overall point, but in turn you've overlooked the relationship layer - basic compassion means that the victim gets to make the call as to whether something hurt them.
If someone is stabbed, do you require empirical measurements of their injury before you accept their pain?
How do you define the victim here? People are responding to someone who said a) that they play games which give them extra points for murdering women (this is provably false), and b) that they hate women. Do you consider insulting a group to be harassment as well?
The theory states that person A is in a state on all 4 sides and encodes that into a message. That message is transmitted (in the case of Twitter: verbal only, no other cues) and then decoded onto the 4 sides by person B. By then, it can mean something else entirely, because decoding isn't simply the inverse of encoding. The two people likely don't even weight the 4 sides the same way.
Now, it's hard for me to imagine a less threatening interpretation of "Someone will rape and kill you" (to paraphrase most of the femfreq collection) than what the average person would make of it, but the rule I responded to was stated as a general assertion.
If the victim defines their status universally, the best way to live is to shut down all communication, since the sender has exactly _0_ control over how things end up being received.
"The victim defines it" means that a simple "hello" can be considered abuse.
Given the right circumstances, that might even be true (to stick to the theme: a stalker contacting their victim), but western society (which Twitter presumably subscribes to) doesn't typically hand out sanctions without having a third party determine the circumstances. Taking your stabbing example: accepting the pain (and calling the ambulance) is one thing; throwing the assailant in jail has a higher standard.
The discussion is about sanctions which are closer to the latter than "accepting their pain".
In this case the service provider - Twitter - should define it, since they are responsible for implementing any measures against harassers.
Personally, I think any threat of violence - whether it is a realistic threat or not - should constitute harassment.
A lot of the tweets in that link would qualify. I won't repeat them here, but there are a lot of threats that basically say the tweeter will rape and/or kill the recipient.
> In this case the service provider - Twitter - should define it, since they are responsible for implementing any measures against harassers.
Agreed 100%. The Verge has a different concept of harassment from Twitter's - you can't report someone on Twitter merely for calling you a 'bitch'. If The Verge wants this changed, they should say so explicitly. They should also consider whether they ever insult anyone else.
Agreed about threats of violence. Whether to block unrealistic ones too, I'm not so sure.
Tweets labelled as harassment by The Verge:
- 'You're a bitch'. Sorry, insults aren't against Twitter's ToS.
Not sure why this is being downvoted - diluting the meaning of 'harassment' could have very unfortunate consequences both for victims of REAL harassment and for the general usefulness of social media. It seems reasonable people should be able to agree on what constitutes harassment, versus what clickbait factories like Gawker and The Verge are casting as harassment in the name of ad impressions.
I think anonymous accounts should be expensive (in terms of time) to set up. Anonymity and pseudonyms are important for those who need to whistleblow or who belong to oppressed groups, so the possibility absolutely needs to be preserved. But what isn't needed is cheap throwaway accounts that can be used for abuse and then discarded as soon as they are banned.
If, to sign up anonymously, you had to do something - a quiz, or playing through a game that took about 30 minutes - then that would reduce the rate of account creation for abuse. If you were prepared to give a real phone number (and use it for verification), then you could bypass the task and get an instant account, but obviously any ban would apply to the phone number, not just that particular account.
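A sketch of the "ban the phone number, not the account" half of that idea; the storage is a stand-in for a real database, and the helper names are made up:

```python
# Hypothetical sketch: bans attach to a verified phone number, so every
# account created with that number falls with it. Numbers are hashed so the
# ban table isn't a directory of users' numbers (a real system would also
# salt and rate-limit, since phone numbers are easy to enumerate).
import hashlib

banned_numbers: set[str] = set()
accounts_by_number: dict[str, list[str]] = {}

def number_key(phone: str) -> str:
    return hashlib.sha256(phone.encode()).hexdigest()

def register(phone: str, username: str) -> str:
    key = number_key(phone)
    if key in banned_numbers:
        raise PermissionError("this phone number is banned")
    accounts_by_number.setdefault(key, []).append(username)
    return username

def ban(phone: str) -> list[str]:
    """Ban the number and return every account it ever verified."""
    key = number_key(phone)
    banned_numbers.add(key)
    return accounts_by_number.get(key, [])
```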
Like governments that promote security and safety at the expense of privacy and freedom, Twitter has too bad a record of dealing with internal abuse to be trusted. For example, there were stories about them taking usernames away from people (because a celebrity showed up). Similarly, any kind of process they set up for dealing with external abuse can itself be abused: people will flag accounts they disagree with, hoping they get banned, and others will DDoS the system by flagging too many accounts, making it impossible for Twitter to keep up.
The solution is simple: set up a new service, see if users use it.
Exactly this, and what's so mind-boggling is that this is not a new problem by any means. I'm sure that among Twitter's worries is how to effectively scale an abuse-prevention operation, but there are some pretty easy automatic filters that would go a long way. A few that come to mind (provided accounts are tied to phone numbers; a sketch follows the list):
1) A block that prevents the harasser from even @mentioning the harassee in any way.
2) Automatic disabling of accounts that are blocked by more than two users whom the harasser explicitly @mentioned
3) A "dog house" mode which automatically disables distribution of a user's tweets who have been reported as harassment without explicitly telling them, leaving them to harass nothingness until they get bored and leave.
(replying to both since they're effectively the same comment)
Abuse of such things is definitely a challenge, and there is finesse in choosing the right thresholds. I doubt Twitter can fully escape manual moderation, and as such, one of the goals of a system like this would simply be to reduce the number of situations that require human intervention. Undoubtedly, there will be false positives, but abuse of the abuse system will likely be rarer than abuse as a whole, and if a human moderator finds such meta-abuse, it would be grounds for the strictest means of blocking Twitter can provide, thereby hopefully incentivizing against it.
Blocking is an issue, since that can be weaponized way too easily: "I don't like your non-harassing opinion, so I'll get two buddies to engage with you, then block you once you respond."
The "dog house" could even work for cliques: If a certain number or percentage of your peers (as determined by follow relationships and metions) block a user (for themselves), the platform could make them disappear for you as well (unless you engage with them first).
That should be good enough to keep adverse groups separated while avoiding the issue of "I'm not abusing them, I'm calling them out" that's already mentioned elsewhere.
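A rough cut at that clique version, with made-up structures: hide a candidate from me when enough of my peers have blocked them, unless I engaged first.

```python
# Sketch only; `follows`, `mentions` and `blocks` map a user to sets of
# users, and the thresholds are arbitrary illustrative values.
def hidden_for(me, candidate, follows, mentions, blocks,
               share=0.3, min_peers=5) -> bool:
    peers = follows.get(me, set()) | mentions.get(me, set())
    if not peers or candidate in mentions.get(me, set()):
        return False  # I engaged with them first, so don't auto-hide
    blockers = sum(1 for p in peers if candidate in blocks.get(p, set()))
    return blockers >= min_peers and blockers / len(peers) >= share
```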
What if a user could toggle a setting through which users of a certain type couldn't @reply them at all? e.g., accounts created within the last month, or accounts with fewer than 50 tweets, etc.
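Such a toggle could be a tiny per-user setting; the field names below are invented for illustration:

```python
# A sketch of the "established accounts only" @reply filter described above.
from dataclasses import dataclass

@dataclass
class ReplyFilter:
    min_account_age_days: int = 30  # e.g. accounts under a month old
    min_tweet_count: int = 50       # e.g. accounts with fewer than 50 tweets

def allow_reply(sender_age_days: int, sender_tweet_count: int,
                f: ReplyFilter) -> bool:
    """Only deliver @replies from accounts old and active enough."""
    return (sender_age_days >= f.min_account_age_days
            and sender_tweet_count >= f.min_tweet_count)
```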
1) kind of already happens - they can mention, but it's not linked, which, laughably, is enough to prevent a significant degree of mobbing by their own followers.
2) Someone in the atheist-skeptic community made a tool called the BlockBot, ostensibly meant to automatically block people from a list - people you never want to talk to in the first place, because the BlockBot operator had already identified them as trolls. Unfortunately, the real reason was that the developer had figured out that mass blocks triggered an automated response mechanism that suspended those users. It was really an offensive mechanism, not a defensive one. This has been somewhat changed, but the reason I bring it up is that this has been tried by Twitter, and unscrupulous people used it as an attack tool.
3) is similar to 2) but I suspect they are already doing this in some cases.
I don't think anonymity is the real issue, although it makes trolling easier. Services such as Facebook and YouTube require real names, yet they still have lots of people trolling each other. I think it is rather the feeling of being able to get away with it that causes people to have an illusion of anonymity, whether or not their name/phone number is actually associated with their account.
I'm curious whether it isn't feasible to do traffic analysis and nip trolls in the bud when it turns out that one or more twitterers are sending a large number of tweets to a single person without an actual conversation taking place.
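One naive way to run that analysis, assuming a log of (sender, recipient) pairs extracted from @mentions; the threshold is a guess:

```python
# Flag heavy sender->recipient traffic where no reply ever comes back.
from collections import Counter

def one_sided_floods(pairs: list[tuple[str, str]], min_count: int = 20):
    counts = Counter(pairs)
    seen = set(pairs)
    return {(s, r) for (s, r), n in counts.items()
            if n >= min_count and (r, s) not in seen}
```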
> I don't think anonymity is the real issue, although it makes trolling easier.
Anonymity isn't the real issue (though it's a compounding factor); the issue is how easy throwaway accounts are to create. For victims of abuse, it's not possible to "manage" abuse by blocking abuser accounts, because within seconds the abuser can be on a new, yet-to-be-blocked account. In fact, I'd expect most serial abusers don't even bother waiting: they create an account, throw out a salvo of abuse, create a new account, new salvo - and the account creation can be trivially automated.
If accounts are either cheap and unique (based on "real-world" identity) or relatively expensive (time-wise) but anonymous, the ability to block people becomes far more useful for the serially abused.
> “What terrorist group does this guy belong too?”
> “this is horrible, Facebook should be promoted by a different person, i have lost a lot of respect for Facebook because of there choice to use this person.”
> “Can I borrow your towel on your head I ran out of toilet paper please sir?”
> “What’s up with the rag head?”
> “Get that Isis terrorist off of your ad”
> “Get this camel jockey off my news feed pronto!”
And they can be banned with a real cost to the owner. The decision about whether or not particular things cross the ban line is separate from the design of a system that imposes a real impact when the ban occurs.
Also, in the context of harassment, they can be blocked by the person being harassed.
Definitely worth considering, but any significant impedance to joining the network would have a financial cost for Twitter in terms of adoption, and would never fly.
Teamspeak v3† solves this in an interesting way: the program supports easy creation of anonymous (or rather, pseudonymous) identities, each associated with a unique ID which takes an adjustable amount of time to compute on the user's computer. The adjustable parameter, called "Security Level", is expressed as a small integer. To protect against a flood of new (spam) identities, channel owners/operators can select the minimum difficulty allowed on the channel. The higher the Security Level required, the more time it takes to create such an identity. Values around 24 represent a dozen seconds' worth of computation, while around 28 is a few minutes.‡ This easily discourages casual spammers, while legitimate users have no problem spending a few dozen seconds, or a few minutes, creating an identity.
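I don't know TeamSpeak's exact algorithm, but the general mechanism is hashcash-style proof of work; a generic sketch of the idea, where the security level is the number of leading zero bits in a hash over the identity plus a counter:

```python
# Not TeamSpeak's actual scheme - just the shape of it. Each extra level
# roughly doubles the expected work to mint an identity, while verification
# stays a single cheap hash.
import hashlib
from itertools import count

def leading_zero_bits(data: bytes) -> int:
    value = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return 256 - value.bit_length()

def mint_identity(pubkey: bytes, level: int) -> int:
    """Grind counters until the identity reaches the requested level."""
    for counter in count():
        if leading_zero_bits(pubkey + str(counter).encode()) >= level:
            return counter

def security_level(pubkey: bytes, counter: int) -> int:
    """What the server checks when a client presents an identity."""
    return leading_zero_bits(pubkey + str(counter).encode())
```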
She is clearly receiving harassment and is prepared to publish it, so why the hell wouldn't they bring her into it? And if you think she gets too much exposure, why did you mention her?
I'm only aware of her because of the harassment she and others have received. The people trying to silence her with abuse are amplifying their opponents, discrediting themselves, and proving the need for change in these areas. There are clearly some people going out of their way to try to bully vocal women into silence, and I'm glad it isn't working.
Twitter isn't alone in this. Facebook and Snapchat both have issues with harassment; YikYak had a bad case of it too. I think that if these companies really cared about curbing harassment, they would approach each other and tackle the issue together. If someone is trolling on Facebook, they probably do it on Twitter too.
Riot Games (League of Legends) has the Tribunal, where players who harass get judgements handed down by other players. Perhaps social media needs a tribunal of sorts, with participation from Twitter et al. to enact "sentences".