
> Hogwash. We have harassment, stalking, libel, defamation, and slander laws in meatspace for a reason. The fact that doing it on the Internet exempts you from these laws because enforcement isn't "cost effective" is a problem.

All those are illegal online too. But if I go to the pub and discuss with my friends some harassment I'm going to carry out later, the pub clearly isn't held liable for it. Why? Because even though he owns the place, the pub owner has no responsibility for what is said inside the tavern walls.

So, what is taking place on these forums is rarely illegal.

> The question at this point is whether we put laws in place such that individuals can seek cost-effective justice, or whether we wait until so many people get burned that they start clamoring for the government to step on the Internet with jackboots.

And then they move to mailing lists, or FMS. Then what? You going to shut down Tor as well?

My opinion is this: people should grow thicker skin. Those who fail to do so pose a threat to society, insofar as they call for further censorship. Therefore, anyone who is harmed by these people can post-hoc be considered to have deserved it.

It used to be the case that people were recommended to not use their real name on the Internet for basic safety reasons. Now they are encouraged to do so, and whoever doesn't is branded a 'troll'. Who is really at fault here?




>You and others have come onto this site and continued the harassment via telecommunications devices.

I use this forum every day because it is a place for civilized communication with strangers; let's keep it that way, eh? This suggestion that I'm gaslighting or cyberstalking you is why the laws that define those terms are incompatible with a Free and Open society. You're just mad and want the government to make you feel better by getting some revenge for you, and you don't care if that affects other people's ability to speak in the future about something actually important.


>Finally, it's time to re-examine the current legal system. Online hostility is cross-jurisdictional. We might need laws that directly address this challenge. There is currently no uniformity of definition among states in the definition of cyberbullying and cyberharassment. Perhaps federal input is needed.

I don't understand why people don't get that the internet cannot really be policed! How would a law in the States stop someone in Iran from bullying someone in a forum?

The ones who can stop them are the website owners, moderators, and other members. It is really like real life: some websites are small like small villages and need only one policeman; others, like Wikipedia, are like cities and might need a stronger 'police force'. The 'punishment' is easy and swift: you just ban the user!

What is needed are stronger algorithms to automate some of these chores, not more lawyers!


> The owners of such internet platforms should reflect on when their platform could be abused by the mass and they should interfere before it becomes a monster which cannot be tamed.

I actually agree, but I'm not sure that I would put that assertion to the force of law. There is then the conundrum of what happens in the face of selective enforcement (and there will be selective enforcement). Can someone then sue Reddit for not suppressing an investigation that doxxed them quickly enough, or at all, given that Reddit has suppressed other investigations?

If you do manage to always comply with a law saying this much it will only be by suppressing otherwise valid speech, which is also problematic. Also, can someone sue Reddit for being suppressed unnecessarily, or is it merely an inconvenience?


> That pretty clearly qualifies as a personal attack

Okay, let's take the obscurity up one level: I used to moderate a very small internet forum, where there was one user who had certain mannerisms in his writing. I also had reason to suspect this user was mildly suicidal.

A troll began to mimic these mannerisms as a way to make fun of that user, and talk about how horrible their life was. The troll wasn't overt about it, but if you knew the history between these two, you could tell what was going on. The troll of course feigned innocence, but as he'd been given warnings before, I made the call to ban him.

I firmly believe I was acting within the forum's stated policies—but I am not 100% confident a judge would agree. If these types of actions could lead to lawsuits, I don't know what I would have done.

---

> There is a significant difference between a forum and a business critical software distribution platform.

Great, but where's the line? Is YouTube critical, for example?


> A motivated harasser or mob can basically force you completely offline. You cannot answer that with "grow thicker skin".

If they have observed proper precautions, the psychological issues are the only factor that remains. Those are solved by growing a thicker skin.

> And what about if your business is online? Most states require that your public business filing include your real name.

If so, they should vote with their feet and observe proper precautions, such as incorporating in a jurisdiction with strong privacy laws, or not incorporating at all (Bitcoin). Examples of the former include New Mexico, Wyoming, Nevada, Seychelles, British Virgin Islands, and more.

> Your glib solutions betray your lack of experience with the problem.

No, your glib solutions betray your lack of experience with the problem. Censorship is never an answer as it is, unto itself, a moral failure.


> What's more disturbing is that you seem to think there's content which should be banned from being published.

If you'd read TFA, you'd know that this is a case where the trolls impersonated a woman, setting up a Facebook profile with her name and photos on it, so that people who knew her IRL thought it was her profile. That profile painted her as a paedophile and drug dealer. They did this just for fun.

I'm pretty sure that identity theft and libel (or the equivalent definitions in other legal systems) are illegal just about everywhere; and can and should result in this content being "banned from being published", and the anonymity of the people doing it being stripped away.

This law could be over-broad, I don't know, but if you find the take-down the most disturbing part of this case, check your sense of proportion.


> What we do online is a reflection of what we do offline. To think that there aren't offline hunts and people who go around shouting those things without the internet is absurd. To suggest this is why we need internet censorship easily lends itself to the argument that people's everyday offline lives need to be monitored and controlled.

When you stand on a street corner shouting ‘This guy did it’ with a picture of said guy, how many people do you reach as compared to a similar post on Reddit?

Conversely, what is the status of other mass-media (newspapers, radio, TV etc.) regarding the information they publish? Will a radio ever broadcast ‘This guy did it’ without some fact-checking? At least here (Germany), that is not/should not be the case.

The very easy way to solve this problem would obviously be to improve defamation/libel laws to such an extent that wrongfully accusing someone of X gets you the maximum penalty for X as a minimum. At least people posting Wanted posters on Facebook should be easy to catch, then.


> By your standards, swatting is a perfectly acceptable outcome as long as the victim had the opportunity to mute their harrassers.

Folks don't get swatted via tweet, meme, or FB post. They have to pick up the phone and commit a real crime.

> Likewise, poisoning someone's reputation and making them unemployable is fine too. It's just words, right?

We have had laws against this since long before the internet. Maybe they need strengthening.

Edit:

We are clearly talking about two different cases. We have things that are already illegal, and should continue to be illegal. Your examples are both something covered by existing laws.

On the other hand, we have folks who want to ban things that are legal, simply because they are offended. I can name a few, but in this climate, it would not be wise.

Oh, look at that. Self moderation.


> threatening to reveal their personal information, such as their legal identity

What's the problem with that? Bad things on the internet happen more often than not because of the lack of responsibility.

Doxxing has become the primary sin of the Internet religion, but it would solve all kinds of problems. I am going to commit that sin and say that doxxing is the solution; you can downvote me, make my comment greyed out, and censor me while you argue against censorship.

Instead of deleting content, simply make sure that it's linked to someone who can pay for it if it turns out to be something that must be paid for.

The anonymity argument is only good when you are actively persecuted by a state actor. I don't agree that you deserve anonymity because the public will demonise you. If you hold strong beliefs that may be met harshly by the general public, you had better be ready for the pushback and think of ways to make them accepted. That's how it has always been done.

Therefore, when content is questionable, maybe the users should simply be KYC'ed and left alone until a legal takedown order is issued. If it's illegal (illegal porn, copyrighted content, terrorist activity, etc.), you go to prison for it. If it's BS, your reputation gets tarnished.


> Plus it's hardly right that you can threaten someone just because "lol it's the internet brah"

Unless we lock down the internet and give up anonymity then it's something we have to accept.


>I hope you can see there's a difference between someone saying "FYAD" and someone sending hundreds of death and rape threats per day from multiple different accounts.

I absolutely do - and I agree with basically everything you say. What I'm not sure about is where the law draws the line.

Twitter [social media] feels like a real problem for abuse, because instead of saying "anything goes, leave or stay at will" or moderating according to community rules, they claim no responsibility or interest and simply refer to the police.

I don't have answers, and I know defending my right to be a dick to someone for bad posting on some forum somewhere is somewhat pathetic. I just feel like "people could be fined or sent to prison for using deliberately harmful, threatening or offensive language" is dangerous. I'm pretty sure on the last one I've got some jail time coming - especially if the offence can be taken by someone to whom it isn't addressed...


> The whole premise of “doxing” being a problem is that people have a right to say things on the internet that would get them in trouble in real life.

Mmm. In some cases terribly dangerous things to their health, like being gay in a country that forbids it. Or criticising a government that forbids it. Or saying something that only becomes illegal or publicly reprehensible after the fact.


>Personally I feel this law is a great step forward to stop hate speech and other forms of severe harassment

... maybe if we somehow knew and trusted the results of whatever computer systems were being used to condemn the guilty, or had some classical, well-known, and agreed-upon definition of hate speech that was internet-spanning... but we have neither.

The reality of it is that data moves in a lot of imprecise ways, and you'd better really hope that the hammer never falls accidentally on the innocent; otherwise such policy causes more harm than good.

My opinion: this will become unintentionally weaponized, like DMCA reports, and worse yet, it'll be intentionally weaponized too. It's too easy to make data look like it's coming from elsewhere. What happens when your node is the one that's somehow coerced into breaking the law by some malware, bad actor, or kid with abusive remote administration privs?

We as a society should probably get a grasp on data and data flow before we throw folks into prison for things the data flows indicate; 'computers as witnesses' is too falsifiable and imprecise a concept at the current stage of things to be used in good faith.


>Are you suggesting that there are populated places that you can visit in real life where loudly shouting an arbitrary statement could not attract abuse?

I'm suggesting that if I were in a public place and had a person shouting obscenities at me everywhere I went... it's illegal harassment, and the person can be identified and prosecuted.

I've experienced both IRL and online harassment, and managed to stop IRL harassment through the legal system. I was only able to stop online harassment by hiding.


> they don't break any laws

> file lawsuits

And here a glaring issue with this approach is apparent. Even _if_ the harassers meet the criteria for law-breaking, anonymity and civil suits having the burden of proof be on the plaintiff make this extremely difficult. I can think of any number of defenses, starting with "Someone guessed my WiFi password / took over my forum account."

It is far easier to pressure companies to de-platform hate than to individually find and file suit against the harassers.


> A prominent cancer scientist, unhappy with the attention his research papers have received on PubPeer, is suing some of our anonymous commenters for defamation

On the other hand, should anonymous commenters have the balance of power: in other words, say whatever they want with impunity, even if it actually is defamatory?

(Not saying that is the case in this situation, but in general).

The problem is that defamation is a legal concept, which can only be tried legally. So for instance, whereas a site can rigorously enforce rules which say that all comments are directed at the research material, and not at persons, and have a factual basis in that material, those measures cannot take away the right of someone, who feels they have been defamed, to take the matter to court (where they will almost certainly lose, which is neither here nor there).

You can't just create a site and declare it above the law, so to speak.

The only way to protect the identities of the anonymous is for the site to take responsibility for the statements it publishes: to assert that the anonymous statements are subject to rigorous standards of review, and when published, they in fact reflect the views of PubPeer, and PubPeer alone, and not of any anonymous persons (who do not publish any statements, but act only as sources of information).

Then if someone feels they have been the target of defamation, the defendant shall be PubPeer.


> tying the behavior to the person behind the behavior and bringing proper consequences to bear is the only regulatory system that actually works

Nope. Trolling online is not a crime, so no regulations are needed. It's the same as if someone walks up to you in real life and calls you a jerk. You don't get to do anything about that. You just move on.


> better written social media sites just have [Block] button where I can choose to not see given person's ramblings.

Or I could sue you for your ramblings. Also, blocking only means I don't see the content; others seeing the content is still harmful in cases of slander or illegal porn, etc.

> If it is illegal it should be taken down, if it is not it should not. Sharing a sextape without consent of the partner is illegal AFAIK; the offender gets reported and the court/police orders the site to take it down.

> Moderation has NOTHING to do with it.

Moderation has everything to do with it. If you don't take it down after knowing it is illegal, you should become criminally liable, and, where the law allows, also civilly liable.

> This law was about moderators and site owners not being liable for stuff users put on them, not users themselves

Exactly, but when they knowingly refuse to take down content they know is illegal or civilly damaging, they should be held accountable.

> There might be subreddit filled with everything you hate but you ain't getting it in suggestions by algorithm (as suggestions is not main front of reddit, unlike twitter/facebook) and you don't have to go there

See above. Me being bothered has nothing to do with it. If you slander me or promote content that is in any way damaging to me, I will sue you and complain to the moderators. If the mods refuse to take action, then let the court decide whether they should be held as co-conspirators. There are many types of such damage, but an extreme example that is pushing for repeal of 230 is CP or revenge porn. Would you tell someone to block the users that post naked pictures of them or their child? If they ask a subreddit and the Reddit admins to take it down and they don't comply, then both Reddit execs and mods should be held criminally and civilly liable.

> Your suggestion is giving moderators so much work the site would be unprofitable. How on earth you'd "verify the provenance" of a meme subreddit? Or you want moderators to google image search every image posted ?

They should limit members to a manageable volume of daily flagged posts they can review. I never said meme provenance; that comment was specifically about porn. You are grasping at straws here. But if someone made a meme that is damaging to me, I should be able to sue the mods and site operators when they refuse to take it down even after I flagged and reported the harm being done to me, since their refusal to moderate is an explicit decision and the damage inflicted is well known to them (by my flag/report). I don't care if a bunch of subreddits die off; I care more about actual harm to people being reduced.

> And moderation is always convenient for site operator, especially if they don't pay for it.

Again with this shit. Have you never heard of stormwatch, kiwifarms, 4chan, and 8chan? How is moderation profitable for them, or for random porn sites? Even with Reddit, engagement is profitable, not moderation. Much of Reddit is porn; is it profitable for them to moderate that? Users don't care who else gets hurt so long as it isn't them or their "group".

> If only take downs were for actual legal cases we'd be in much better place. What social sites define "harmful content" and what law does is vastly different, like the recent disaster with vaccine communication.

I only care about what the law has already defined as harmful. In a way it would give them guidance, so they won't have to be arbiters of what is harmful. Is calling someone a racial epithet illegal? No, but you can get sued for defamation and "emotional distress" or whatever, depending on the state, so they can now use that as guidance instead. But they can still moderate on their own terms in addition to the law, if they choose to do so, just not in ignorance of it.

> Why you want to prosecute people clicking the arrow button on the website IN THE FIRST PLACE? That's some 1984 shit right here. Prosecution for unlawfully liking a picture, wtf is this shit ?

Alright, how about a stipulation that your like generated some material gain for the site or the original poster? Because materially supporting a terrorist, for example, is illegal, so if a terrorist posts white-supremacist violence or jihadist content, people who upvote it get prosecuted for materially supporting a terrorist for even a cent of ad profit.


> It is bizarre to me that one could post literal death threats online endlessly without fear of any consquences

Maybe you phrased it wrong, but the user cannot post death threats online without fear of consequences; it's just not the platform that the consequences fall upon.

A better analogy to a public location would be holding the city liable for not removing a death threat someone affixed to a city-owned monument.

