
Maybe they are just trying to reduce their liability. Once they start actively policing content that isn't flagged, they will be held responsible whenever something gets on the site that shouldn't. If their policy is to only review content that was reported, it's much easier from their end, and they don't have to start making as many decisions about what is OK and what isn't, as sites like Twitter have to.



Their TOS and policies are vague enough to allow selective enforcement, which is a complaint that I've seen many times. Fewer people would take issue with Twitter (and all social media) if they were more transparent.

The situation for Twitter is messy no matter what they do. A report link also opens them up to a tiny liability of "I reported it and you did nothing," so they'll likely default to banning first and asking questions later, just like YouTube has. Messy, but interesting to follow what they do.

So if they contact the WSJ and try to get them to spike a story that's fine, but if they tell Twitter "we think you should remove these tweets for violating your policies" that's not? I would love to see what the case is because I can't understand how that would make sense.

Why? Their purpose is to stop platforms from getting sued for copyright violations. They are remarkably successful to that end.

Their goal isn’t to allow all content that wouldn’t get them sued but to catch all content that would. False positives get you angry messages on Twitter but false negatives get you multimillion dollar lawsuits.


I'm gonna guess it often isn't even their content but is user content they are protecting. So it sounds like a big subsidy/protection racket for Twitter or whatever to train on their users' public content but not let others do the same.

But twitter supports this feature. Why is it their responsibility to enforce this?

Why would they want to police it? Surely Twitter is perfectly fine with you posting screenshots of things of interest (obviously within obscenity and legal restrictions).

Not flagging because it is more interesting than usual for a link about Twitter:

It's the first time the law has been used, and it also _strongly_ hints that they first tried to get access to the information without legal proceedings.

The legalese is strong, but I think most of it is understandable (which is surprising).

But: Twitter is now like the 12th social network, who cares?


But aren’t they just flagging Tweets as potential violations of TOS? That sounds as tame as it could possibly be. The same option is open to me as an ordinary user.

Twitter has always held policy violations close to their chest. I have seen numerous complaints from developers about lack of clarification.

yeah, that’s what i thought i read, thanks.

no outside person asked twitter to “handle” anything. they asked them to review it.


Because political advocacy groups consult Twitter on what constitutes violations: https://about.twitter.com/en_us/safety/safety-partners.html

It seems more like these features of Twitter are working as intended. I seriously doubt it's because it's too hard for them to block these actions.

So rather than clarify the new Twitter Policies, it would be more interesting to be told why they are making these changes.

Everyone is assuming it's because they want to lock down their platform, and lock it down along what seem like pretty movable and arbitrary lines. Why else would they do it? I'm a casual observer, but I have been following pretty carefully, and it's completely unclear to me.

So -- If it's not, they should just say. If it is, the Twitter developer ecosystem has plenty of reason to be worried about their homework.


I actually think it's a great compromise, but I do wish users could see which Trust and Safety partner determined each violation on a tweet by tweet basis.

I.e. Twitter in partnership with [GLAAD; ADL; etc] determined this tweet contained [violation]. Then we could quantify, drill down and get some great insights.


Should not be a big surprise, Twitter has an absolutely questionable policy when it comes to content filtering.

Yeah but that doesn’t make Twitter’s new policy any less than what it says.

But if that were true then they would be personally liable in a court of law for tweets that break the law. Seems like they want to be treated as both an "editor" with the right to change user content and "just a distribution platform". They can't have it both ways.

Presumably Twitter wants to carry some authority no matter what we want. What they did is in service of their own goals.
