Maybe they are just trying to reduce their liability. Once they start actively policing content that isn't flagged, they will be held responsible whenever something that shouldn't be there makes it onto the site. If their policy is to only review content that was reported, it's much easier on their end, and they don't have to start making as many decisions about what is OK and what isn't, the way sites like Twitter have to.
Their TOS and policies are vague enough to allow selective enforcement, which is a complaint that I've seen many times. Fewer people would take issue with Twitter (and all social media) if they were more transparent.
The situation for Twitter is messy no matter what they do. A report link also opens them up to a tiny liability of 'I reported it and you did nothing', so they'll likely default to a ban-first, ask-questions-later approach. Again, just like YouTube has. Messy, but it will be interesting to follow what they do.
So if they contact the WSJ and try to get them to spike a story that's fine, but if they tell Twitter "we think you should remove these tweets for violating your policies" that's not? I would love to see what the case is because I can't understand how that would make sense.
Why? Their purpose is to stop platforms from getting sued for copyright violations. They are remarkably successful at that.
Their goal isn’t to allow all content that wouldn’t get them sued but to catch all content that would. False positives get you angry messages on Twitter; false negatives get you multimillion-dollar lawsuits.
I'm going to guess it often isn't even their own content but user content they are protecting. So it sounds like a big subsidy/protection racket: Twitter or whoever gets to train on their users' public content but won't let others do the same.
Why would they want to police it? Surely Twitter is perfectly fine with you posting screenshots of things of interest (obviously within obscenity and legal restrictions).
But aren’t they just flagging Tweets as potential violations of TOS? That sounds as tame as it could possibly be. The same option is open to me as an ordinary user.
So rather than clarify the new Twitter policies, it would be more interesting for them to explain why they are making these changes.
Everyone is assuming it's because they want to lock down their platform... and lock it down along what seem like pretty movable and arbitrary lines. Why else would they do it? I'm a casual observer, but I have been following pretty carefully, and it's completely unclear to me.
So, if it's not, they should just say so. If it is, the Twitter developer ecosystem has plenty of reason to be worried about their homework.
I actually think it's a great compromise, but I do wish users could see which Trust and Safety partner determined each violation on a tweet-by-tweet basis.
I.e., Twitter in partnership with [GLAAD, ADL, etc.] determined this tweet contained [violation]. Then we could quantify, drill down, and get some great insights.
But if that were true then they would be personally liable in a court of law for tweets that break the law. Seems like they want to be treated as both an "editor" with the right to change user content and "just a distribution platform". They can't have it both ways.