
I think you might have missed my point: you said there could be a checkbox for people to hide offensive content, and I asked what the difference is between manipulating someone's feed so their posts are only seen by people looking for them directly and having a content filter on by default. They would be functionally identical in how they limit the audience, which, if I understand correctly, is the main ill you are accusing Twitter et al. of.



Again, I disagree. Twitter came up with a way to make some posts more widely shown, and you're trying to tell me they don't have a way to make some posts less widely shown? As someone else said, if there are a lot of comments and few likes, don't put it in the trending feed. That's one solution for free, and I don't even work for Twitter. If it's two people having a conversation back and forth, the broader Twitter audience doesn't need to see it. It's not censored, it's not hidden, it's just not broadcast either.
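The "lots of comments, few likes" rule proposed above could be sketched as a simple ratio check. This is an illustrative heuristic with an invented threshold, not anything Twitter actually uses:

```python
def should_trend(likes: int, replies: int, ratio_threshold: float = 3.0) -> bool:
    """Hypothetical trending-feed gate: exclude a post when replies
    heavily outnumber likes ("the ratio"), a common signal of
    controversy rather than approval. The post stays visible to
    followers; it just isn't broadcast further."""
    if likes == 0:
        # No likes at all: only trend if there is no reply pile-on either.
        return replies == 0
    return replies / likes < ratio_threshold
```

A post with 100 likes and 50 replies would pass; one with 10 likes and 100 replies would quietly be kept out of the trending feed, without being hidden or censored.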

People have become millionaires, billionaires even, for the exact opposite of what you say. You become rich by making sure controversial content is spread as far and wide as possible, because hatred and fear sell as entertainment. People get addicted to it. You don't become rich by filtering out hateful content, you become rich by enabling it and spreading it because that's what people want (as long as they're not the target).


Your comment goes against what you originally said, then, which is that you 'don't want some wanker at Twitter or Facebook office' to decide that. You're conceding that you do want them to decide that SOME content is blocked by default, at which point it becomes a matter of degree. Some people would disagree with your stance and think it still violates free speech, hence Parler/Gab etc. Plenty of disturbing content is not illegal, which is where the gray area of moderation comes into play.

Even if you don't subscribe to people that post that, there's still the issue of network effects. They subscribe to someone that subscribes to someone that subscribes to someone that posts a beheading video that gains popularity, and so the algorithm prioritizes that traffic because it's within your graph. You can easily notice these effects just on Twitter if you pay attention closely to what it decides to show you.
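The "subscribes to someone who subscribes to someone" chain described above is just graph reachability. A minimal sketch of how a recommender might decide a post is "within your graph" (the hop limit and graph shape are illustrative assumptions, not Twitter's actual algorithm):

```python
from collections import deque

def within_hops(follows: dict, start: str, target: str, max_hops: int = 3) -> bool:
    """Breadth-first search over a follow graph: is `target` reachable
    from `start` within `max_hops` subscription links? A recommender
    that boosts anything inside this radius would surface a viral post
    three subscriptions away, even from accounts you never chose to see."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return True
        if dist == max_hops:
            continue  # don't expand past the hop limit
        for nxt in follows.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return False
```

With a chain you → a → b → c, the account `c` is inside a three-hop radius but outside a two-hop one, which is exactly how content you never subscribed to can end up prioritized in your feed.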


I like this idea, but things like "tools to filter who you see on Twitter" rely on self-moderation of the content you're exposed to. We all already have that capability in many respects.

E.g. I don't really use Twitter at all apart from the odd tweet that gets referred to me. I moderate what I see by actively avoiding most social media (aside from HN, of course), because I've decided this is the easiest way to avoid sub-standard content that doesn't add value for me. (A sweeping statement, but just for the sake of argument.)

So let’s say someone says something extremely insulting about a minority group - just on the right side of legal, but otherwise a disgusting remark when measured against social norms.

Do we say that the utility platforms shouldn’t touch this because it’s not illegal?

Because some of those subgroups of people and individuals with the moderation/curation responsibilities will proliferate that content rather than moderate it.

I’m not saying you’re wrong, and I’m not arguing for strong censorship - I don’t have a counter suggestion, I’m just thinking it through…


> "Hey Twitter, please block this crap" is users trying to block what they don't want to see.

No, it's users trying to censor what they don't want other people to be permitted to say or see.

Twitter users have a "block" button; they're free to use it, and thus limit what they themselves will see.


We're not talking about banning these posts, or hiding them, or censoring them. Just not showing them as widely as they do other posts. It doesn't even need to go as deep as "this is hateful", but rather "this has the potential to be hateful" or giving the author the ability to control how widely the message is being shared.

I see these people here trying to debate solutions like good engineers, but unless they work at Twitter, it's a waste. We can guess all day and come up with a million solutions but when it comes down to it, Twitter absolutely has the ability to control posts that spiral out of control. What they don't have is the desire to do so.


If we want to go "there", then can I demand that all nudity be censored? Your fixation on viewing things as either racist or not racist, as information or misinformation, is opinionated. I think it is wrong that people can tweet pornography on the platform without any censorship, yet I doubt the gay community would be okay with those posts being censored, and those are the posts I am subjected to. The point is that any form of censorship will hamper someone's freedom of speech. Why can't everyone just block people that post what they don't like, the way I do on Twitter? Otherwise, if we are going down the "block what I don't like" road, so will I!

So you are saying that banning should not be based on whether or not tweets are clearly hate speech, but rather on how many people see them? That doesn’t sound like a good plan.

That doesn't break my vision. It's not the whole Internet, it's just one bulletin board. People can simply not go there if they don't want to. If Twitter's moderation were chosen by each user, for themselves, we wouldn't be having this conversation (though no doubt we'd be having a different one). People filtering for themselves doesn't touch on what I'm getting at with this thought experiment.

> Those aren't even slightly equivalent positions, though. One is a request that material dealing with shocking or upsetting subjects be marked as such so that people who may be affected can avoid it, and the other is unsubstantiated racism that has been known as wrong for a century or more.

Technically speaking, they're categorically equivalent in that both are exercises of freedom of speech. From that perspective, whether you find one or the other more sanctionable has more to do with your personal opinions than with accuracy.

Since Twitter is a private organization, it can freely decide to curtail freedom of speech on its platform, because that right is not guaranteed (or rather, relevant) in the private sphere. But that puts it into an uncomfortable position, because it must then start being opinionated itself. Which opinions are the "right" ones? Which curtailments merely reduce spam and abuse, and which censor honest but unpopular opinions?

Now that aside, I agree with you that there is a difference between broadcasting unpopular opinions and targeting unpopular opinions at another person. This makes the situation more difficult for Twitter, but there's a credible argument that curtailing the expression of those opinions in the broadcast context is censorship, while curtailing them in the targeted context is spam and abuse reduction.

Twitter is fully allowed to engage in censorship, but it's probably strictly more beneficial from an inclusionary standpoint to invest in sophisticated tools for empowering users to reduce the spam and abuse targeting them (or which they come across) than it is to invest in sophisticated tools for curtailing "wrong opinions" across the entire platform. Most people can agree on the most extreme opinions that should be censored (though obviously not everyone). But it's hard to make companies the arbiter of which other opinions to get rid of, because from the obvious extreme ones there is a long, long tail of benign opinions that many people will reasonably (and unreasonably) take offense to.

For what it's worth, I think the situation is extremely complex and companies like Twitter are (somewhat unfairly) demonized for not doing enough, or doing too much to reduce the problem. But I don't think there's a substantially easier or more straightforward way of navigating these waters than Twitter has done, given the same capability and information.


What's to build? Twitter already built all the tools to ban, censor, and editorialize content. OP is simply suggesting that Twitter use those tools less.

Thank you for the detailed response.

Correct me if I am wrong, but I have a suspicion you don’t use Twitter, you study it. Thus you are viewing the fringes, but perhaps not aware of the average user’s experience.

AFAIK Twitter uses a demotion algorithm to reduce the reach of offensive content, a form of moderation that doesn't silence anyone. I don't think the previous iteration of Twitter with strong moderation made it a better social media site (nor a forum for free speech).

I can only speak to my own experience using Twitter near-daily, but I have almost never come across hateful content (whatever political bent it may take). I've actually found it a great resource for my interests, like AI, psychology, etc.

Thus statements like:

> The lack of moderation is one of the reason why Twitter is such a horrible place - but there are other causes, it basically encourages toxic behavior and passive aggression due to its rules and their inconsequent and inconsistent enforcement.

Or

> It's just the typical content you get on Twitter.

run very counter to my own experience on the platform.

Do you actually use Twitter? What kind of people do you follow? How does it differ from your own experience with other social media sites?


Your Twitter feed is self-curated from the people you want to see. That people intentionally seek out stuff to hate is a people problem, not a platform problem.

A problem I've noticed is that offensive content seems to float to the top. Outside of Twitter I rarely see the kind of hatred I'm about to refer to.

For example, the other day Trump RT'd a tweet by an account with a name like 'White Genocide'. I kid you not. Anybody that clicks this is going to find very unpleasant material about 'the end of the white race' including photos of blood-covered Swedish women, etc. Not very nice. Another day, people were tweeting a Business Insider article on GitHub which directed me towards a series of tweets by some employee ranting about how white people 'cannot be taught empathy' and therefore should not be allowed into positions of power.

These are both very extreme political positions which ultimately revolve around hatred and dehumanisation.

It's just not a good user experience for this kind of thing to constantly be spotlighted by either Twitter, the 'professionally offended', or the 'professionally offensive'. I'd love to be able to tick a box which just says something like "just don't show me things that are going to cause me to lose hope in the human race". Seriously.

This level of hatred doesn't seem to appear on Facebook or on the front page of papers. I don't see why it should be such a big part of my Twitter experience.

I believe in freedom of speech and I'm anti-censorship, but I'd like a setting that would spare me from having to consciously decide, each time, not to see this material.


In that case, it is impossible for Twitter to achieve neutrality (they can't control how viewers respond to a particular individual's odious tweets). If anything, removing these sorts of tweets would be a nice gift to bigots, covering their tracks for them.

Twitter may be full of nazis, but when I am looking at a feed of cute cat pictures I don't see any of them.

That they merely exist shouldn't matter; what matters is giving users the tools they need to look at the content they want to see instead of random crap injected by others. Choose your filter bubble.


Twitter has been very sloppy in maintaining a clear line between abuse and unpopular opinions. Twitter has value because it's a relatively open place where anyone can interact.

When Twitter starts having strong opinions about what content is acceptable beyond clear abuse, it becomes a very liberal echo chamber that drastically biases all the conversations that go on in it.

The issue, and this isn't a twitter specific issue, is that the line between preventing abuse and censoring content is not a clear or distinct line at all - some users want every art installation to have a trigger warning before it while other users hold the firm political opinion that the mixing of different races is unethical.

It's not clear that Twitter (or Facebook, or Reddit) has ever been good at navigating that line, or that they should be in that business. Reddit is somewhat successful in that most of the moderation happens at least at a subreddit level, so that if you disagree with a given community you can simply move to a different one.


Could they have implemented a voting-ring detector instead, one not based on what the content is?

I wonder if toxic content on Twitter could be handled by limiting content that only gets engaged with in an extremely polarizing way, with lots of brainless replies; or maybe content that is far more interesting to a person's followers than to the rest of Twitter should be voting-ring-censored.
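A content-agnostic polarization signal of the kind suggested here could be sketched as follows. The reaction categories and thresholds are invented placeholders; a real system would need far richer engagement features:

```python
def is_polarizing(pos_reactions: int, neg_reactions: int,
                  min_total: int = 100, balance_band: float = 0.15) -> bool:
    """Hypothetical polarization detector: flag content with enough
    total engagement that is split nearly evenly between opposed
    reactions. Flagged posts could be excluded from algorithmic
    amplification without being removed or hidden."""
    total = pos_reactions + neg_reactions
    if total < min_total:
        return False  # too little engagement to judge
    share = pos_reactions / total
    # Near 50/50 splits sit inside the "balance band" and get flagged.
    return abs(share - 0.5) <= balance_band
```

Note that this looks only at the shape of the engagement, never at the words in the post, which is the appeal of the voting-ring-detector framing.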


"What Twitter could do is just stop promoting and spreading content people don’t like."

Isn't this censorship?


I don't understand why Twitter doesn't build some general "whitelist/blacklist" feature that people could opt in or out of. Things like a "porn" filter, an "n-word" filter, a "feminist"/"misogynist" filter, etc. Some of these could be automatic (e.g. porn, spam, and virus links), some manual (e.g. "all people this particular user disagrees with"); they could be curated by Twitter itself or by any other entity/user; and some general ones would be default/opt-out (e.g. porn, hate speech, gore), while others would be opt-in (e.g. a "no politics" filter).

IMO that solves most of the problems, allowing unbiased freedom of speech/communication while still preserving basic decency for the majority of people. In addition, you'd have a capitalist market/competition of ideas: if the curator of a filter becomes untrustworthy (starts abusing their "power" by "censoring" too many voices), the filter could simply be forked and improved, and users would migrate to something better!

Edit: In addition, this would also solve all kinds of legal issues: you could simply make per-country filters that users connecting from a given country would automatically be subject to, but all such government-imposed censorship would be implemented in a very transparent manner!
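The proposed opt-in/opt-out filter lists could be modeled roughly like this. Everything here is an illustrative sketch: the class names are invented, and substring matching stands in for whatever real classifier a filter would actually use:

```python
from dataclasses import dataclass, field

@dataclass
class FilterList:
    """A shareable, forkable filter: if its curator starts censoring
    too many voices, anyone can copy the terms and curate a fork."""
    name: str
    blocked_terms: set = field(default_factory=set)
    default_on: bool = False  # opt-out filters (porn, gore) apply by default

@dataclass
class UserSettings:
    """Each user subscribes to whichever filters they want applied."""
    subscribed: list = field(default_factory=list)

    def visible(self, text: str) -> bool:
        """A post is shown unless any subscribed filter matches it."""
        lowered = text.lower()
        return not any(term in lowered
                       for f in self.subscribed
                       for term in f.blocked_terms)
```

For example, a user subscribed to a "no politics" list containing the term "election" would keep seeing cat pictures while election posts are hidden, and only from that user's own feed.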

