I understand that that is the current state of the law. I’m not disputing that there is currently no legal framework to force Reddit to allow content.
I’m arguing the “ought” rather than the “is” here. I’m arguing against the philosophical position that simply because Reddit is a private website, they are entitled to control the content as they want. In general, we regulate private enterprise when it has negative externalities on society, and I think there is a discussion to be had about what those externalities are here. Considering the historically unique position big tech companies are in, I don’t think “private companies can do whatever they want” is a sharp enough tool to use to dissect this issue.
This is 100% true. However, they are primarily concerned about the content for the content's sake (as you can see from their ban policy). Reddit is not concerned about the content, only the legal liability associated with the content (otherwise a whole bunch of other subreddits would be on the chopping block).
But, to be clear, at the moment Reddit is a moderated cesspool. It's embracing neither free speech nor quality control... it's managing to irk both sides while remaining unsure of what kind of platform it really wants to be when it grows up.
Reddit has said over and over again that it won't police content on its site unless the content is illegal or falls into one of a few categories, doxxing among them.
And the moderation that makes Reddit's content valuable is done by its users on a per-subreddit basis. Only material that could break laws, like extremist content and hate speech, is handled by Reddit itself.
It's really odd to call it "their" data, and this is not exclusive to Reddit.
The problem is with mixing pure + vile. Nobody cares that Huffpo or TMZ hosts sideboob galleries or privacy invading gossip just for clicks. People do care if "show off your baby pictures" is hosted next to "we sell meth at discount rates anonymously."
(Reminder: reddit is just centrally controlled usenet with some wonky turducken corporate structure. Nobody ever claimed usenet had to moderate to any standard, but your provider could filter for you if you wanted filtering. The problem is that you can't apply arbitrary filters to reddit (outside of nerd-fueled browser extensions), because reddit isn't just a message provider; it is also the closed, proprietary message-reading interface.)
Reddit, once a haven from large corporate interests, has continually succumbed to the mainstream narrative of what’s appropriate.
Sure, there are communities that are horrid, but then there are those that are on the line. And inch by inch, that line moves from moral goodness to corporate interest.
Reddit is no different than Twitter/YouTube/Facebook, only they haven’t covered the same amount of ground.
Given another few years, if that, we’ll see the same censorship practices on Reddit, with the same magnitude, as YouTube, Facebook, and Twitter.
This is essentially account entrapment. Show users content the admins know is posted in bad faith, then punish them for engaging with it. Or, once a post is deemed bad faith, retroactively punish them for not having held the correct opinion.
Reddit continues to insist that its sitewide administrative policies are based on behaviour rather than content, though it appears that this is a somewhat narrow distinction, and that behaviours which draw attention … tend to be associated with questionable content.
I’m not criticising the action. I support it. (The reasons are complex and difficult to articulate, though what I had to say ... on Reddit ... about limitations on speech some six years ago seems strongly appropriate. https://old.reddit.com/r/dredmorbius/comments/2g8e8c/shoutin...)
It's the rationalisation which seems thin.
The more so as what I’d based that argument on at the time --- falsely claiming no harm where a harm clearly existed --- is precisely at the centre of current discussions of the topic. This also seems to be a major, though under-discussed, mode of deceptive speech, and more pointedly a mode in which the downplaying of risk accrues benefits and gains to the parties promoting that message.
That said, Reddit’s lack of principled leadership and very-late-to-the-party redress continues to erode trust in the platform among those who live in a reality-based world and support strong epistemic systems. Which is one of the key challenges the firm faces: neither of the two principal sides in this matter is or will be happy with how it acquits itself.
I’ll note as well that the principles of “free speech” are not synonymous with the US First Amendment (which concerns only government limitations), that speech on a privately-operated platform is not the same as government censorship but is also not dissimilar to it in many regards, and that in any case, free speech itself is not an absolute principle but one existing in balance with other considerations. I’ve been thinking of a set of related, though often conflicting, principles as Autonomous Communication (or variously: "informational autonomy" or "communications autonomy" --- naming things is hard --- discussed in “Which has primacy?”). The rights to privacy, to free association (both positive and negative), against self-incrimination, of obligated disclosure, and to accurate information all collide, though there are some common principles which might help in adjudicating amongst them. I’m not aware of others offering any similar construction.
It's a difficult spot because social networks try to maintain a fiction that they're a public forum where free speech must be respected. As far as I know there is no legal precedent that this is so, and they're free to ban users/cull posts from their web sites as they see fit. Additionally, I'd say they have a responsibility to prevent their site from being used for certain purposes, though the boundaries of what's acceptable are tough to define.
Reddit is not like an ISP, which could (until recently) claim that it's not responsible for traffic passing through its network, as reddit has no reasonable claim to be a "dumb pipe".
Check out their official channels. Anything on reddit is going to be toxic unless it is highly curated by the subreddit mods. That's just the reality of reddit.
I am ok with Reddit choosing what content is publicly available on its platform. It should go without saying that Reddit should also be responsible for the content it chooses to serve.
If anything it's unfair competition against traditional media which doesn't enjoy "platform" protections.
This would be best. It's sort of telling that Steve Huffman, CEO of reddit, had this to say about reddit's content policy enacted about a year ago:
>When you draw really clear lines in the sand at a site like Reddit, “there will always be some a—–,” Huffman said, “looking for loopholes.” He eventually came to the conclusion that virtually every other major social site has come to: that content guidelines for online communities work best when they’re “specifically vague,” giving the contours of clarity on what sort of content is forbidden, while affording those in charge of enforcing the rules some leeway with when, precisely, the rules apply.[1]
That's the CEO of reddit saying that selective enforcement should be the default, and that users should not lean on the actual, written rules to determine what is acceptable and unacceptable.
Well, reddit has been policing content for a very long time. So far they haven't been held responsible for things that "get on the site but shouldn't".
What happens if reddit crosses that line is completely up to community response. People could go on as usual and nothing happens or there could be an even more visceral backlash. History says business as usual but no one really knows what would happen.