Twitter locks WikiLeaks account days before Assange's extradition hearing (yro.slashdot.org)
512 points by akvadrako | 2020-02-17 | 368 comments




This is a growing problem. I'm not sure what the solution is, but something needs to be done to stem the absolute power that social media companies have to arbitrarily censor voices.

Fediverse?

I don't think the evidence is that that's the biggest issue in general, and certainly not in this case.

blockchain?

Why is this being downvoted? Services built on top of the blockchain are censorship-proof. It's a viable option.

What services exist? I know of Twetch (Bitcoin SV) and something based on Bitcoin Cash - the fact I can't remember its name is telling...

It's a low-grade wisp of noise, not a real comment.

The solution to this seems pretty straightforward: force social media companies to choose to identify as a "platform" or a "publisher", rather than a mix of the two that gets to claim the most convenient aspects of both.

If they choose "platform", then they can take no responsibility for content posted, but must also allow all content, and only remove content when required of them by the legal system (when the content is illegal and has been reported as such).

If they choose "publisher" then they are free to censor, "deplatform", delete, or restrict posting of anything they wish, but if someone posts something illegal, they take their share of legal responsibility for publishing it.


This would indeed be a good thing. I wonder if there's a push to do that already somewhere?

I believe this proposal would more or less be the DEFAULT were it not for Section 230 of the Communications Decency Act of 1996.

Section 230 has had both good results (bloggers aren't going to get in trouble because someone posted something illegal in their comments) and bad results (twitter/youtube/paypal were all allowed to nuke Alex Jones from orbit simultaneously, removing his ability to distribute content, communicate with followers, and accept payments).

I don't like or agree with Alex Jones (he's a wacko, but people have made some funny videos out of his freakouts), but people have to remember: if it can happen to one person, the same could happen to anyone. People praised this "deplatforming" because they didn't like the target. But this is essentially praising these sites for crippling someone's career without any oversight.


Limiting deplatforming is taking away Twitter/YouTube's etc 1st amendment rights.

If the choice of deplatforming someone is a 1st amendment speech act (which I am ok with), then the choice to not do so needs to be a prosecutable offense (at which point you are just arguing that everyone should be a "publisher"). (Then, if websites can't manage to exist in this legally-consistent regime, maybe we finally get research into distributed systems in order to have true "platforms".)

Did you miss a "not" there or are you saying deplatforming should be a requirement?

Would you say the same exact thing about the phone companies?

Phone companies are platforms in the same exact way. Do you believe their 1st amendment rights are being taken away because they can't "deplatform" phone calls from people they don't like, or from their competitors?


> if it can happen to one person, the same could happen to anyone.

A guy stands on a hill holding high an umbrella in the middle of a thunderstorm and is then struck by lightning. "If it could happen to one person it could happen to anyone." Technically true, but you have a lot of control over the odds. Being a racist harassing villain is the common thread shared amongst almost everyone who has been deplatformed so far. Stay out of that territory, and stay off hills during storms. You'll be fine.


> twitter/youtube/paypal were all allowed to nuke Alex Jones from orbit simultaneously, removing his ability to distribute content, communicate with followers, and accept payments

That is not a bad result.


It's a convenient result in the short term for those of us on the left who think that Alex Jones' views are utterly repugnant, but ultimately these sorts of deplatforming actions are counterproductive. Alex Jones' fans will no longer be exposed to rational arguments from non-fans on the same platform. They'll instead become increasingly trapped in their own filter bubbles, and now have even more of a martyr complex. The cure for bad speech is good speech in an open forum.

> Alex Jones' fans will no longer be exposed to rational arguments from non-fans on the same platform. They'll instead become increasingly trapped in their own filter bubbles, and now have even more of a martyr complex.

This is a common argument, and yet every study of conspiracy theories shows this isn't the case. Conspiracy theories thrive on reach, and in the absence of reach they tend to die out.


What is to stop a company from choosing to be a platform, and then using dark patterns to censor certain views? E.g., Google placing a competitor's sites on page 2, or Reddit hiding a certain community from its popular page.

Laws.

In this case, if it were regulated and there was evidence they were doing that, a lawsuit could be brought against them.

I think the big platforms would love to be treated as platforms only. But the problem is that the platform owner is the best positioned to police the content. If they have to rely on the legal system to take things down, there is going to be a lot more nefarious activity that goes untouched. And I don't think the public or our politicians have the stomach for that.

> If they have to rely on the legal system to take things down, there is going to be a lot more nefarious activity that goes untouched.

The DMCA does a reasonably good job of handling that exact problem for copyright infringing content, while still providing due process.

Edit: I have no idea why this is being downvoted; what I've said is completely true. If you're mad about YouTube's terrible policies, they're not related to the DMCA. YouTube has its own completely separate (and highly oppressive) system for handling infringement claims (in addition to its DMCA obligations).


This DMCA?

https://twitter.com/JRhodesPianist/status/103692924465446092... https://www.eff.org/takedowns

The DMCA is fundamentally flawed and is a terrible model to base any future system on.


The DMCA is fine. It's everything service providers do in addition to the DMCA that causes problems. The DMCA process is very simple:

1. You receive a takedown notice

2. If it’s not valid, you submit a declaration that it’s not

3. Your content is restored and the claimant now has to take you to court if they still think it’s infringing

4. If you lied on your counterclaim, you're now also guilty of perjury

The only thing it's really missing is an imminent threat of perjury for frivolous claims (there are also some obvious areas where the process could be made more efficient). But as far as a simple way of managing offending content while still providing due process, the general format is quite good. It's other, unrelated policies that tend to get people the most riled up.

https://www.publicknowledge.org/blog/universal-music-group-a...
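For the mechanically minded, the notice/counter-notice flow above can be sketched as a tiny state machine. This is a toy model with invented names, not any real service's API, and it glosses over the statutory waiting periods:

```python
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()
    TAKEN_DOWN = auto()   # after a takedown notice (step 1)
    RESTORED = auto()     # after a valid counter-notice (steps 2-3)

class HostedContent:
    """Toy model of the DMCA notice/counter-notice flow."""

    def __init__(self):
        self.status = Status.LIVE

    def receive_takedown_notice(self):
        # Step 1: the host removes the content on receipt of a notice
        # to keep its safe harbor.
        self.status = Status.TAKEN_DOWN

    def receive_counter_notice(self):
        # Steps 2-3: the uploader declares the notice was mistaken;
        # the host restores the content unless the claimant sues.
        if self.status is Status.TAKEN_DOWN:
            self.status = Status.RESTORED

c = HostedContent()
c.receive_takedown_notice()
assert c.status is Status.TAKEN_DOWN
c.receive_counter_notice()
assert c.status is Status.RESTORED
```

Step 4 (perjury) lives entirely in the courts, which is exactly the part the thread below argues is toothless in practice.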


> 4. If you lied on your counterclaim, you're now also guilty of perjury

In theory, yes. In reality, no. "I was not aware to the best of my knowledge..." because the DMCA claimant never vetted things, they just employed a bot to spam claims all over the internet.

It wasn't a "knowing" lie. So it's effectively without consequence or fall back.

Hint: how many people have been charged with, let alone convicted of, perjury in the history of the DMCA?

Answer: None. Even in the small, small, small minority of counterclaims that have ended up in court, with the most egregious lack of standing, the worst that has happened (and even this only a single-digit number of times) is an awarding of costs.

Those are pretty good odds if you're a content creator (or copyright troll).


Copying is a fictional crime.

Because they don't give tools to law enforcement agencies to enforce their local laws. As a platform, it would be within their power and their right to do so. I think that is preferable to them doing the enforcing themselves.

> give tools to law enforcement agencies to enforce their local laws

Is that really a world you want to live in?


We already live in that world. It's just that we have another master on top of that.

Well the whole discussion becomes moot from that point of view.

Not quite. Facebook can kick you off their platform, but they can't bring you up on charges and throw you in prison.

That is a solution valid only for the US, and only under the current legislation, since it was introduced in 1996. It can be changed at any time.

For example, other countries may use laws (EU Article 13, for instance) to bend social media [1], and Facebook has agreed to host Marlène Schiappa's (our Minister of Equality) agents in-house. Law is the embodiment of the current political power balance, not a fixed reference to stake culprits upon. And for the moment, social media are the power in the balance, so it's hard to use the law against them.

[1] https://juliareda.eu/2019/12/french_uploadfilter_law/


So you're saying that social media sites have more power than the elected government of a country, and you're arguing that's a GOOD thing? Or am I misinterpreting that?

That seems like a good idea, but it simplifies the issue way too much.

> but also allow all content,

Almost no platform survives without a bit of moderation. If you don't moderate, you'll get all kinds of content, including spam, trolls, etc.

Add to that the fact that you'll get people who will push to boycott such platforms, and there won't be many viable ways left to make this kind of platform exist.

> but if someone posts something illegal, they take their share of legal responsibility for publishing it.

That's also kind of impossible. The law evolved to account for the impossibility of looking at every piece of content, and this is why the DMCA exists. Look at YouTube, which tries to filter its content much further than the law currently requires: it has HUGE teams of moderators, multiple tens of thousands, plus some of the best neural networks working on this, and yet it fails so often.

The world isn't binary, we need a bit of both.

It could be an interesting experiment, though, to legally allow the kind of platform you suggest: some way to protect website owners from any legal retaliation. It would most probably look like 4chan, but still interesting.


> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get all kinds of content, including spam, trolls, etc.

There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let users have the option of blocking who they want to block and following who they want to follow.

> Add to that the fact that you'll get people who will push to boycott such platforms, and there won't be many viable ways left to make this kind of platform exist.

If that was true, twitter, reddit, facebook, google wouldn't exist in the first place.

> That's also kind of impossible.

No, it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it were impossible, telecoms and publishers wouldn't exist.

> It could be an interesting experiment though to allow legally the kind of platform you suggest.

We already had this kind of platform.

> It would most probably look like 4chan, but still interesting.

No, it would look like 2009-2013 twitter, reddit, facebook, etc.


Twitter, Reddit, and Facebook all have moderation.

> There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let users have the option of blocking who they want to block and following who they want to follow.

In my experience, most people are not interested in sifting through mountains of garbage just to pick out a few morsels of a decent conversation. If you let trolls and bad-faith actors persist on your site, soon those people will be the only folks who are left.


> In my experience, most people are not interested in sifting through mountains of garbage just to pick out a few morsels of a decent conversation.

If that was true, HN would be infinitely more popular than reddit.

> If you let trolls and bad-faith actors persist on your site, soon those people will be the only folks who are left.

No. If you let users block trolls and bad-faith actors, they go away.

Once again, if you were right, twitter, reddit, facebook, etc wouldn't have grown to what they are today.


> No. If you let users block trolls and bad-faith actors, they go away.

This is so profoundly untrue that Twitter had to stop creating "egg" avatars for users who did not have them because the number of sock-puppet accounts made them block-on-sight.


> If that was true, HN would be infinitely more popular than reddit.

Not sure that was the best example. /r/programming is kind of notorious for being HN on a few-hour tape delay with a substantially diminished quality of conversation and fewer comments in general. But it's kind of a moot point because...

> twitter, reddit, facebook

All of these social networks are moderated to one degree or another. In fact, this entire post was spawned because of a Twitter moderation decision, and it is nowhere near the first time that this even happened.

More importantly, none of these social networks gained popularity because of lack of moderation. Twitter became popular because you could potentially win the lottery and talk to a famous person. Reddit became popular because Digg refugees needed somewhere to go and it had pornography on top of that. Facebook became popular because you could keep up with your buddies from college and everybody had real names and faces attached to them.


> If that was true, HN would be infinitely more popular than reddit.

Reddit communities live and die by the strength of their moderation. Sure, Reddit as a whole is mountains of garbage. But the beauty (if that's the word) of the subreddit system is that to folks who want to talk about communism, hating women and minorities is garbage, and to folks who want to hate women and minorities, communism is garbage, and they both get the experience they want.

Reddit's popularity is due to the fact that a) people have multiple interests and so they want to hop communities with low activation energy (same high-level reason that GitHub got popular over individual git hosting sites: you already have an account) and b) there is some correlation between being a "bad-faith actor" across communities, regardless of their specific moderation worldview (e.g., neither /r/GamersRiseUp nor /r/FULLCOMMUNISM is interested in V1agr4), and so "you have some karma at all, regardless of source" is a useful filter.

> Once again, if you were right, twitter, reddit, facebook, etc wouldn't have grown to what they are today.

All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.


> Reddit communities live and die by the strength of their moderation.

That's why I'm not against moderation; I'm against censorship. I'm all for limiting a "communism" subreddit to the topic of communism (moderation). However, I'm against the communism subreddit censoring people saying nasty things about Stalin or what have you (censorship).

Like how politics, atheism and other popular subreddits used to be open platforms for people to express how they truly feel. Until the shift happened and they turned into censored hellholes.

> All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.

No. All of these systems put work into censoring people they disagree with. If "spewing vitriol" truly were the reason, then politics, worldnews, twoxchromosome, atheism and every major sub would be banned.

As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.

The 2009-2013 social media was great because everyone got to spew their vitriol so it evened things out. Now the vitriol is so concentrated that you have shitholes like politics and the_donald. Funnily enough, one is quarantined and the other isn't.

Moderation is okay. Censorship isn't.


I don't agree with your core conceit of delineating between moderation and censorship, but this threw me:

> As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.

Voting systems as implemented by many popular sites are moderation/censorship via mob rule, and I'm surprised that you advocate for it.

I actually prefer having actual moderators to having a post voted down because five random people disagreed with my opinion and wanted to hide it in an attempt to control the narrative of the comment thread.

That's a problem even this site doesn't manage to avoid. Heck, look at your posts; in this comment thread, people are downvoting you in an attempt to hide your opinion, and I don't even agree with you.


> There is moderation and there is censorship

All moderation is censorship, though not all censorship is moderation.


Pretty sure you have that backwards

No, moderation is censorship of user-submitted content by forum operators (whether owners or some other kind of host) or their agents. Censorship not directed at user-submitted content or carried out by other entities (e.g., the state) is not moderation.

> There is moderation and there is censorship. Nobody is against filtering spam.

This position has all the integrity, defensibility, and internal logic of "I can't define pornography, but I know it when I see it". Moderation and censorship are the same concept. It's just that "moderation" is metaphysically good, and "censorship" is metaphysically bad.


> No, it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it were impossible, telecoms and publishers wouldn't exist.

The problem is neither of them holds the middle ground.

Telecoms allow everything. Some of it is bad. Not having the bad stuff filtered is not great, but that's okay specifically because it can be filtered by somebody else. You don't need Comcast to do spam filtering on your email because Gmail can do it.

Publishers allow almost nothing. Most of what they publish is first-party. The editor of The New York Times can have their article published in The New York Times, but you generally can't.

Neither of those entity types primarily host user-generated content. Which of them do you propose is the appropriate model for moderating YouTube or Reddit?


> Which of them do you propose is the appropriate model for moderating YouTube or Reddit?

Neither? People are proposing we develop a new model that reflects the new situation we face. Sounds reasonable to me…


But then what's the new model? There are three options.

You can have a platform which is completely unmoderated, but then it's overrun with spam.

You can demand a level of accuracy which is impossible at the scale of a many-to-many communications platform (which de facto prohibits them).

Or you let people try to moderate them, have them filter out 95% of the garbage, and don't punish them for the 5% they inevitably miss.


> Or you let people try to moderate them, have them filter out 95% of the garbage, and don't punish them for the 5% they inevitably miss.

Yes? That sounds great? That's how it currently works today?


That's the one you take away if you require a choice between having any moderation at all and having a safe harbor, because without the safe harbor the 5% they get wrong subjects them to liability.

It would definitely look like 4chan. 4chan is a straightforward example of unrestricted speech. And it's not a bad place, just different, but most people wouldn't enjoy being there.

Even places like this maintain a certain form of discourse by threats of bans.


Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content. We never had the kind of platform you're talking about. The closest was 4Chan, and even 4Chan heavily moderated individual boards. Even platforms like Gab still have moderation today.

Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.

> Nobody is against filtering spam.

Repeal Section 230 and I give you 1 year, tops, before advertisers start making the case that filtering spam is censorship. After all, who decides what is and isn't spam? Advertisers wouldn't waste their time posting spam if people weren't clicking on it, so clearly the content is relevant to some people. Who are you to say that those advertisers shouldn't be able to reach their audience?


> Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content.

But not censoring. You could pretty much say and do anything on those platforms except for illegal content.

> Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.

Which is different from censoring.

> Even platforms like Gab still have moderation today.

Gab always had moderation.

Either people haven't used 2009-2013 twitter, reddit, facebook, etc or people are pushing some heavy revisionist history here.

Reddit, especially, branded itself the "free speech platform" in that time period.

I'm okay with moderation, I'm against censorship. For example, I'm all for a sports subreddit/community limiting the content to sports. And I'm for the users saying anything they want about the sports topic, even if it offends people.

See the difference?

It's funny how every response to me was by people who intentionally confused moderation with censorship.

And locking the wikileaks account isn't moderation, it's censorship.


What do you think the difference is between moderation and censorship?

Because Reddit/Twitter/Facebook in 2009-2013 didn't just remove illegal content. They removed tons of legal content too. They removed spam. Facebook removed pornography. Reddit in particular allowed individual subreddits to moderate/censor basically on any criteria whatsoever. If you went into a random forum in 2009 about dogs and started spouting nonsense about how we should all eat dogs, you would get kicked off of that forum. They wouldn't patiently hear out your controversial point-of-view.

Go back and read some of the usenet threads from this time period, there are people getting banned just as a joke; the paradigm of 'benevolent dictators' running forums was already pretty widely accepted.

What definition of censorship do you have that doesn't include removing explicit content, self-promotion, and off-topic posts?


> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get all kinds of content, including spam, trolls, etc.

If you create your own Twitter account and post a bunch of spam there, no one will follow you; so it shouldn't matter. You can also limit someone's access to "discovery" mechanisms without deleting their content or preventing them from posting entirely so even their opted-in followers can't see their content.

The core problem is when social media companies create mechanisms for users to bother each other without knowing each other, and there "spam" is just the tip of the iceberg: people routinely abuse and harass other people, and "abuse and harassment" should be construed very, very broadly. If you had a child who died in a school shooting and someone likes to keep reminding you of it as they take glee in your pain, that is obviously abuse and harassment, and yet it isn't "illegal" and clearly isn't "spam", so it is almost always considered "totally fair game". These websites don't remove that content or punish people for it, and yet they remove photos of people breastfeeding, as if that were some crime against humanity.

What forcing websites to be platforms would do is fix social media by causing the people who make these websites to reconsider features that should probably not have existed in the first place.


> If you don't moderate, you'll get all kinds of content, including spam, trolls, etc.

There is a reasonable solution to this. Give people the tools to self moderate.

Things like Reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer.


But how does that work as a regulation?

It's hard enough to get community moderators to sign up to do unpleasant work for no money, how do you expect to get anyone to do it after you impose liability for getting it wrong?

Actually, maybe there's something to this: Impose no liability on anybody for moderating as long as they're not moderating more than ten million users. If you are, the only way to avoid liability is to be a common carrier. Then you can actually have community moderation, but you can't have Zuckerberg deciding what a billion people don't get to see.


> how do you expect to get anyone to do it after you impose liability for getting it wrong?

I am suggesting that there would be no platform-wide moderation; instead there would be things like public block lists that users could voluntarily subscribe to.

E.g., on Twitter, anyone could publish a "spam account list" or whatever, and people could choose whatever their preferred block list is.

Or they could choose not to follow any block lists, if they so desire.

Some blocklists might only block spammers, another might block Donald Trump, and another might block anyone who posts any swear words at all, and individual people would choose how they would like to view their content.
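A minimal sketch of what such opt-in block lists could look like. The list names and feed structure are invented for illustration; the point is that the platform itself blocks no one, and each user's view is shaped only by the lists they chose:

```python
# Hypothetical published block lists, keyed by name.
published_lists = {
    "spam": {"spambot1", "spambot2"},
    "no-swearing": {"swearyuser"},
}

def visible_posts(posts, subscriptions, extra_blocks=frozenset()):
    """Filter a feed using only the lists this user opted into,
    plus any accounts they blocked personally."""
    blocked = set(extra_blocks)
    for name in subscriptions:
        blocked |= published_lists[name]
    return [p for p in posts if p["author"] not in blocked]

feed = [
    {"author": "alice", "text": "hello"},
    {"author": "spambot1", "text": "buy now"},
    {"author": "swearyuser", "text": "%$#!"},
]

# One user subscribes only to the spam list...
assert [p["author"] for p in visible_posts(feed, ["spam"])] == ["alice", "swearyuser"]
# ...another subscribes to both, and a third to none at all.
assert [p["author"] for p in visible_posts(feed, ["spam", "no-swearing"])] == ["alice"]
assert len(visible_posts(feed, [])) == 3
```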


> Things like Reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer.

The internet already works this way, if you don't like the moderation policies of a website you have the freedom to use a different website.


> The internet already works this way,

I am referring to platforms.

No, most platforms do not work this way.


>Things like Reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer

What you have described here is exactly how all websites work. If you don't like the site moderation you are free to use a different site, just like on reddit.com except you change a few more characters in the URL bar.


> What you have described here is exactly how all websites work

Not within the platform, no.

It is easy to create another subreddit. And if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor.

I am talking about access to the platform.

> Except you change a few more characters

No, you would not have access to all the Reddit infrastructure, and access to all the cross site stuff, using the same Reddit account.

It is pretty easy to move across Reddit, and within Reddit, and get all the advantages of it. You don't get all those advantages, if it is another website.

Other people can't use, for example, the same mobile Reddit app, to access your website, and would have to download a new app.

They can't log in using the same account. They can't keep track of their posting history, all through the same user link.

There are numerous examples like that. There are lots and lots of benefits to using the actual Reddit website, compared to using a different website.

So no, you cannot just create a new website and get all of the very significant benefits that you would get from having it all on Reddit, using the same infrastructure, account, mobile app, posting history, follower list, etc., etc.


> Not within the platform, no.

What does this sentence mean? No what? On what platform?

> It is easy to create another subreddit

It is easy to create another website.

> if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor

So what? There is no distinction from a moderation perspective. If you don't like how the mods run a subreddit you can use a different subreddit, same as any other forum on the internet.

> So no, you cannot just create a new website and get all of the very significant benefits that you would get from having it all on Reddit, using the same infrastructure, account, mobile app, posting history, follower list, etc., etc.

Yes, you can just create a new website or use a different one. You don't "own" the users of reddit.com; there is no reason why one should be entitled to the infrastructure or the users.


> It is easy to create another website.

You can't get all the same benefits of having it all on reddit. Things like being on the same mobile app, and having the same user account.

> Yes you can just create a new website or use a different. You don't "own" the users of reddit.com, there is no reason why one should be entitled to the infrastructure or the users.

This is called a barrier to entry. Regardless of who "owns" all of these benefits, they are still something that a person does not get if they simply create a different website.

The missing benefits would make "creating another website" significantly less useful, and they are the ones I mentioned before: you would not be on the same mobile app, would not have the same follower list, would not have the same user account, etc.

These are huge benefits that one would not get by simply creating a different website. Who "owns" them does not change the fact that these benefits are large, and you would not get them if you merely created a new website.

> There is no distinction from a moderation perspective

Yes there is. The difference is that if you create a new website, you don't get all those significant benefits that I talked about. That is the distinction that I am talking about.


> Give people the tools to self moderate.

In the old days on Usenet we had “killfiles”. It worked pretty well.
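For anyone who never used one: a killfile was essentially a per-user list of header-matching rules applied client-side before display. A minimal sketch in Python (the rule patterns here are invented for illustration):

```python
import re

# A per-user killfile: (header, pattern) rules; any match drops
# the message before it is ever shown.
killfile = [
    ("From", re.compile(r"troll@example\.com")),
    ("Subject", re.compile(r"MAKE MONEY FAST", re.IGNORECASE)),
]

def keep(message):
    """Return True if no killfile rule matches this message's headers."""
    return not any(pattern.search(message.get(header, ""))
                   for header, pattern in killfile)

messages = [
    {"From": "alice@example.com", "Subject": "Re: moderation"},
    {"From": "troll@example.com", "Subject": "you are all wrong"},
    {"From": "bob@example.com", "Subject": "make money fast!!!"},
]

visible = [m for m in messages if keep(m)]
assert [m["From"] for m in visible] == ["alice@example.com"]
```

The filtering happens entirely in the reader's client; the server carries everything, and nobody else's view is affected.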


Killfiles did not work well, and are part of why Usenet died. Long-time members will have good killfiles and a relatively good experience, yes, but new members start with getting every message. Building up a killfile involved a long process of deciding who was worth reading, which is a large investment when you've just started getting into a channel.

Having killfiles dictated by a central authority is completely missing the point. In the old days everyone made up their own minds.

Until Reddit decides your community is not OK and will quarantine you like https://www.reddit.com/r/TheRedPill/ or outright ban you like https://www.reddit.com/r/watchpeopledie

Self moderation only works up to a certain extent.


Oh I'll definitely agree I'm oversimplifying it, and there's room for improvement. But I think the core issue remains: if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.

And to your last point: as a decently frequent user of 4chan, that's pretty much what I like about it, though there absolutely IS moderation on most boards. People's minds go to /b/ and /pol/ when they think of 4chan, but there are several niche hobby communities on other boards which thrive in a (relatively) low-moderation, low-interference setting.


Relatively low-moderation? Half of the boards ban for political discussion in its entirety.

> if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.

Wouldn't this idea in essence eliminate the possibility of moderation of any kind? In order to do any moderation, every post by every user would have to be manually reviewed. This obviously doesn't scale, so in effect the law would eliminate the ability for website owners to moderate content on their own site. Also, what happens to sites like reddit and github where moderation is a feature offered by the product? How would people be able to find a moderated community if that's what they wanted? The idea seems totally unworkable.


Hey root_axis, I see you pop up pretty frequently on threads like this where people really don't understand the consequences of forcing companies to host speech and removing their legal protections if they decide to moderate. Personally I've seen the same dialog hash out so many times it's become exhausting to even reply to them.

I just want to thank you for continuing to fight for what's right.


haha, probably a sign I am spending too much time on HN, but it does surprise me to see this suggestion pop up so often, especially on this site, which happens to be a quintessential example of moderation as part of the product. Imagine what dang's workload would be like if he had to hand-review every one of these comments!

> This obviously doesn't scale

That’s not true though. What you mean is that the economics of moderation are not convenient to achieving your desired outcome. YouTube for example could easily afford to review uploaded content before publishing it. They just wouldn’t make as much profit as they would like to.


More than 300 hours of video are uploaded to YouTube every minute; the economics of comprehensive manual review are not "inconvenient", they are wholly implausible.

You’ve got it backwards. That much gets uploaded because it’s unmoderated. The vast majority of it is junk that everyone knows would never pass any sort of quality filter, so no one would bother to upload it.

That makes no sense. YouTube has no policy against "junk" videos, there is no reason why individual users would change their upload habits except for a tiny minority that knowingly upload violating videos.

No, there is that instant dopamine hit of uploading a video and seeing the likes in real time. Whereas if it were "upload a video and six months later it might be approved", people would be more diligent about it. This is very obvious human nature.

That's the way the system handled it for ages: common carriers vs. publishers. Tech companies decided that didn't apply to them, and that they could have the best of both worlds.

Social media is a bit different than, say, the phone system, in its ability to widely broadcast things.

It's different from traditional publishers in that there are orders of magnitude more 'publishers', rather than, say, a newspaper and a few TV stations per city.

It's a tricky problem.


>> The world isn't binary, we need a bit of both.

I believe the point the GP is making is that social media sites play both sides when it's in their best interest and neither when it isn't. So they moderate in the name of "public safety" when it suits them, but can't catch everything so they hide behind "impartial utility" to avoid responsibility.

They aren't walking a line, they are switching sides when it's in their best interest.


> They aren't walking a line, they are switching sides when it's in their best interest.

Yeah, my comment kind of mentions how that's just not in the interest of website owners and thus won't happen.

Since when do businesses work toward anything other than their best interest? That's kind of the whole point.


> force social media companies to choose to identify as a "platform" or a "publisher"

Who is going to "force" them? Governments? That ask them to censor and censure people in the first place?


Governments aren’t homogeneous. In the US there’s plenty of energy (especially among conservatives) to restrict social media giants’ right to censor, at least with respect to political censorship.

This is roughly some people's interpretation of the situation in the U.S. prior to 1996, following the Cubby and Stratton Oakmont cases.

https://en.wikipedia.org/wiki/Section_230_of_the_Communicati...

The legislators who created §230 thought these incentives were bad, and wanted service operators to be able to choose to place themselves somewhere other than these two endpoints.


Did providers really have the ability to claim to be "platforms" before? If so, then I guess I am super anti-Section 230 (which I have always been somewhat uncomfortable with, despite the EFF insisting that I should like it... but their arguments frankly always felt like "without this we don't have platforms", and I agree we should have platforms and would not want a world without platforms; in some sense I feel like the correct solution is to force everyone to be a publisher, and then you get people building distributed systems to have platforms, so no one can be said to own or control them).

> Did providers really have the ability to claim to be "platforms" before?

Possibly!

https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

> despite the EFF insisting that I should like it...

I should mention that I work there and I like §230 very much (although I'm not doing much work on it at the moment). One idea that I think is helpful here is that there are so many intermediaries that are involved in allowing you to communicate with someone.

https://www.eff.org/free-speech-weak-link

People are already applying all kinds of pressure on each of those intermediaries; with weakened §230 protections, more of them would also be threatening litigation.

> in some sense I feel like the correct solution is force everyone to be a publisher

I also like this at some level, and I remember when I had my own web site hosted on my own desktop computer. (Some of my online communications are still hosted by my friend.) But I don't tend to think tinkering with intermediaries' incentives about content is the thing that will get us there, because there are so many other practical advantages that people have perceived in the more centralized services.


What would Hacker News be? If someone posts spam comments on HN are the mods allowed to delete them? If someone writes a false comment is HN liable for that false comment?

That’s a good question. Presumably you could draw some line to distinguish between small communities and enormous ones (tens of millions of monthly active users, for example), or distinguish between social media and forums.

All that would happen is that the platform and the moderation would be separated, and the moderation made optional. There would be a 4chan-like Hacker News platform, and a Hacker News modlist that hides problematic comments.

Hacker news already operates like this in a small way with shadowbanned accounts. Users can choose whether to see shadowbanned comments or hide them from view, what level of censorship they prefer.

Platforms only being platforms wouldn't turn everything into spam and screeching. It just means you'd be able to see that if you wanted to.


I only use HN clients where the default is the sensible thing, see everything.

Curious what the website uses as a default for logged-out users / new accounts - anyone fill in the blank for me?


Showdead is off by default. You can test it in private/incognito mode.

Welp, there goes the respect I had.

Censorship by default, not just for crybabies who are afraid of reading things they don't like.

Depressing. But, that's what I've come to expect on this channel.


> If they choose "platform", then they can take no responsibility for content posted, but also allow all content, and only remove content when it is required of them by the legal system (when the content is illegal and has been reported as such)

If you require the content to be illegal for a platform to remove it, and there is a consequence for being wrong, that's a problem, since the determination is outside platform competency; this encourages them not to act even against likely-illegal content.

Section 230 was adopted specifically to address this problem: attempts to constrain socially undesirable content (even if aimed at illegal content) risked making online providers liable for all content under the model applied to traditional media, which was viewed as not functioning at web scale, at least not without greatly inhibiting innovation. (Of course, it's pure coincidence that Section 230 support is eroding now that there are huge incumbents who would benefit from making it drastically more expensive for new challengers to scale up.)


Add a toggle switch to enable the content. Bam, now you have a free platform and for those who don't want to see the content they disagree with now don't have to.

You even give an incentive to create an account.

You literally cannot please everyone so trying to is a lost cause.


You wrote that comment on a moderated, zero revenue forum! You really want HN to be liable for every post here? Where would you post that opinion if they shut it down instead?

Seems like one could craft policy that treats social media giants differently than relatively small forums, no?

Can we assume you’ve never moderated any forum, or the comments on any web site?

Any site owner automatically removes thousands of spam comments. But these aren’t illegal. Your system would either:

1. Force sites to publish all legal spam, or

2. Face liability if anyone sneaks illegal content in. In practice all sites would need to censor all comments until they passed human approval. Big sites would probably also need approval from lawyers.


Platforms would need to allow themselves to be DDoSed... Not gonna happen. It won't stop alt-right talking heads with no clue about the world from promoting their "platform/publisher" agenda.

So if I run a large chess forum, and want to delete posts that aren't about chess, I would have to choose "publisher"?

And if some user X posted something about a dispute they were having with chess tournament organizer Y, and Y felt this was untrue and decided to sue X for libel, I'd also have responsibility for that alleged libel because I'm the publisher?

Before anyone says that this is a ridiculous scenario, that's pretty similar to what happened with Prodigy's bulletin boards before the CDA. Prodigy had content guidelines for their boards, moderators who enforced those guidelines, and software that screened for offensive language.

Someone posted a message that someone else felt was libelous, and that someone else sued both the poster and Prodigy. The court ruled that because Prodigy enforced content guidelines and had filters for bad language, they were a publisher and responsible for the content of all the messages.

It was that case that was one of the main inspirations for the CDA.

Before that, the way the case law was shaping up there were only going to be two options for an online forum.

1. If the forum does not want to be liable for what its users say, it cannot moderate or restrict their content, except as required by law.

2. If the forum places any restrictions on content, then it is essentially in the same position as a newspaper or a book publisher, and to keep its legal risks under control it is probably going to have to do a similar level of fact checking.


A law like this could only apply to websites operating at a certain scale, exceeding some revenue threshold, owned by publicly traded companies, or some other proxy to get only the sites we’re talking about here. This is just off the top of my head too, I’m sure much smarter people can craft a framework much better than OP’s which is already a very good one.

Yes, I would like the recourse of suing you for libel in this situation.

And if you would like to protect against that risk you should consider including a "flag" button or something on each post where I, member of public, can get ahold of you personally, without needing to create an account.


Ok, so the situation right now: you make a web forum, and delete posts that are about your friends doing bad things, but leave posts about that organizer because you don't know them and don't bother to look into reports that it is all, in fact, libel, because that's work and you aren't liable.

Meanwhile, you take down posts of people talking about how the tournament did not offer gender neutral bathrooms or how there aren't enough female chess masters, because that's "political" and off topic, but then turn a blind eye towards people harassing women on the forum and require them to keep begging you with the flag buttons to clean up your mess.

The result of this is just that the powerful majority gets something akin to a working filter and all the marginalized people don't just get lip service to their issues: they end up being unable to even bring up their complaints, because those complaints are divisive and often themselves against some terms of service banning meta-conversation (a common enough issue that this website--Hacker News--has such an indefensible clause).

This is all just not OK: you don't get to have immunity and have control. The idea that you should be able to restrict some of what happens but somehow not be even sort of responsible for everything else that happens is the kind of position you can only hold if you are part of the powerful majority, whatever it might be (cishet white males, corporate stooges, western prudes... take your pick) :/.

So, I guess, "congratulations tzs": you are demonstrating loud and clear that you think the legal position of running a chess forum that actively distorts the world view of everyone using it is more important to defend than the rights of groups like breastfeeding mothers (a group that is routinely thrown off of websites and deplatformed even as alt-right groups are empowered).

(Meanwhile, the entire premise that people should get to sort of look like a platform without really being a platform is robbing the world of actual platforms, and it is ridiculous: the existence of things that almost work like platforms for the majority of powerful people means that getting people to tolerate the pain of using fully distributed systems is by and large off the table :/. Either these large websites should offer real platforms or they should be forced to not pretend to do so, so we can actually launch and use distributed systems instead.)


The danger here, it seems to me, is that once you make it clear that the people saying that there aren't enough female chess masters have a legal right to be on this forum and cannot have their posts thrown out as "political," the people saying that men are naturally more suited to the game of chess and women should get back in the kitchen will rub their hands in glee and realize that they now have a legal right to be on every other chess forum, because their posts, too, are merely "political" and not libelous. They know how to toe that line - doing so is a core competency of internet trolls (see also "hide your power level"). That is, the tradeoff of saying that the powerful majority no longer gets to regulate forums and must give the marginalized a voice is saying that the marginalized no longer get to regulate forums either and must give the powerful majority a voice - and the powerful majority is, after all, powerful, and will take advantage of this too.

So the practical result is that only people with the resources to moderate their forums to the point that they're willing to accept liability will run moderated forums. That requires either significant resources to run the moderation program (and a chilling effect on what gets approved) or significant resources to deal with liability and American-style lawsuits. The breastfeeding mother who's setting up a blog for her three closest friends to comment on will technically be fine, but she can no longer grow her community beyond her three closest friends. The powerful (whether that's "big companies" or "socially powerful races" or whatever) are happy to keep restricting what happens on their platform and take on liability because that liability will affect them less.

I don't have a solution for this, and I find your argument mostly compelling, but I don't know that it gets us where we want to be either. I'm not really comfortable with "The price we pay for keeping MRAs from derailing forums is allowing people to say that feminists are derailing forums," but "The price we pay for making sure feminists can speak is making sure MRAs can speak" feels equally uncomfortable.

(BTW - I upvoted both of you because I think you both made good points.)


The solution is to anonymize and decentralize the internet, and we've been saying it for decades.

That didn't make the star-chans better; quite the opposite in fact.

It did from a certain point of view, which is kind of an unspoken point. Many of the people who argue that allowing anything but absolute and unfettered freedom of speech is a slippery slope towards fascism are not doing so from a neutral or academic point of view. Rather, they want to push the Overton window of societal acceptance for ideas currently considered intolerable by mainstream society by guaranteeing exposure to certain forms of political speech and propaganda.

The technical underpinnings of the internet are decentralized well enough. The reality is that these aren't technical problems, and any technical workaround will ultimately become circular. We have to come up with the necessary legal and social innovations that will allow people to cope.

It is already both (for the most part). The problem is that people will always prefer centralization. From shopping malls, capital cities, credit cards... to Internet platforms, the unavoidable fact is that centralization makes things easier, cheaper and more convenient.

Even federated-by-design technologies like email have slowly turned into mostly centralized services. Just having to pick an email provider is too much work for most people. Gmail is good and free, so everyone uses that. As long as servers cost money and big corps can offer free things to get user share, decentralised services are doomed to never become mainstream.


That's a slightly different solution from just decentralizing the internet, which 'saurik advocated above. (I'm in favor of decentralization on principle, to be clear, I'm just not sure it solves this problem.)

The problem with anonymizing the internet is that it prevents forming real-world communities. If you're forced to remain hidden so that you escape scrutiny, the breastfeeding mother has no way of finding others in her city beyond her three existing friends, because none of them want to be known as the dissident who supports breastfeeding. There's still value to the online communities, yes, but this seems like a significant abdication of the power of the internet to help society.

The problem with decentralizing the internet without anonymizing it is that any legal restrictions remain. If there's a law that says that Twitter can't kick breastfeeding mothers off their website without taking legal responsibility, that same law will say that my Mastodon instance must federate with every alt-right-but-toeing-the-line Mastodon instance if I don't want to take legal responsibility for everything that shows up on my Mastodon instance.


>...that same law will say that my Mastodon instance must federate with every alt-right-but-toeing-the-line Mastodon instance if I don't want to take legal responsibility for everything that shows up on my Mastodon instance.

This seems highly unlikely to me. What if you are running a different version of the codebase or something from those instances? v2.0.1special. Even leaving aside constraints like that, volume of inbound traffic, etc. are direct costs to federation with other instances that I doubt laws would be willing to force you to pay.

I wonder if there are examples of this having been done in the past. Microsoft's antitrust case is the closest that comes to mind, but they were operating from an absolutely dominant standpoint at the time. If Mastodon/ActivityPub ever reaches that level, I'd consider it...a win?


The law doesn't have to force you to pay it - it just has to say, either federate with everybody or nobody. All it needs to do is say, if you're accepting posts from other people, you need to do so in a non-discriminatory way.

(Providers like Twitter would love this law because it would kill decentralized systems. In fact this would allow Jack to "decentralize" Twitter https://twitter.com/jack/status/1204766078468911106 while making sure no decentralized systems run by the powerless can compete. Everyone else can technically run a decentralized system, but Twitter is the only one worth using. It's the perfect regulatory capture.)

Sure, you can probably get by for a bit by just happening to fail to peer with Nazis, but you would immediately lose the ability to make a public shared block list with reasoning, a la https://github.com/dzuk-mutant/blockchain/blob/master/list/l... . You'd also have trouble successfully peering with non-Nazi instances - if you publish the protocol changes in v2.0.1special so other reasonable people can federate with you, the Nazis will just apply those patches too.
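For concreteness, the domain-level blocking that a shared block list encodes amounts to a simple predicate on the admin's side. A sketch using hypothetical domain names (this is not Mastodon's actual implementation; admins configure real domain blocks through the server settings):

```python
# Sketch of domain-level federation blocking, the mechanism behind
# shared instance block lists. Domain names are hypothetical.

DENYLIST = {
    "nazi.example",        # listed reason: hate speech
    "spamfarm.example",    # listed reason: bulk spam
}

def accepts_federation(remote_domain: str) -> bool:
    """Return True if we federate with remote_domain.

    Blocks an exact listed domain and any subdomain of it.
    """
    d = remote_domain.lower()
    return not any(d == bad or d.endswith("." + bad) for bad in DENYLIST)

blocked = [d for d in ["chess.example", "posts.nazi.example"]
           if not accepts_federation(d)]
```

The point in this subthread is legal, not technical: the check itself is trivial, but publishing the DENYLIST with reasons is exactly the discriminatory moderation a must-federate law would forbid.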


Do you have a clue what MRA's truly are?

No replies, just negs. I assume at the moment that you have a wrong conception of MRAs and are attributing the MRA label to certain other types of groups.

Isn't the powerful majority you describe one any individual can enter into by becoming the administrator of their own site?

The scenario you describe sounds like it makes for a pretty terrible chess site, and I bet people would like an alternative.


The only solution is to kill all small independent boards and only allow everything to be handled by Facebook et al who are sufficiently armed with lawyers.

Death to independent free speech publishers.


> you think the legal position of running a chess forum that actively distorts the world view of everyone using it is more important to defend than the rights of groups like breast feeding mothers

Wait, are you claiming chess forums are known to often discriminate against breastfeeding mothers? Or did you quietly jump from the specific to the general case? If so, are you saying Internet communities tend to discriminate against them? Hell, the closest I've seen is people IRL complaining about breastfeeding on public transit/other small, closed, public spaces. Justified or not, how is this infringing on any rights and how is any of this related to the Internet? How would one even know someone is a breastfeeding mother on a pseudonymous mostly text-based platform?

> The idea that you should be able to restrict some of what happens but somehow not be even sort of responsible for everything else...

But is that the case? If you don't restrict porn etc., you get hit by children's safety laws. If you don't remove ISIS videos you get hit by anti-terrorism laws. Same for hate speech, although I believe you're liable only if you refuse to take it down after being ordered to. (Basing this on a mix of US and EU laws since I forget which are which.) These things apply to "platforms" as well. But besides that, you can't have a perfectly neutral platform anyways. If you want the platform to survive, it needs to be usable. A chess forum that's full of politics is not usable and neither is a cooking forum full of nazis.


> And if some user X posted something about a dispute they were having with chess tournament organizer Y, and Y felt this was untrue and decided to sue X for libel, I'd also have responsibility for that alleged libel because I'm the publisher?

What's the precedent for TV and books? Are the TV networks or book publishers included in libel lawsuits? Not answering your question either way, just gathering information for now.


This depends on jurisdiction.

Books aren't user-generated content – everything in a published book can be easily vetted.

Neither is TV, for the most part. You have guests on talk shows who you can't control what they say, but you are also specifically choosing those people, so placing the blame on the publisher (TV Network) in that case makes a fair amount of sense.

With an online forum where anyone can sign up and start posting, you don't have that.


> So if I run a large chess forum, and want to delete posts that aren't about chess, I would have to choose "publisher"?

No. Any such law could easily have provisions that exclude moderation for off-topic content (which would be defined carefully so as to include spam).

It would mean you can't delete "trolling", "bullying", on-topic flamewars and other categories of things you might want. To do that you'd have to indeed become a publisher. And then you'd need to respond to complaints quickly and take down libellous content, or become responsible for it.


Great! I'll just run a forum on "everything chess that isn't trolling, bullying, or flamewars".

Very much yes. If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libel.

Maybe the model for internet forums should be nonprofit private clubs, rather than profit-making enterprises. If all members bought equity and a voting share in the administration of the forum, there would already be legally established means of determining liability for acts committed within the club. Is collective ownership of a commons such a crazy idea? You could still make your money as a subcontractor of the club providing technical services for hire, but instead of being a (benevolent) dictator for the users, you would be subject to the users. Shifting ownership would shift liability.

The essential ethical problem of the current situation, even from the days of people complaining that Youtube's primary business was illegally sharing copyrighted content, is that liability was shifted to the users while all profits went to the owners. Maybe that should just not be possible in the future.


> If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libel.

I can tell if a post is about chess with just a quick skim to a very high degree of accuracy.

Telling if a post is libelous will often require in depth investigation of its claims, often requiring emailing or calling people involved in whatever the post was about and tracking down and interviewing witnesses.


Not only that, but someone can sue you even if something isn't libelous. They just have to believe it is.

> Very much yes. If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libel.

The relevant criterion is not whether something is libelous. It's whether someone might sue because they think it's libel. At minimum you probably need a lawyer to evaluate claims, and even then you're not sure you're safe. When a group has assets it becomes a more likely target.

> Is collective ownership of a commons such a crazy idea?

Legally yes. There is no way to shift liability from the club to an individual member who made a post under the model you’re describing. Liability would be for the site owner i.e. the whole club


> There is no way to shift liability from the club to an individual member who made a post under the model you’re describing. Liability would be for the site owner i.e. the whole club.

If they only let rich people into the club it might work. The club could require members to agree to contracts that require the member to indemnify the club for any costs and damages associated with any lawsuits against the club over that member's posts.

Technically, that doesn't really shift liability to the member but if the member is going to pay the club's damages and the club's legal costs it's almost as good. That only works if the member can actually pay, hence the "only let rich people" in part.


True. That leads to adverse selection though: the clubs self select as juicy targets laden with assets. Further, only the rich can speak.

I doubt those making these proposals upthread want these consequences. For whatever reason people really don’t think their proposals through. (Not referring to you)


You seem to be confidently supporting a case that this is an impossible structure because only the rich would be allowed to speak. You don't seem to be making any distinction between the situation the club would be in from the situation that individuals are currently in.

The club being liable is no different from an individual being liable, which is the current situation, other than that the club has the ability to dissolve. Again, if libel law is broken, fix it. Somehow your alternative to the unthinkable outcome of only the rich being able to speak is to exclusively immunize the rich from the consequences of speech that they have direct and complete control over.


People don't sue random Internet commenters for libel.

But imagine a forum exclusively for rich people. They have all taken out indemnity agreements that will cover lawsuits of indefinite liability, millions or billions. And if you find anything suable anywhere on the forum, you are guaranteed by the structure of the club that someone will make good on paying it if you win.

That’s the structure that you were arguing for. It’s a strange structure.

But I am not arguing that only the rich can speak. Regular users don’t face any real risk of libel for Internet comments.

The risk would arise for the private club because it has a target on its back: the money is there.

As a reminder, the reason regular people couldn't speak under your proposal is that in normal cases the forum would itself be liable for their comments. So it would be uneconomical to run a forum due to the legal risk.


> Liability would be for the site owner i.e. the whole club

I don't see a problem with that. The club should have vetted their members better, and since it didn't, it should be dissolved and its assets handed to the wronged party.

If libel law is broken, fix libel law. If a site distributes misinformation (that is not protected by law) that harms someone, someone should be responsible.

Indemnification of massive tech companies and not individuals is just a giveaway.


Sounds like this decision may be made for them, see previous discussion on one method which may be used to ban E2E https://news.ycombinator.com/item?id=22202110

First question: what would you make different between your proposed system and the (somewhat notorious) DMCA claim system that we have right now? Do you think that the DMCA claim system is a good model for this? Or are there additional safeguards you would put in place to stop platforms from just removing anything they get a request for? Would you make filers liable for frivolous takedown requests? If so, does that mean that takedown requests can't happen anonymously? We already have a system in place where anybody can request anything get taken down for copyright reasons, and unsurprisingly, it is widely abused for censorship.

Second question: when you see platforms like 8Chan taken down today, do you think that's wrong? Should companies like Cloudflare be forced by the government to leave those sites up, even when they're advocating for completely despicable things or crossing a line into outright threats? If Cloudflare isn't going to be forced by the government to leave those sites up, why would anyone willingly host the platforms that can't do any moderation at all, even of open hate speech, rampant spam/scam advertising, pornography, or borderline illegal stalking/threats? To push that question a step farther -- when you look at GMail's auto-rejection and sorting of spam, do you think that's wrong?

Third question: do kids get to have user-generated content hosted anywhere? They can't realistically go on 8-Chan, so is the idea that they won't join forums until they're old enough that they feel comfortable in environments filled with Nazis? Bear in mind that this includes communities like Miiverse. Nintendo isn't human-moderating every Miiverse post before it goes public. There's no world where they willingly open themselves up to the kind of liability you're talking about, those platforms would just be shut down.

Final question, and I do genuinely mean this as a question, not as a dismissal or as a request to go away: why are you on Hackernews right now, given that Hackernews fits very squarely in the category you're saying shouldn't exist? Hackernews is heavily moderated (way more heavily than Twitter), but it also allows people to post crazy stuff. The moderation mostly happens after the fact, which would open the owners up to liability. Would Hackernews be a better community if submissions/posts weren't moderated? Would Hackernews be a better community if every post you made went into a human-moderated queue, and you had to wait (at least) 2-3 days before it became publicly visible? Bear in mind, Hackernews is largely moderated by something like 3 people, so even in the best case scenario they're definitely going to need to hire more and get a bit more aggressive about advertising and fees.


More protocols, fewer platforms. Significantly reducing the money in web advertising and especially the monetizability/utility of data by and about users is probably necessary for that to happen.

It's not for technical reasons that we stopped having a smallish number of viable standards with many implementations each, and started having many wholly incompatible and deliberately non-interoperable implementations of basically the same thing. I doubt Email could be invented today.

[EDIT] point is capturing and controlling communication is currently a top priority of big tech players, and that won't change until it is, one way or another, no longer highly rewarded—until then, every no-funds or paid-for-by-actual-users solution is competing with bottomless pockets and "free".


Yes, yes, yes. There needs to be more emphasis on common protocols. This space has been much less vibrant in, say, the past 15 years than before that. If we exclude cryptocurrency, I can't think of any common, groundbreaking protocol introduced since BitTorrent. I am probably missing some, but in the 90s and early 00s, protocols were where the exciting stuff happened.

I think one reason this might be the case is because modern cloud-computing makes scaling a single centralized service super easy and straightforward.

In the early 2000s if you wanted to build something "at scale", the only real way forward (besides raising tons of money upfront) was to build a decentralized protocol. Offload most of the compute, bandwidth and storage to the end nodes.

Nowadays, you can just put an AWS auto-scaling group behind a load balancer and a CDN, and you can pretty much handle infinite traffic.


This. If a company wants to be a platform and enjoy certain protections from responsibility for content posted by its users, it should be required to open the protocol and the data, and be prevented from exercising absolute control over the user interface and from denying access to some organisations, 3rd-party clients, etc. If that kills a bunch of business models, so be it.

The platforms have been muzzling speech for a long while and some through sneaky ways like making content hard to find or surface. So they can still restrict the spread of speech while claiming they do not interfere in the creation and publishing of it. So platforms will have to come clean on their algorithms and content discovery process as well.

I don't think any individual knows anymore, it's just ML optimization for getting the most ads, presumably through engagement.

Except we know this isn't true. All of their systems take keyword input with manual curation for suppression/boosting. Google maintains blacklists for news, etc.

Journalists and the press should be taking a long, hard, serious look at running their own ActivityPub installations.

Somehow to me the problem is that social networks are turning into implicit mobs, and that's surprisingly scary. The information highway became a confusion boulevard.

The solution is not to rely on for-profit corporations to host your content for free.

Imagine a world where any average citizen could host their own web page on the internet.


I wish that were still viable, and I want to return to that world, but it's pretty much impossible to compete with the reach a major platform gives today. It's possible that a publisher/platform legal split would do enough damage to our current major platforms to push us back in that direction.


> I wish that were still viable

It is perfectly viable, it is now cheaper and easier than ever to host your own website; people do it all the time and sometimes even host their own platforms.

> It's pretty much impossible to compete with the reach a major platform gives today

I don't see why anyone should be entitled access to "the reach of a major platform". There are many thousands of "non-major" platforms thriving all around the internet that service many different needs and communities; this system works.

On a personal level, I don't see guaranteeing access to social media websites as a worthwhile use of government resources. If we are going to use the government to force companies to offer free service to everyone, no questions asked, there are more pressing needs that level of intervention could address before stepping into the realm of social media.


The solution is easy: full data-portability laws that "back up" into a centralized database (which could be a blockchain, to avoid manipulation). Then each platform sets its own rules, and users can pick the platform of their choice for its UI and UX, design and governance.

This will not address the loss of audience that happens when Wikileaks followers are using Twitter for updates. You must own the interface to your audience, not just your data.

Maybe not the first time a specific type of offence occurs, but each bad action by a controller/leader will result in a migration whose size reflects the severity of the offence. I don't disagree that people will benefit, and it will be necessary for people to have their own, non-centralized interface.

The fediverse seems like a good start. Wikileaks should host their own instances of mastodon/diaspora etc, and we should stop participating in these centralized commercial enclaves expecting them to prioritize respecting and protecting our free speech on their web sites.

Using Twitter is a privilege, not a right.

On the contrary: Twitter having the opportunity to earn tons of money from a public good like the Internet is a privilege, not a right.

Completely wrong. The Internet isn't a public good. The Internet isn't even really a thing. It's just a bunch of interconnected networks that are mostly privately owned.

The solution that is within everyone's reach is to choose to use simple blogging as platform for communication. The standards (RSS, HTML, HTTP etc) are open and if you keep the domain name within your control you will be much less vulnerable from de-platforming.
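To make the point concrete, here is a minimal sketch (not from the comment above) of how little machinery an open-standard feed actually requires: a syndication feed in the RSS 2.0 shape, built with nothing but Python's standard library. The site name and URLs are placeholders.

```python
# Minimal RSS 2.0 feed built with only the standard library.
# Site title and URLs below are hypothetical placeholders.
import xml.etree.ElementTree as ET

def build_feed(site_title, site_url, posts):
    """posts: list of (title, link, description) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = site_title
    for title, link, description in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("My Blog", "https://example.org",
                  [("Hello", "https://example.org/hello", "First post")])
```

Serve that string as static text from any host under a domain you control and every standard feed reader can subscribe; no platform sits between you and your readers.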

The barrier to entry is too high and the ease of use too low. It's like saying we should write letters to avoid the telephone being tapped.

Well if the telephone is tapped, writing letters will help.

Remember when RSS was a thing? It was actually pretty great.

It's still a thing. It's still great.

Agreed. But who is pressuring these social media companies to censor. Maybe something needs to be done to stem the absolute power of those pressuring the social media companies.

Lets be honest here, twitter, reddit, google, etc didn't decide to censor all of a sudden. They were pressured into censoring. By whom?


[DELETED]

> That's not what we have in the US today—but we should

It'd be a pretty huge encroachment on freedom of the press.


The solution is simple: self host your most important messages.

mandatory post shilling for ssb (secure scuttlebutt) and other fully decentralized means of communications.

One approach would be to hold social media platforms to the same standard as the government in terms of user rights like freedom of speech - regardless of their terms of service.

There have been a few court rulings in Germany that go in that direction. Facebook, for example, was disallowed from deleting some posts/pages which violated Facebook's "standards" but were permissible under the freedom-of-speech clause of the German Basic Law. However, thus far the courts haven't formed a consistent opinion and have also ruled the opposite way. As Facebook didn't appeal the rulings, it hasn't reached the federal or constitutional court level yet.


Doesn't WikiLeaks own and operate their own website? I don't think they're censored; I think a private company stopped giving them an open platform to spread their info for whatever reason.

A large world renowned media organisation was banned from another large world renowned media organisation. They can sort it out between themselves.

We could go back to the days of personal blogs.

Stop using Twitter. Why is that not a solution?

There is nothing forcing someone's computer or smartphone to connect. Just stop using it.


If you stop using Twitter you stop being able to communicate with people using Twitter. There are many organizations who would love nothing more than to stop using Twitter, but they cannot because Twitter has a captive section of their audience. And it is a huge section of Wikileaks target audience, which is why this is a problem.

I think that decentralization has to be the solution. Having a handful of websites run by megacorps was not how the internet was meant to work. The fediverse around ActivityPub is looking very promising. Mastodon is the most prominent service using it, and it has millions of users now. The experience is strictly superior to Twitter in my opinion. And there's no single central server on the network, making Mastodon much harder to censor and manipulate than centralized networks. There is no single company deciding what content can go on the network, and servers are hosted by regular people across many different countries.

Mastodon also allows for account verification, where the account can reference a specific website that's owned by the user. So, if a reputable source wants to have a verified account they can link it up with their site.
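That verification works through `rel="me"` back-links: the profile links to the site, and the site must contain a link tagged `rel="me"` pointing back at the profile. A rough stdlib-only sketch of the check (the URLs are placeholders, and real Mastodon does more, e.g. fetching the page over HTTP):

```python
# Sketch of a rel="me" back-link check: the linked page must contain
# an <a rel="me"> or <link rel="me"> pointing back at the profile URL.
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "")
        if tag in ("a", "link") and "me" in rel.split():
            self.rel_me_links.append(attrs.get("href"))

def verifies(page_html, profile_url):
    parser = RelMeParser()
    parser.feed(page_html)
    return profile_url in parser.rel_me_links

page = '<a rel="me" href="https://mastodon.example/@alice">Mastodon</a>'
print(verifies(page, "https://mastodon.example/@alice"))  # True
```

The nice property is that verification requires no central authority: anyone who trusts the website can trust the account, and vice versa.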


I've mentioned my similar thoughts in another comment on hackernews but will repeat them here.

I think ActivityPub & federation look promising due to the cost of cloud hosting going down every year, allowing self-hosting to become easy and affordable for a lot of people.

However there are other trends that need to continue for federation to succeed.

- Continued competition in cloud hosting market not just in US, but globally

- Automation & simplification of monitoring, upgrades, and security patching

- Reducing the complexity of the federation software. Mastodon and Pixelfed are great, but I feel like the feature creep can make them difficult to maintain while self-hosting. Less software complexity = less upgrades, less package dependencies, less security issues


I'm going to make a bold prediction, but I think at some point within the next 2-3 years you'll see one of the incumbent commercial social media services white-labeling their application for organizations, much in the same way Gmail, Google drive, Google docs, etc is white-labeled as G Suite. It will probably implement the necessary federation protocols (ActivityPub) and provide an on-ramp to the fediverse for those not looking to "self-host" per se.

I'd use mastodon more, but the people I want to follow are all on twitter, is there an easy way of forwarding tweets from twitter to mastodon, or setting up an account that follows twitter users?

That's always the problem with novel networks, reaching critical mass.

Though I can't see why a #deletefacebook sort of campaign couldn't snatch users away from the incumbents. It might just need enough influencers to switch exclusively to the new platform.


#deletefacebook has been great for Mastodon, it's currently at over 2 million users and growing actively

The Fediverse is approaching 5 million users across its various implementations; the Mastodon software itself hosts about 3.8 million with nearly 2800 servers:

https://fediverse.network/mastodon

This is a genie that can't be placed back inside the bottle. With ActivityPub being a W3C recommendation, things are looking fantastic for this ecosystem.


Yeah that's fantastic growth. I completely agree that the genie is out of the bottle now because with so many users there's a sufficient population of technical users who will develop, maintain, and host ActivityPub based services. And things will only keep improving from here on out as more people keep joining the fediverse, and in turn making it more appealing for new users.

I wrote a bot to mirror Twitter/Tumblr/RSS to Mastodon

https://github.com/yogthos/mastodon-bot
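The core loop of such a mirror is simple enough to sketch; this is a hedged illustration, not the linked bot's actual code. The posting step is stubbed out (a real bot would call Mastodon's `POST /api/v1/statuses` endpoint with an access token), and the entry fields are assumed, not taken from any particular feed library.

```python
# Hedged sketch of a feed -> Mastodon mirror loop. Network calls are
# stubbed; only the dedup and formatting logic is shown.

def new_entries(entries, seen_ids):
    """Return entries not yet mirrored, and record them as seen."""
    fresh = [e for e in entries if e["id"] not in seen_ids]
    seen_ids.update(e["id"] for e in fresh)
    return fresh

def to_status(entry, max_len=500):
    """Format a feed entry as a status (Mastodon's default 500-char limit)."""
    text = f'{entry["title"]} {entry["link"]}'
    return text[:max_len]

seen = set()
batch = [{"id": "1", "title": "Post one", "link": "https://example.org/1"}]
for entry in new_entries(batch, seen):
    status = to_status(entry)
    # post_status(status)  # stubbed: real bot POSTs to /api/v1/statuses
```

Persisting `seen` between runs (a file or small database) is what keeps the bot from double-posting when the source feed is re-fetched.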


It seems like there's an even simpler solution that doesn't require bootstrapping a totally new network.

To me, it seems like Twitter serves two closely related but ultimately distinct core purposes. One is publishing. It basically hosts content on a globally accessible network. Two is identity. A person's Twitter handle is an authoritative record of their identity. A person knows that any tweet on the @wikileaks handle came from someone Wikileaks authorized to use the account (barring some sort of hack).

Twitter's censorship and enforcement ability mostly relies on the identity side. It bans you from continuing to publish under your previously, globally known Twitter handle. Creating a new sock puppet account is trivially easy. Even the best of IP bans is clunky and easy to work around. We all know if Wikileaks wanted to open a new handle, there's really nothing Twitter can do to preemptively stop them. They can only react with a game of whack-a-mole.

The problem is that this poses a coordination problem between publisher and consumer. Wikileaks' readers have to somehow discover and verify Wikileaks' new handle/identity. But think about how this dynamic changes if the identity function is off-loaded from the publishing function.

Imagine an unauthorized client that overlays on top of vanilla Twitter. Instead of subscribing to Twitter handles, you subscribe to cryptographic identities. Participating Twitter publishers cryptographically sign their tweets, proving their identity. The overlay client regularly scans the entire site's feed to discover any new handles using a known signature. If Twitter Inc. bans your handle, just fire-up one of your sock-puppets and all your overlay subscribers seamlessly re-point to the new handle.

The best part is this approach is backwards compatible with vanilla Twitter. You can keep using your pre-existing Twitter handle, and vanilla Twitter subscribers don't see any difference. But if you're afraid of potential censorship, you can encourage readers to gradually adopt the overlay system.
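The subscriber-side re-pointing logic can be sketched with nothing but the standard library if we substitute a hash chain for real signatures; this is a toy stand-in, not the scheme described above. A production overlay would use actual public-key signatures (e.g. ed25519) that sign the new handle itself; revealing a bare hash-chain preimage, as below, proves continuity of identity but does not bind the handle to the reveal.

```python
# Toy sketch of handle migration using a hash chain as a stdlib-only
# stand-in for public-key signatures. Followers know only the chain
# tip; each migration reveals the next preimage.
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def make_chain(seed: bytes, n: int):
    """Publisher precomputes the chain once; chain[i+1] = sha(chain[i])."""
    chain = [seed]
    for _ in range(n):
        chain.append(sha(chain[-1]))
    return chain

class Subscriber:
    def __init__(self, tip: bytes, handle: str):
        self.tip, self.handle = tip, handle

    def migrate(self, new_handle: str, revealed: bytes) -> bool:
        """Accept a new handle iff the revealed value hashes to the tip."""
        if sha(revealed) == self.tip:
            self.tip, self.handle = revealed, new_handle
            return True
        return False

chain = make_chain(b"publisher-secret-seed", 3)
sub = Subscriber(tip=chain[-1], handle="@wikileaks")
sub.migrate("@wikileaks_backup", chain[-2])  # accepted: preimage checks out
```

Each reveal consumes one link of the chain, so the publisher can survive as many bans as the chain is long; the subscriber never needs to trust Twitter, only the math.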


What stops Twitter from installing the client and using it to auto-ban any new account that has the same cryptographic key?

Also, if you have an overlay network to distribute mappings from keys to Twitter handles, why don't you just add the ability to distribute tweets and cut Twitter out of the picture?


You are trying to solve a social and legal problem with technology.

> We all know if Wikileaks wanted to open a new handle, there's really nothing Twitter can do to preemptively stop them. They can only react with a game of whack-a-mole.

And that's what they do. Avoiding a ban by creating a new account is already against the terms of service.

> Imagine an unauthorized client that overlays on top of vanilla Twitter. Instead of subscribing to Twitter handles, you subscribe to cryptographic identities. Participating Twitter publishers cryptographically sign their tweets, proving their identity. The overlay client regularly scans the entire site's feed to discover any new handles using a known signature. If Twitter Inc. bans your handle, just fire-up one of your sock-puppets and all your overlay subscribers seamlessly re-point to the new handle.

So then Twitter (or a motivated investigator) runs the overlay network as well and gets the new account automatically identified.


Everything needs to move to decentralized platforms.

Unfortunately AFAIK there aren't any decentralized platforms that are ready for mass audiences. Techs like IPFS still have a long way to go before they can be reliably used for Twitter or Facebook-sized amounts of data.

Decentralise. That's the solution.

We need to get back to the pre-Bork interpretation of anti-monopoly regulation (where demonstrating extreme market dominance in and of itself was quite sufficient), and proactively enforce it. The problem with Facebook, Twitter etc is that they simply control too much of their respective market segments, and so their censorship decisions affect too many people. If something is "too big to censor", it simply shouldn't exist.

No, it's not a problem. WikiLeaks is free to post whatever they'd like on their own platform, which they already have.

We need to handle the root cause. In the case of US companies it has to do with everything the Patriot Act brought: something which should have been a temporary "boost" turned into the new status quo. Nobody will voluntarily give up these new powers over your services, and it breeds a climate of anticipatory obedience within US companies that will be hard to get rid of even if the Patriot Act isn't extended again.

This problem can only be solved by electing fundamentally different actors into office.


In what way can any social media company "censor" someone?

I don't read Twitter much. I still get plenty of information from Facebook, Podcasts, Hacker News, random blogs and websites, news websites, Discord chats, YouTube, etc.

Even if Twitter decided to kick out someone, all of those other avenues are available to them, some of which aren't under the control of almost anyone.

It feels to me like we're living in the age where it's absolutely easiest to "publish" anything at all you want, and yet people are constantly complaining how hard it is. I just don't get it, it seems factually wrong.


Agreed in principle, but it does show the Twitter moderation to be completely arbitrary. Given, this is not the only case.

Not sure where they are going with this, but they probably will blame the Russians.


Generally they shadowban you, which means you can post and interact normally so you don't get angry, but no one can see your posts. The algorithmic feeds also prevents many people from being heard when they need it most, such as in times of mental or physical suffering. They know you would rather see smiling people on the beach than suffering people.

I don't see what responsibility social media companies have to host anyone's voice. They should be able to arbitrarily censor who they wish.

The issue comes from their massive reach and people's reliance on them for information. But this is more rooted in the monopolistic power companies like Twitter, Facebook, and Google wield, and should be fixed accordingly. Not by limiting private entities' power to decide whom they want using their service.


So you're saying it's not a technical glitch? /s

ActivityPub. Open Protocols. re-decentralize the web

Population of US: 330 million

Twitter US MAUs: 68 million

% of bots on Twitter: ~15%

So Twitter users that really matter: ~57 million. Way overrated.

This might be a good moment to remind people that Assange has been the victim of a successful smear campaign since the moment a woman did NOT make a rape allegation against him in Sweden. Nils Melzer, UN special rapporteur on torture, actually went over the original proceedings and gave a long format interview on the matter lately. He too took some convincing to even touch the case, that's how effective sexual violence as a smear tactic is. Which is a damn shame, given that there's plenty of real sexual violence to go around, and which now once more gets harder to make visible.

Here's the interview: https://www.republik.ch/2020/01/31/nils-melzer-about-wikilea...

tl;dr:

Assange was right about everything that has been made falsifiable: secret indictment, spying on his lawyers, extradition to the US, no fair trial, the whole lot. He's showing every mental and physical sign of being tortured. He needs help, and he's being denied the very rights that western society purports to uphold.

EDIT: and that is leaving aside the whole dimension of Sweden, the UK, Equador, and of course the US making (even more of) a precedent out of him. What this means for the free press, and the relationship between freedom (of expression, of information, and in this case of truth, really) vs national "security" (in this case, the national security to not be held accountable for shooting down civilians) should send shivers down everyone's spines.

EDIT 2 To the people downvoting: care to express which part of the above exactly you disagree with? Do you think it's factually incorrect? How about correcting it then? Do you think it's inappropriate? Why so? Off topic?


I have no idea why this is getting downvoted. Odd.

Well, Assange does stir emotions in many. That makes him a hard topic to discuss, even if, from a (human and press) rights perspective, the whole thing is pretty cut-and-dried.

Indeed. Assange is an asshole. A lot of people who have tried working with him have confirmed that. But that doesn't mean he hasn't been successfully smeared (he has), or that his contributions to journalism should be ignored (they shouldn't).

Assange isn't an asshole. Assange is a victim of torture.

False dichotomy. And being a victim of torture can in no way be defended by "he's an asshole."

When being or not being an asshole became something of importance regarding the issues at stake? The dichotomy might be false but you're missing the whole point.

I don’t see “being an asshole” to be the issue at stake here at all.

It is for sure an issue for people who are trying to work with him. It is understandable that people bring it up when conversations revolve around him. But it is completely irrelevant when we are talking about his contributions to journalism being censored.


> Assange is an asshole.

The smear campaign continues to be successful even in people who consciously know it was all lies.

The human psyche is pretty "amazing".


Some people are assholes. Some people are smeared with crimes worse than "being an asshole". Julian Assange can belong to the intersection of the two.

Not everything bad ever said about Assange is a lie.


The people smearing Assange frankly come across as the assholes to me. Assange may not be the most pleasant person ever; I have no idea, since I've never met the guy. But an asshole calling someone else an asshole makes the allegation lack any real credibility.

for some reason, a LOT of people here do not like assange/wikileaks anymore. which is super weird imho.

I would assume some associating it with Trump is behind it.

I suspect it's a result of the apparent politicization of Wikileaks. When it first appeared it seemed fairly neutral, but that neutrality appears to have vanished somewhere along the line.

but wikileaks was always political.

i mean, we all remember collateral murder and the amount of stuff leaked re: gwb and the iraq war.


>When it first appeared it seemed fairly neutral, but that neutrality appears to have vanished somewhere along the line.

Wikileaks has always been as neutral as reasonably possible, given that it consists of human journalists and their human sources, not emotionless impartial machines. They have an excellent accuracy rate. The only thing that changed is that their honest reporting became embarrassing to wealthy and powerful people closely associated with the Democratic party. The DNC leadership loved Wikileaks' reporting on governmental malfeasance prior to late 2008. What changed in late 2008 was not Wikileaks' demonstrated neutrality towards reporting on governmental malfeasance, but the color of the lawn signs of the victorious president.

I suspect there's also some deep resentment from the traditional J-school cohort, who have spent over a decade being embarrassed by outsider Assange and his colleagues. Wikileaks has consistently made them look like obedient stenographers to the powerful, not principled journalists willing to speak truth to power. Wikileaks has publicly exposed many in the media as being too afraid to break real stories for fear of government legal pressure and a reduction in corporate advertising dollars. Raytheon, Lockheed-Martin, and Boeing don't advertise their military hardware on the cable news networks because they're trying to sell cruise missiles and fighter jets to the folks at home. They do it to hold something over the cable news networks' execs' heads when embarrassing stories might come to light.

Combine the collective resentment of the leadership of both major American political parties and their connected donors, the national security apparatus, the military-industrial complex, and every mainstream journalist who's getting scooped by an Australian ex-black-hat and his friends who (gasp!) never even attended Columbia, and we end up with a vicious and unfounded anti-wikileaks narrative festering within the politically-acceptable conversation today.


> Raytheon, Lockheed-Martin, and Boeing [...] advertise their military hardware on the cable news networks

I had no idea that was happening. Sounds like American television is getting pretty wacky.


It's been like that for at least 15 years.

Wikileaks has always been a political project, why would you expect anything else from the left-wing anarchist CCC crowd that runs it?

wikileaks published the DNC emails, which harmed the DNC and Hillary in particular. It was a factor in low turnout and Hillary's loss. Since then wikileaks has been associated with Russia / Putin / Trump (bad). The Collateral Murder era has been mostly forgotten.

What's interesting is, many of the same people who criticize the leaks for hurting Hillary's chances would also applaud the leaks for demonstrating DNC bias against Bernie.

That was a pretty "clever" part in the smear campaign.

First declare him a "rapist". But that didn't cut it.

So they made him a "Trumpist". And many people still believe it up to this day.


"they" did?

So emails and conversations that show the Trump campaign and family communicating with Assange over leaks and timings were ... what ... smear?

* https://www.theatlantic.com/politics/archive/2017/11/the-sec...

* https://www.npr.org/sections/thetwo-way/2017/11/14/563996442...

* https://www.nbcnews.com/politics/donald-trump/trump-campaign...

> On Oct. 3, WikiLeaks, writing a day before it was set to announce the release of hacked Democratic emails, wrote to Trump Jr.: "Hiya. It'd be great if you guys could comment on /push this story."

Yeah. Can't imagine why he might be considered a "Trumpist"...


"harmed" seems like an odd choice of words. The leaks made public a lot of exceedingly unflattering facts (and perhaps illegalities) that put the DNC and Clinton in a very different light. AFAIK, few or none of those facts has been refuted. To me, it looks like the leaks were a great public service.

(I have not much doubt that one could find a similar cesspool on the RNC side.)

The effect of the leaks would have been rather minor if all involved would have just issued a mea culpa and fired the head of the DNC outright. Instead, it felt like they just told us all to suck it up.


I wouldn't call it weird.

The whole Russia deal and him taking sides is something many can not ignore.


Although, that's kinda what you (read: Clinton) can expect from someone she helped maneuver into a corner...

Can you? Why? It's not like it helped him much to align himself with Russia (or Putin's politics and world views), or did it? It didn't even help him to help Trump get elected by leaking this stuff at the right moment in time, and it's not like Trump couldn't help him out (see Roger Stone, or Flynn soon).

He once projected himself as an anarchist. Independence was part of his persona, and that independence was what earned him trust. He lost all that. He did not lose it because the USA wanted to see him in jail; Snowden, for example, is still a respected person.


If that ever really happened the way it was presented.

He presented himself at least on twitter this way.

Not really.

"Tell a lie enough times and people will believe it".

Sadly, that's still true. Even in people who consciously know that it was all lies. That's the human mind for ya.


The last US presidential election cycle really soured public opinion on Assange.

After the Hillary email leaks, Democratic Party rhetoric turned against Assange. Many in the party hold him culpable for Trump's election.

I don't know if he was favoring one party or the other (as some Democrats purport), or if he was a Russian mouthpiece, or if he was just leaking the information that came to him unbeknownst to the state actors at play.

He lost a lot of the goodwill he built, and that put him in a very dangerous spot.


I think one reason for the "Assange is an asshole" hate is that there are many people all over the West who love and believe in the USA as a unequalled force for good, and they are filled with resentment towards Assange for destroying that belief.

They can't deny the terrible acts perpetrated by the US, so they try to reclaim some moral superiority by accepting and repeating the completely unsubstantiated (and in some cases, ridiculous) smears against Assange.


It is downvoted because Assange became the enemy of many people in the so-called American "left".

Which is bizarre, Assange just hates the US in general. Of course he’d want the least competent people running the country.

He’s much further left than any US democrats.


Saying he hates the US is really unfounded and you're perpetuating fake news

It’s not unfounded, I’m neither speculating nor repeating news reporting.

Are you suggesting that it's somehow a bad thing to hate the US and its war-loving people?


Probably bots.

Plausible.

More likely mindless people who are still bitter about the DNC being full of scumbags and getting called out for it

Probably because every time this comes up, it's the same one guy making absurdist claims, like that Assange's completely self-enforced time in the Ecuadorean embassy was illegal detention by the UK government.

For something to be self-enforced, doesn't one have to be able to walk away freely if one so chooses?

Not if there's a warrant out for your arrest, no.

No idea if its "the same guy", but this is a link to an interview with the UN special rapporteur on torture about his investigation into Assange.

That doesn’t sound too different from many prisons. Is it not detention if it’s “easy” to escape?

The "same one guy" is one way of describing Nils Melzer, here's another:

> The problem for the propaganda system targeting Assange is that Melzer is not just someone blogging on the internet; he is the UN Special Rapporteur on Torture. In addition, he is a professor of international law at the University of Glasgow and holds the Human Rights Chair at the Geneva Academy of International Humanitarian Law and Human Rights in Switzerland, where he has been teaching since 2009, including as the Swiss Chair of International Humanitarian Law (2011–2013). Melzer even speaks fluent Swedish. In other words, it is hard to imagine anyone better qualified to comment on the Assange case.

- https://www.medialens.org/2020/burned-at-the-stake-the-un-sp...


You might want to address the issue that his claims are absurdist nonsense rather than his CV.

That's your opinion. If you're aware of who Melzer is and his credentials (calling him "same one guy" suggested you weren't), maybe the onus should be on you to explain why you think Melzer's claims are absurd. Have you read/watched interviews with him where he's answered many questions about the Assange arrest and detention?

Yeah, the threat of being illegally and immorally extradited and potentially facing a death penalty and being stripped of human rights really makes his asylum "self enforced"

Thank you for your post and for the fascinating (and terrifying) link. I have no idea why you're being down-voted.

I was planning to suggest submitting the link as its own "story" but apparently it already was, two weeks ago.[0]

[0] https://news.ycombinator.com/item?id=22201381


There are things in the interview that are rather subjective (and things that are objective, and that I didn't know, admittedly). But this is ... quite misleading...:

> A constitutional democracy would probably investigate Chelsea Manning for violating official secrecy because she passed the video along to Assange. But it certainly wouldn’t go after Assange, because he published the video in the public interest, consistent with the practices of classic investigative journalism.

An investigative journalist wouldn't suggest to their source that they cover their tracks and implicate innocent third parties in the egressing of confidential data[1], wouldn't assist their source with cracking passwords, passively or actively ("I'm working on it, no luck yet"), and, in the event that Assange was doing neither and was just leading Manning on, wouldn't lie to their source while still encouraging them to break the law.

[1] by trying to crack someone else's password to use to get access to the same documents you're effectively _setting that person up_ as a patsy.


>An investigative journalist wouldn't suggest to their source that they cover their trails

An investigative journalist should absolutely try to help their source stay safe from danger.

>by trying to crack someone else's password to use to get access to the same documents you're effectively _setting that person up_ as a patsy

The account they were attempting to access was a generic windows admin account tied to nobody. A perfect patsy.


> An investigative journalist should absolutely try to help their source stay safe from danger.

I'm not sure "commit more federal crimes" is "staying safe from danger"

> The account they were attempting to access was a generic windows admin account tied to nobody. A perfect patsy.

This I did not know. My above point still stands, but at least no innocent third parties are being also dragged into things, absolutely.


>I'm not sure "commit more federal crimes" is "staying safe from danger"

If your options are "commit more victimless federal crimes" or "allow the army to punish you" it is.


Maybe that is why journalism serves up so many falsehoods nowadays? I don't even want to replicate your train of thought to justify the persecution of people making war crimes public, but the surface argument is quite weak.

We should all just blame the Russians, right? Please, in all cases where media organizations were implicated in spreading falsehoods, this is completely mundane.


It’s so far from the narrative we’ve been hearing for years now that even reading your comment triggers a “that’s just some batshit conspiracy theory” reaction in the moment.

Going through the link requires putting aside that reaction. I missed the link posted two weeks ago, and I am so glad you commented.

It’s really chilling and takes time to fully process.


And different countries are being fed different narratives, which is why discussions are so polarized and volatile.

Some countries are being fed both. Look at Ecuador.

Ex-president Rafael Correa decries the fact that Assange, who had been granted Ecuadorian citizenship in addition to his political asylum, was dragged out of a sovereign embassy by UK police. He notes that an IMF deal occurred somewhat concurrently with the surrender of Assange.

Current president Lenin Moreno claims Assange wouldn't clean up after his cat and smeared poop on the walls of the embassy. Moreno ejected Assange shortly after blaming WikiLeaks for the publication of photos and messages hacked from the Moreno family's smartphones.


Very well said.

And thanks for the interview link.


> a woman did NOT make a rape allegation against him in Sweden

The article you linked seems to contradict you. It says a woman accused him of surreptitiously removing a condom while having sex, which would count as rape in my jurisdiction.

The article also recounts testimony of him waking up a woman by penetrating her which would also count as rape in my jurisdiction.


> It says a woman accused him of surreptitiously removing a condom while having sex

Actually, she supposedly accused him of intentionally breaking it, but also said that she didn't notice until later, which casts doubt on the "intentionally" part.


Poking holes in a condom is an intentional act, and the woman wouldn't know. So as long as we don't know exactly what she meant by "intentionally breaking it", we can't really guess who is wrong or right.

I agree. More information is needed before any judgement could be made one way or the other.

Whatever happened with that condom, that definitely falls under "reasonable doubt."

Considering what happened to Assange after that, the first thing that comes to mind is the Gulf of Tonkin incident.


Whether that amounts to reasonable doubt is something that could be determined by a jury in a court of law following rules of evidence and procedure, if Assange went to trial.

The charges were dropped.

Because he successfully evaded capture for long enough that the case was no longer viable.

They could have interviewed him in the embassy but declined to do so.

It really isn’t their responsibility to accommodate a fugitive, is it.

It is absolutely the responsibility of the state to act fairly to all.

Which means not making special exceptions for some people, like Assange here.

Are you still claiming that his claims of being persecuted by the US government were unfounded, given that they have been proven right?

I deny that this is a valid reason to flee prosecution in Sweden.

Does that apply to all proven victims of state persecution or just Assange?

How on god's green earth does that "cast doubt" on anything? That makes absolutely zero sense.

Well, determining intent is difficult at the best of times, but it's hard to understand how you could determine the intent of something that you only noticed later.

The same way you do it many times a day in your everyday life?

I'm confused why you seem to so vehemently disagree with me but simultaneously will only speak in generalities.

Say for example I visit your house, and you notice many hours after I leave that a favorite ornament of yours is broken. How will you determine whether I broke it intentionally or not? If you watch me break it, you could make a reasonable judgement. If you only notice later, it's much harder. I don't understand what is controversial about this idea.


The article also explains that this was intentionally falsified by the Swedish police, and lists emails as evidence that this tampering took place and contradicts the woman's initial statement.

The email conversation is not evidence of tampering. The emails are from one interrogator emailing another on how to best record the witness statements in their system. See this thread for English translations: https://news.ycombinator.com/item?id=22209868

The witness statement is supposed to be verified by the witness at the end of the interrogation. Editing that statement without consulting the witness is tampering, no system would save statements like that.

But the statement wasn't edited... It was copy-pasted into another record, just like the mail conversation shows.

Also, in this supposed grand conspiracy to frame Assange, two crime detectives were dumb enough to leave their tracks in emails available to the public, due to Offentlighetsprincipen? https://en.wikipedia.org/wiki/Principle_of_public_access_to_... It makes zero sense.


The original statement was overwritten, we only have some vague references about what it says. But moving it to another record without the witness present is still tampering.

>Also, in this supposed grand conspiracy to frame Assange, two crime detectives were dumb enough to leave their marks in emails available to the public

The conspiracy was to get Assange into US custody, not to frame him for a crime, and that conspiracy has been proven to be true. The allegations didn't need to be flawlessly done.


The FBI does that.

It doesn't count as rape in any jurisdiction, but it is Orwellian newspeak at its finest.

You are leaving out a LOT of the story in this post, such as his connections with both the Russian state and the Trump campaign, or his personal history of sexual misconduct and awful behaviour towards women.

Did you actually read the interview? It lists and explains evidence that this history of sexual misconduct is, at least in large part, intentionally constructed by the public authorities.

There are other cases that are not related to these particular two at all.

could you point me to some article about that?

This is an article about his generally awful behaviour: https://theintercept.com/2018/02/14/julian-assange-wikileaks...


While both these articles go into details about unsavory behavior, there is no indication of (additional) sexual misconduct in them. Claiming that there are more cases despite that seems misleading. While I do not approve of Assange's actions and statements as detailed in the Intercept article, being a douche is not evidence of sexual misconduct.

It is not, but there is a strong correlation, and you should bear it in mind when evaluating how likely you think those accusations are to be true.

I am sure I have read other more direct accusations, but unfortunately I am not finding the sources at the moment.

I still think these articles are more than enough to cast some pretty strong doubts over the claims of his defences, and I think nobody who has read them should be quite so eager to run to his defence.


Assange can be a horrible person and also be targeted for his political actions. The Snowden documents confirmed that the US had been targeting him since around 2008.

Any Trump or Russia connections do not make the Manning leaks a crime.


True. But it also does not make him a good person, or worthy of support. He very deliberately turned Wikileaks into a propaganda tool for Russia. And their motives are no more pure than those of the US.

He has done massive damage by allowing his organisation to be used to promote the oppression and cruelty of the Trump regime, and he has thoroughly poisoned the waters for anyone else wanting to leak information.


>He very deliberately turned Wikileaks into a propaganda tool for Russia. And their motives are no more pure than those of the US.

Though I don't agree with this statement, that is not a crime and not what he is being charged with.

>He has done massive damage by allowing his organisation to be used to promote the oppression and cruelty of the Trump regime, and he has throughly poisoned the waters for anyone else wanting to leak information

I agree, seeing Assange get tortured for 8 years probably has caused people to become less willing to publish leaks. I don't see how you can blame the person undergoing such treatment for trying to escape it.


> Though I don't agree with this statement, that is not a crime and not what he is being charged with.

I am not interested in what is legal, I am interested in what is moral.

> I agree, seeing Assange get tortured for 8 years

Assange tortured himself for 8 years. He was free to walk out the door every hour of every day of those eight years.


"it also does not make him a good person, or worthy of support."

What's that got to do with anything? How about just upholding the rule of law? Justice isn't supposed to be a popularity contest.


What law is not being upheld?

As I said in cousin comment: his access to lawyers, right now. For starters.

1. He was not accused (by Swedish, UK, or US authorities) of being a horrible person, and if he were, they'd be laughed out of court.

2. Even if what you said was provably true (which, to stress, it is not. Go read the article I linked) then none of what you listed would justify stripping him of his fundamental rights like access to a lawyer, protection from cruel and unusual punishment, etc.

3. You totally fail to mention the rest of the context like... oh I don't know... that he published documents that clearly and without a doubt documented war crimes and other crimes that had wilfully been covered up.


He was never, for even a second, stripped of his fundamental right of access to a lawyer. He, by his own free will, chose not to take part in the legal process that would grant him his access to a lawyer. Nor was any cruel or unjust punishment enforced on him. No punishment whatsoever was enforced on him. Every thing that happened to him happened, again, of his own free will. He chose to lock himself in a house. Nobody forced him to do so. He could, every hour of every day of all those years, open the door and walk out.

Seriously? Read the article. Especially the later part about, how the UK and the US are... denying him access to his lawyers.

The article makes so many misrepresentations, I am not very inclined to take what it says at face value.

It tries hard to make it sound like he does not have access to a lawyer, but what it actually seems to say is that he is not able to bring documents to his cell from the lawyer that he does have access to.


The article is not completely truthful about what happened, at least according to the police report. The interviewee claims to speak fluent Swedish, so there's no way he'd miss it accidentally. Here's the short version of what happened according to the official police investigation[0]:

Assange and a woman have consensual sex. He wants to have sex without a condom; she insists he wears one. Eventually he agrees, and they have sex and then fall asleep in the bed. The next morning she wakes up to him penetrating her. She asks "are you wearing anything?" to which he replies "you". Her main reason for wanting him to wear a condom was fear of STDs, so she thinks there's no point in stopping now since the eventual damage is already done, so they continue. He comes inside her.

When she later asks the police if there's a way of forcing Assange to take an HIV test, the police decide that what she's describing constitutes rape, and start an investigation.

The whole problem is that she gave her consent under the strict condition that he'd wear a condom. When he started having unprotected sex with her while she was sleeping, he didn't have her consent. Call that what you want.

[0] https://www.magasinetparagraf.se/wp-content/uploads/content/...

Edit: OK, I see that that's mentioned later in the article. But the part about rewriting the statement isn't completely truthful either:

> "Now the supervisor of the policewoman who had conducted the questioning wrote her an email telling her to rewrite the statement from S. W."

What the mail actually says is that there are two hearings, but only one formal one, and they want the second one included too. The supervisor writes: "Make a new hearing. Paste the text in that and assign the hearing to the case. Sign the hearing."


He seems to think the official docs were altered. Either way, his point is that we are all focused on this trial and accusation, rather than the numerous war crimes that WikiLeaks exposed, and that was the intent all along. Mission accomplished.

That's not what's stopping anyone from investigating the war crimes, though. Even if Assange had never come to Sweden and had never been accused (guilty or not) of rape, those war crimes wouldn't have been any more investigated than they are now. Look at Khashoggi: nothing happened. Look at Russia's annexation of Crimea: there are some sanctions, but the current US administration is against them. Look at China: they have concentration camps for Uighurs. And so on.

I don't have any problem believing that Assange did have sex with a sleeping woman without a condom; it fits what I've perceived of his personality. And him locking himself in the embassy after having exhausted his legal means in the UK was his own doing. But that's all about one person; the war crimes are part of a system that doesn't care about him.

This whole mess is candy for conspiracy theorists. Remember that WikiLeaks fanned the flames in the middle of Pizzagate: https://www.reddit.com/r/IAmA/comments/5c8u9l/we_are_the_wik...


Stop using "conspiracy theorists" as a dirty phrase. It gains no support for your statement.

I wouldn't believe the police report. The timing of these allegations isn't coincidental (nothing in this case is), and it would not be the first time police have manipulated a witness into giving false or misleading testimony.

While not perfect Sweden is probably one of the most judicially sound countries on earth. I doubt that the Swedish police would intentionally falsify reports/statements to please US interests. This is just absurd.

Melzer has said he was of the same opinion until he started looking into the case.

Don't underestimate how closely tied many European countries are to the United States. Swedish security forces snatched two asylum seekers (that we know of) off the streets in Stockholm in 2001, and handed them over to the CIA. They were immediately flown to Egypt, where they were tortured. All this was done in violation of both Swedish and international law.

Melzer has said in interviews that countries like Sweden have strong legal protections, up until the point at which they consider their own security interests to be at stake. At that point, you can expect all sorts of violations of procedure and law, dirty tricks, etc.


>>The whole problem is that she gave her consent under the strict condition that he'd wear a condom.

It's called "stealthing", apparently, and it looks like it could be an offence under UK law as he only had "conditional consent": https://www.kingsleynapley.co.uk/insights/blogs/criminal-law...


I believe stealthing generally means taking the condom off during sex without her noticing it, but the idea is the same. Consent given under a certain condition is only consent while that condition is met.

I was trying to find out if they were a reputable news organization, so I did a search, which wasn't very helpful. So then I just started clicking around the website. If you go into Top-Storys, it even looks like a link aggregation website.

Finally, using Chromium, I was able to translate some pages, and I guess they are a legit news organization, though quite young. Their first publication was in January 2018. From reading the Wikipedia article, it seems they operate a bit like LWN, in that direct links will give you the articles, but you don't get access to all of the direct links unless you're a subscriber.

https://de.wikipedia.org/wiki/Republik_(Magazin)


> The article is not completely truthful about what happened

The article discusses the exact things you're saying it conceals.

> The interviewee claims to speak fluent Swedish, so there's no way he'd miss it accidentally.

Before accusing the interviewee of dishonesty, perhaps you can read the parts of the article where the interviewee discusses the very things you're saying he omitted. Did you just miss these parts of the article accidentally, or did you intentionally misrepresent the article?


This comment breaks more than one of the site guidelines. Please make your points without crossing into personal attack.

https://news.ycombinator.com/newsguidelines.html


The above comment attacks Melzer as dishonest for omitting information, and says there's no way Melzer could have simply missed the omitted information.

I repeated the commenter's own words back to them, showing the irony that they (the commenter) were the one who omitted information, in a way that is very difficult to explain if they had read the article. I think that's legitimate.


Please make your legitimate points in ways that don't break the HN guidelines. Personally attacking the person you're replying to isn't part of the legitimate point, and is easy to factor out if you want to. For example, your second paragraph there could be reduced to "the interviewee discusses the very things you're saying he omitted", and to make it more helpful, could have included specifics about where and how he discusses those things.

> in a way that is very difficult to explain if they had read the article

It's easy to explain: people remember completely different things in the articles they read, based on their pre-existing feelings and assumptions—just like we see very different things in the world in general. When your feelings and assumptions aren't the same, it's too easy to jump to conclusions about other people, especially online, where we have so little information to go on, and inevitably make up stories about each other to fill in the blanks. (Not talking about you personally—we all do this.) Because these effects lead to stuck, repetitive, and ultimately nasty discussion, the site guidelines try to mitigate them, and commenters here need to follow those.

https://news.ycombinator.com/newsguidelines.html


Someone writes a long comment accusing the interviewee of intentionally omitting information, and forgets that the interviewee talked about that information at length?

My working hypothesis is that the commenter probably only skimmed the article, if that. But the really unforgivable part is that that incorrect comment was then voted to the top of the thread. It looks like a lot of people are voting before reading the article.

I'll keep within your guidelines, but my intention was just to point out the irony of the commenter falsely accusing the interviewee of doing something that the commenter themselves was doing.


Rape falls under public prosecution. That means that the prosecutor is forced to press charges whether the plaintiff wants it or not. According to the two women he did some questionable things that the prosecutor thought could be enough to convict him. The two women's stories are out in the open while Assange instead opted to flee the country and has not commented on the allegations.

What I think is a shame is that you are accusing these two women of being part of a smear campaign and/or a secret conspiracy to frame Assange when there is exactly zero evidence for that theory. We don't know what happened in those bedrooms - it's word against word. But the words of two people generally weigh heavier than the word of one, especially when that one refused to be questioned by the police.

EDIT: It's fine if you downvote my comment or flag it. It's still true.


Are you that naive? Evidence and witness testimony can be easily fabricated by authorities (it happens all the time), especially when the authorities are out to get someone and have political and personal reasons to do so. What's your reasoning for believing these women's testimony, rather than it being facts made up by the police?

I don't understand what you mean. The women who have testified against Assange are alive and well. One (Anna Ardin) is a public figure and both are active in Swedish left-wing politics. The idea that these women would be part of a conspiracy to frame Assange is ludicrous as they both supported Assange's cause. Their testimonies are public knowledge and if they didn't agree with what is written in the records, it would be easy for them to protest it.

Also, you have to offer me some evidence that Swedish authorities can easily fabricate evidence and witness testimony and that it happens all the time. Not that I know how it is relevant since the witness testimonies in Assange's case are clearly not fabricated.


You cannot prove that the evidence is not fabricated, that the witnesses are being truthful, or that they were not pressured into their statements. You chose to believe what the authorities put out there; that is your choice, but it does not prove anything except that you believe the authorities. People say things all the time, and that does not make them true; just because two women said something does not make it more true than if only one had said it, and looking at the circumstances, it seems like a lot of powerful forces are at work against Assange. You think it is not possible at all for "evidence" like this to be untruthful, even though damaging Assange's reputation would be of benefit to some? You choose to look at the world in a certain way, believing that things are good and that authorities can be trusted. But if Assange and Wikileaks have taught us anything, it's that that premise is not true. It's not a conspiracy; it's what's happening. You just don't want to see it.

Can you prove that they were? As said, these women are alive and well and still active in left-wing politics. If they didn't agree with their recorded testimonies they would have had ample time to set the record straight! Sweden isn't a police state like China.

What is your reasoning for believing they WERE made up?

It certainly isn't evidence of any kind.


The interview in the top-level comment seems to cover that.

The interview with the guy who is making a claim that it's all a conspiracy based on... him not knowing how the Swedish justice system works?

"Why would a person be subject to nine years of a preliminary investigation for rape without charges ever having been filed?"

Because in Sweden, the suspect is first interviewed, and then charges are filed, and Assange himself has chosen to not make himself available for interview?

That is not a great way to establish credibility, and the tone he says this stuff in is entirely that of someone who is pushing an agenda with little regard for truth.


I was thinking of the bit where he says

"Now the supervisor of the policewoman who had conducted the questioning wrote her an email telling her to rewrite the statement from S. W."

and then provides physical evidence of that.


That, as far as I can tell, is a misrepresentation.

>Assange himself has chosen to not make himself available for interview

*Except for appearing at the police station and giving an interview with a policeman, then spending three weeks scheduling multiple meetings that the prosecution deferred, getting written permission to leave the country, agreeing to return if assurances were made that he would not be sent to the US, and offering to conduct the interview in the UK or over video link, as allowed by Swedish law.

Besides, why would they organize such a grand conspiracy case only to drop the investigation shortly after a US ally obtained custody of Assange?


>What I think is a shame is that you are accusing these two women of being part of a smear campaign and/or a secret conspiracy to frame Assange when there is exactly zero evidence for that theory.

Those women appear to be an unwilling part of the smear campaign, it is not a shame to call attention to this fact. The records provide ample evidence for this theory, such as their direct statements, the ordered rewrite of their complaints, and the media reporting statements the day before they were made.

Even if they have legitimate complaints against Assange, they are being used as part of a smear campaign.

>The two women's stories are out in the open while Assange instead opted to flee the country and has not commented on the allegations.

People who flee the country don't typically wait until they are denied a visa and then check in with the authorities immediately before leaving. Assange has repeatedly offered to comment on the allegations; he just wanted assurance that they wouldn't send him to the US. Sweden refused to give it.


If you carefully read Nils Melzer's report(s), you'll see that both women are unwilling victims of what's going on.

Wow... I couldn't disagree more with your EDIT. This is flat out untrue. I didn't say they were smearing him, and neither does Melzer. Did you read the article?

He’s not being extradited for rape, he’s being extradited for being a Russian asset. The rape allegation being falsified could be completely accurate and still irrelevant to his crimes. He stopped being protected as a journalist a long time ago.

That link whitewashes Assange to an incredible level. Snowden was a hero; Assange is a Putin lackey who got caught.


It's sad to see this conspiracy theory being voted to the top.

For HN, the only cause more motivating than functional programming seems to be accusing women of lying. With, in this case, often the gall to claim that this would somehow "protect real victims".


"For HN, the only cause more motivating than functional programming seems to be accusing women of lying."

That's just dumb.


Okay, I have to say I'm sorry for the phrasing of my comment. Not because I think any of it is untrue, but because I could have foreseen that the "NOT" part is not explicit enough about what I mean. So here goes...

What I did mean (and I take this straight from the article) is that at least one of these women explicitly refused to sign a statement speaking of rape, because that was not what she came to report, or what she thought happened. She texted as much to a friend, too. Yet the police not only went ahead with an investigation based on a statement that the only witness had refused to sign, but they also altered said statement afterwards to re-open the case, after the first statement had been regarded as a credible account of what happened, yet not an account of a crime. So I'm explicitly not saying that this first woman lied. She told the truth, told her story, and in her story, there was no rape. A prosecutor in Stockholm agreed with her, as did Assange. So this is the word of the police against the prosecution, the defendant, and the purported victim and only witness. Hence my phrasing of "a woman did NOT make a rape allegation". I hope that clears that up.

And note the singular in that sentence up there. The second woman is different of course, and at least she clearly stated a rape allegation. I'm not the one to pass judgement on that. But the people who are, and who have seen the evidence, seem not to be convinced. Further, by the point where the second allegation even enters the picture, the case had already completely derailed, so whether or not the breaking of the condom was intentional, and whether or not such a condom could credibly hold no DNA whatsoever, is pretty moot.


Assange attempted to appeal his extradition from the UK. One way to challenge that is to show that what you are being accused of isn't a crime in the UK. The UK High Court judgement is informative. What Assange is accused of (the 4 counts) would be crimes in the UK.

https://www.bailii.org/ew/cases/EWHC/Admin/2011/2849.html

Just one bit from your linked article: “But in the morning, according to the revised statement, the woman woke up because he tried to penetrate her without a condom.” I'm sorry, Assange tried to penetrate a woman who was asleep?! That's not right.


The accusation here seems to be that the revised statement was simply fabricated by the Stockholm police without any input from the woman.

> I'm sorry, Assange tried to penetrate a woman who was asleep?! That's not right

Though the authenticity of this claim is dubious, the correct response would be to send him to Sweden for trial. That is not what is likely happening; he is being targeted for his political actions.


did they also lock the accounts of US government agencies and representatives?

No.

Why would they?

They locked the account of a pro-Assange org, so it seems that they should be applying such decisions equally. After all, the government isn't guaranteed free speech - that's a right of the people.

> so it seems that they should be applying such decisions equally

No, why would they do that?

> After all, the government isn’t guaranteed free speech

Twitter is a private organization. There is no such thing as "free speech" in the private world.


Twitter closes Twitter account.

Twitter owes you nothing. Don't use them.


I don't.

But I also don't like Facebook's forced news.

Reddit is decent as long as the subreddit isn't big enough to be astroturfed... But big companies can do searches...

This place is probably the worst out of the social networks I use for censorship.

Snapchat is good


> But I also don't like Facebook's forced news.

There's nothing forced about FB. You don't have to use FB, it's that simple.


No but given the place it occupies in today's world, everyone's entitled to their opinion on FB and the harm it may cause, whether they use it or not.

> given the place it occupies in today's world

I don't use FB. FB occupies no space in my world.


It's not enough for me not to use them. Everyone else must be made aware of what's going on so they don't get taken in by the manufactured false consensus.

So I will complain.


WL should create a gab.ai account.

Twitter is a shameless, shit company.


It is no longer locked

> He flew with Scandinavian Airlines from Stockholm to Berlin. During the flight, his laptops disappeared from his checked baggage.

Laptops in checked baggage? Seriously? I really cannot believe that Assange wouldn't know better than this.


I'm more interested in how he managed to check them in. Where I fly, I can't put anything with a battery into my checked bags...

it's pretty sad to see otherwise extremely bright people, who are able to spot censorship in China, Russia etc, and who normally would stand up for human rights, applauding or defending this Twitter shutdown of Wikileaks. It is exactly the goal of propaganda to let people spot its effects when it comes from the enemy, while making it invisible when done by their own state.

"This is America" -- sings Childish Gambino (... actually this is Everywhere, thanks to global reach of US propaganda and the West alignment with US language in policy)


All I would say on this is, don't cheer when your political opponents get deplatformed, and then complain when it happens to your side of the fence.

> Twitter locks WikiLeaks account days before

Was there any story about a WikiLeaks account on Mastodon?


Worth mentioning that Brazil charged Intercept co-founder Glenn Greenwald with cybercrime charges similar to Assange's. He's the guy who, along with Laura Poitras, first met Snowden in HK.

Even if it looks like they have "postponed" prosecution for now.

https://theintercept.com/2020/02/06/glenn-greenwald-intercep...


unbelievable

Big thanks, Twitter! You never surprise me with any positive actions. So sad that we have big tech companies like that existing today.

I ditched Twitter a long time ago... too much censorship and negativity from that company.

