A lot of this comes back to money. There’s a propaganda network measured in billions of dollars annually which reliably amplifies these messages, and I think that’s probably the angle to pursue. A lot of this stuff would stay on 4chan if people weren’t pouring money into broadcasting it, and many advertisers will cave if asked why they’re funding white nationalists.
Leave them on mainstream forums, where their words can recruit from a far larger population? The goal is to confine their recruiting arm to smaller and smaller corners of the internet.
The white-nationalist ideas of "White Genocide" and "The Great Replacement" hold a great deal of power. The more this nonsense is discussed in the mainstream, the more believers it wins over.
Ah, we’ve had nonsense like that all over the internet for decades.
Liberal pundits are saying similar things about conservatives ... elected officials call for disruption in the street, 4chan has crazy stuff, yadda yadda.
We have incitement and conspiracy laws. We have an FBI and Secret Service that really take this stuff seriously and are well financed. If anything, these posts make it easier for them to keep an eye on the tiny percentage of loudmouths who might actually do something.
There’s a cost to all this ease of expression, to be sure. Maybe people can be radicalized more easily, and that sucks.
But we’ve built this whole society on erring in favor of more free expression, and by and large this value has been a tremendous success, with the downsides absolutely crushed by the upside.
Nothing will be an unmitigated good, but in this case it’s pretty clear where the balance lies, and what these companies are doing, as we speak, isn’t the winning choice for anyone involved.
They could just use their ranking algorithms to downrank unwanted content into insignificance...
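To make that concrete, here is a minimal sketch of what such downranking could look like (in Python; every field name, the classifier, and the 0.01 penalty are hypothetical, not anything the platforms have disclosed):

    # Toy feed ranking that demotes flagged content instead of deleting it.
    # All names and thresholds here are made up for illustration.
    def rank_feed(posts, classifier, penalty=0.01):
        def score(post):
            s = post["engagement_score"]        # e.g. predicted watch time
            if classifier(post["text"]) > 0.9:  # classifier: text -> P(unwanted)
                s *= penalty                    # still visible, just never amplified
            return s
        return sorted(posts, key=score, reverse=True)

The design choice this illustrates: the content stays up, it just stops being recommended.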
This will not quash the problem with nationalists; it will just drive them to other platforms, where they will become more radicalized because they only hear their own voices.
I think deplatforming is exactly what we need here: neo-Nazis and white supremacists are people who rely on easy-to-use, discourse-amplifying platforms.
The more you make it difficult for them to gather virtually, the more you’re breaking the spell that keeps them together.
Before social media, a physical neo-Nazi or white-supremacist chapter had A LOT of work to do to radicalize people, and it was very easy to police and keep under control. With online organizational tools, radicalization became easier and harder to patrol.
Do you think a random Arkansas soccer mom would ever have joined anything as crazy as the QAnon conspiracy if she had to attend a QAnon chapter meeting in person, instead of participating in an online forum while doing the laundry?
These people are already on openly visible platforms, such as 4chan and 8chan, where the Christchurch terrorist was radicalized.
By giving their hate and propaganda exposure on mainstream (social) media, you give them a platform to spread their ideology to more people. You have to realize that these people do not care about the truth, they do not care about debates, all they care about is being heard and seen.
Debating them does not work, it only gives them more exposure.
The only way to deal with people who use these sorts of tactics is to deny them the "debate", deny them a platform and deplatform them from whichever platforms they already have. Shut off their access to the people they are trying to radicalize.
Obviously this does go somewhat against the "rules" for polite conversation and debate that we've all been taught. But there is no debate to be had with fascists, everything they say and do eventually boils down to "we will kill you". That is not an ideology you can reason with.
Are we? Because articles such as this seem to blame 8chan and 4chan (and never mention the rest of those sites, which host things like the /g/ technology board and the sites' transsexual and gay communities). Back in the 90s they blamed Doom, heavy metal, and rap music. In the 1950s and 60s they blamed horror/slasher comics.
As for the claim that giving them a platform and an audience brings more followers: firstly, this is terminology used to justify soft censorship. Secondly, in the UK back in 2009 Nick Griffin (a notorious antisemite/racist and leader of the BNP) was allowed to speak on Question Time (a very popular show on the BBC). Afterwards we didn't hear from the BNP again, because the ideas were exposed for what they were. The BNP is effectively dead in the UK.
Almost every time one of these racists is actually engaged, the vast majority of the population rejects their message. Every time the ideas are repressed they resurface, because these people create their own echo chambers, and the taboo makes the ideas seem sexy.
What about Fox News? AM radio? These are bastions of radicalization, but they don't let just anyone come on and say anything. At the end of the day, the sort of rhetoric these groups deploy is taught in university communications classes as a way to exert influence. It's all just propaganda in the end, and that can come in the form of a pamphlet, a meeting in a town hall, a talking head on TV, or a tweet. Social media is just another avenue for propaganda to manifest, just as the printing press was.
Who on earth says they need or deserve a "public channel to express themselves"? The world seemed to get by just fine before social media existed, and I have no doubt we'd all be perfectly fine if it were gone tomorrow. If the white supremacists are stuck going back to physically mailing letters around and struggling to organize, that's a net win for society...
Where exactly did you come to the conclusion that they're expressing themselves peacefully? So far we've had the Capitol stormed and a thwarted attempt at kidnapping and murdering a state governor. Neither of those events is anything resembling "peaceful".
OP here, just chiming in to say that you are exactly right about my point and what my objection would be to that study. There is a distinction between the Internet and specific sites like 4chan, 8chan, Reddit, YouTube, etc. You can accidentally stumble onto hate on those sites. The hate there is both normalized by the presence of other content and can be framed in an enticing and seemingly logical way. That isn't true for the Internet at large. To repeat myself, you can't really stumble onto the Daily Stormer or be accidentally recruited into their ranks. The NYT article I linked to in my first post details how that type of accidental radicalization can happen on YouTube.
And like you said, there is nothing internet-specific about this distinction. The same thing applies if white nationalists are recruiting in the physical world. There is a lot more potential for recruiting new members at the local bar than there is at a KKK rally. Some of us just want the bar owner to stop letting those white supremacists use the bar as a recruiting ground, because they are turning violent.
It's a real problem. Suppressing such content is the easy part, but it just goes elsewhere, where it is almost completely unchecked; it proliferates in much darker circles as a result, and we have even less visibility into its true volume.
Maybe there should be more of an effort to reduce peoples' incentive to engage in that sort of behavior in the first place. Why do people join violent extremist groups? Why do people engage with CP? Why do terrorist groups exist? Is it just human nature? Is it a fact that with 7+ billion people we are destined to have millions of people engage in this behavior?
De-platforming horrible material is better than nothing, but it feels like whack-a-mole
This is a very hard problem that YouTube and Facebook made for themselves by becoming the world’s largest advertising platforms. They depend on engagement for ad revenue, they designed world-class algorithms to promote this engagement, and it turned out that extremist content happens to be very engaging.
And so the problem is this: building an algorithm that blindly promotes whatever keeps users on the site, for all its complexity, is a far more tractable problem than building a system that can avoid amplifying content that incites violence. In the meantime, they throw armies of people at the problem, moderating content and responding to user reports, but it’s a losing battle.
They had the technology to create a monster, but don’t have the technology to stop it.
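A toy sketch of that asymmetry, with entirely hypothetical names (this is not any platform's real code): the engagement objective fits in a couple of lines, while the "safe" version hides an open research problem inside the classifier it depends on.

    # The tractable problem: promote whatever keeps users on the site.
    def rank_by_engagement(posts):
        return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

    # The hard problem: the same sort, gated on a classifier
    # (is_violent: text -> bool) that must hold up across languages,
    # irony, dog whistles, and adversarial rephrasing, at planetary scale.
    def rank_safely(posts, is_violent):
        return sorted(
            (p for p in posts if not is_violent(p["text"])),
            key=lambda p: p["predicted_engagement"],
            reverse=True,
        )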
Even though I do believe that insane right wing propaganda going unchallenged is an actual problem, I agree with you that it's difficult not to view this with worried skepticism and the expectation of further backlash.
I tend to think there is no real solution to that sort of thing except being regularly exposed to opposing viewpoints. I definitely don't think a government or a private organization should take it upon themselves to decide what is crazy and what is okay.
Instead of talking about pedophiles, they should talk about racists and far-right extremists using chat platforms; that would rally a big part of the press in support:
> White supremacists openly organize racist violence on Telegram, report finds
One obvious answer is revenue. Getting money in is a real struggle for extremists, who tend to get banned from traditional platforms. Think of it as paying membership dues.