> People are leaving the site because of it, and advertisers are pulling ads
This is arguably where the real problems start. I doubt many people expect Twitter to host actually illegal content. However, as long as a social media site is beholden to advertisers, it's going to have an existential requirement to pull certain content, irrespective of how they or anyone else feels about those restrictions.
Until Twitter can figure out a way to fund itself other than by advertising, the idea of it being a free speech platform is a pipe dream.
Advertisers are responding to the expectations of the public. "Brand safety" is an issue of public perception. If people didn't care about brands being side-by-side with white supremacist hate speech, there'd be no push from advertisers to remove it.
So my hypothesis is that, indirectly, the advertisers are helping represent the interests of the vast majority of users who don't want to see that crap, but are unable to have their voices heard over the clamouring minority who want to tweet out swastikas and misgender trans people for the lulz.