
“Media Matters created an alternate X account and deliberately followed sensitive accounts to curate posts and get advertising to appear on the account’s timeline to then misinform advertisers about the placement of their posts. These contrived experiences could be created on any social media platform.”

“A spokesperson for X, Joe Benarroch”

I’m inclined to believe this. How the hell else would Nazi content show up in someone’s feed without them first engaging with that sort of content?

I once used Twitter to look at profiles of criminal factions in Rio, and naturally my feed instantly began showing branded content alongside crime. The engineers at Twitter could certainly look into mitigating the issue, preventing ads from showing up when the feed has unlawful posts alongside them. But how would they do that without surveilling users? It seems unfair to attack Musk and the company in that context. It’s a problem inherent to the user side of the equation, and trying to control it would only create more problems.
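
To be concrete about what I mean by preventing ads next to unlawful posts: mechanically it would be an adjacency check at ad-serving time. A minimal sketch in Python, assuming posts already carry a moderation label from some upstream pass (all names here are invented):

    # Hypothetical adjacency check at ad-serving time. Assumes each
    # post already carries a moderation label from an upstream
    # classifier; the labels, fields, and window size are invented.

    UNSAFE_LABELS = {"hate", "violence", "illegal"}

    def eligible_ad_slots(timeline, window=2):
        """Yield slot indices where an ad may be inserted: no post
        within `window` positions of the slot is flagged unsafe."""
        flagged = [i for i, post in enumerate(timeline)
                   if post.get("label") in UNSAFE_LABELS]
        for slot in range(len(timeline) + 1):
            if all(abs(slot - i) > window for i in flagged):
                yield slot

The check itself is trivial; producing that label in the first place is the hard part, and that’s exactly where the surveillance question comes in.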

I was totally indifferent towards this Media Matters thing; now I think they’re full of shit. I guess this will backfire on them.




Uh, if this claim is true… so what? Isn’t it still exactly what they are saying?

“Ads are being shown alongside antisemitic content”

The spokesperson confirmed their statement; he didn’t deny it.

And I’m not sure how it could backfire. Presumably a spokesperson admitting that the statement you are suing over is literally true would count against you in any lawsuit.

Just another screaming tantrum from the toddler in charge, nothing else.


It’s not about whether it happened; it’s about how MM intentionally misrepresented the facts to hurt the company’s image. It has nothing to do with Musk, or whether one likes or dislikes him. I despise him, for that matter. But this is simply unfair. This is absolutely not the default experience for regular users, and the spokesperson is right. Does it beg a review of the ad-placement mechanics? Sure. Is MM interested in working together with the platform to improve the situation? I doubt it. Did they leverage their fabricated evidence to promote a scandal? Definitely. Does this sound like an ethical move to you?

Does it sound like an ethical move to host this kind of content on your servers at all? It isn’t default behavior, but why is it behavior that can be achieved at all?

Remember that social media platforms are in full control of what they tolerate. It’s their servers. They have all the data.

It doesn’t matter how “scummy” MM is if X has no ability to remove objectionable content or at least mark it for non-ad placement. Their failure to moderate is not the advertisers’ problem. What this shows is that X has insufficient control and can’t implement basic standards.

Then on top of all this you have a shadow CEO who can’t control his impulse to publicly agree with anti-Semitic content.

That, my friend, is what you call an unforced error.

Advertisers are only going to get more demanding with media platforms as AI technology advances. There won’t be an excuse for being unable to detect hate content. They’ll be expected to have content evaluation fully automated.
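
To give a sense of what “fully automated” might look like: score every post at ingestion and attach a brand-safety label before it reaches any ad-eligible surface. A hypothetical sketch, where score_text() and the thresholds are stand-ins rather than anyone’s real pipeline:

    # Hypothetical ingestion-time moderation pass. score_text() is a
    # stand-in for whatever hate-content classifier a platform runs;
    # the thresholds are invented for illustration.

    REVIEW_THRESHOLD = 0.5   # queue for human review
    BLOCK_THRESHOLD = 0.9    # exclude from ad-adjacent placement

    def score_text(text):
        # Crude keyword stand-in, only to keep the sketch runnable;
        # a real system would call an ML model here.
        bad_terms = {"example_slur", "example_threat"}
        hits = sum(term in text.lower() for term in bad_terms)
        return min(1.0, hits / len(bad_terms))

    def moderate(post):
        score = score_text(post["text"])
        if score >= BLOCK_THRESHOLD:
            post["label"] = "hate"
        elif score >= REVIEW_THRESHOLD:
            post["label"] = "needs_review"
        else:
            post["label"] = "safe"
        return post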


I don’t disagree with anything you said; it’s just a different subject from the event being reported. If we were to talk about content moderation, the first thing I would address is that the CEO presents himself as an absolutist with regard to freedom of speech. Whether that is a genuine philosophical stance or just an economic strategy (hate content drives traffic while being expensive to moderate) is a matter of debate, but from what I gather it’s probably the latter. That’s why I think he certainly won’t be the one championing safe online environments. In any case, “detecting hate content” is the fastest route to outright censorship, since the definition of “hate content” will always come from those in power; it can be anything someone powerful doesn’t like. How do we manage that? Where do we draw the line? Honest questions.

If you run a social media site and you want mainstream businesses to advertise alongside posts on your site, you have to engage in content moderation. It’s hard, but advertisers demand it. Minimally, you must identify content of the type that advertisers don’t want to be associated with.

As to where you draw the line, it’s defined by advertiser preferences. I think YouTube’s ‘advertiser friendly content’ guidelines are a solid model. They have been evolving over time, and they provide substantial detail about the kind of content that is (and that isn’t) considered advertiser friendly.
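
Concretely, that model amounts to a mapping from content categories to monetization tiers, with the most restrictive match winning. Sketched as data (categories paraphrased, not the actual policy table):

    # Rough shape of an advertiser-friendliness policy table, loosely
    # modeled on YouTube's published guidelines. Categories and tiers
    # are paraphrased for illustration, not the official list.

    POLICY = {
        "hateful_content":   "no_ads",
        "graphic_violence":  "no_ads",
        "heavy_profanity":   "limited_ads",
        "sensitive_events":  "limited_ads",
        "general":           "full_ads",
    }

    RESTRICTIVENESS = ["no_ads", "limited_ads", "full_ads"]

    def monetization_tier(categories):
        """The most restrictive matching tier wins."""
        tiers = [POLICY.get(c, "full_ads") for c in categories]
        return min(tiers, key=RESTRICTIVENESS.index)

The point is that the line isn’t a single global “hate or not” call; it’s a graded policy advertisers can opt into.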


I was reading Nazi tweets just last night, because that's who Musk was chatting with. That's the opposite of "hard to find" on Twitter.
