If a post is being pitched as news, then some amount of fact checking needs to take place. Traditional media has done this forever, identifying and verifying sources of information, because it has a reputation to uphold. In this day and age, I think the responsibility falls on Facebook, or whoever is publishing the "news". Facebook could use a standardized, easily recognizable visual language to indicate how "newsworthy" a given post is. Heck, we now have an animated language for reactions; why can't we have this?
I would be cautious about outright filtering content without giving users some way to set thresholds. It feels like handing far too much power to the algorithm. I want to make the final (informed) decision about what to read and what to believe.
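To make the threshold idea concrete, here's a minimal sketch (hypothetical names and scores throughout, not any real Facebook API): each post carries some upstream credibility score, and instead of silently filtering, the feed labels posts that fall below a cutoff the user controls.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    credibility: float  # assumed upstream score: 0.0 (dubious) to 1.0 (well sourced)

def annotate_feed(posts: list[Post], user_threshold: float) -> list[tuple[Post, str]]:
    """Label each post rather than hiding it, so the final
    read-or-skip decision stays with the user."""
    labeled = []
    for post in posts:
        label = "flagged: low credibility" if post.credibility < user_threshold else "ok"
        labeled.append((post, label))
    return labeled

feed = [Post("Moon made of cheese", 0.1), Post("Election results certified", 0.9)]
for post, label in annotate_feed(feed, user_threshold=0.5):
    print(f"[{label}] {post.title}")
```

The design choice here is labeling over removal: the algorithm only surfaces a signal, while the user both sets the cutoff and decides what to do with flagged posts.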
Have they? I have to admit I used to think this too. But in the past few years I've had quite a few occasions to talk to the media, because they were covering a topic I happen to be (publicly) knowledgeable about. I've been quoted in stories in various highbrow outlets, and I've been approached by journalists many times.
However, I have never been approached by a fact checker. If news firms were routinely doing fact checks, I'd expect to see:
1) Far fewer obvious mistakes that could be caught with 60 seconds of Googling
2) Fact checkers emailing me to cross-check stories in my area of specialism
But I see (1) a lot and (2) never. I've also never heard a journalist refer to fact checkers or tell me to expect my statements to be fact checked, nor seen any evidence that these people exist at all.