1. 'Misleading Information' is a targeted category, specified only by manual intervention, which leaves the door wide open for selective and biased enforcement.
2. The 'context' they put in the 'Trending' section is often highly editorialized and skewed. An example from yesterday:
> Celebrities · Trending
> Mel Gibson
> People are expressing disappointment that Mel Gibson has been cast in a new film.
> 18.2K Tweets
Mountain out of a molehill, selectively chosen, click-bait editorializing ('people are expressing') - none of this is helpful, and doubling down on it as the only content to put in that pane seems quite foolish.
> Transparency: We don’t censor, bias, filter, or downrank results (unless legally required to)
Is it me or does the description not match the bullet? To me 'transparency' would be 'we detail on this page exactly how we bias/filter/downrank results (except where legally required not to)'.
A second point, somewhat implicit in the first: how can a search engine be said not to bias/filter/downrank results? That's the whole job. I think what they mean is that they have some generic algorithm, with no special-casing going on to further political or social ideals, etc.?
Circling back to the first point: I think you can even do that (if you want) and still claim transparency; you just have to say 'we aggressively downrank anything to do with XY rights' or whatever. It might not be what everyone wants, but it would be transparent.
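To make that concrete, here's a minimal sketch of what 'transparent bias' could look like mechanically. Everything in it (the topic list, the penalty weight, the scoring function) is hypothetical, not any real engine's internals:

    # A generic relevance score plus one explicitly documented
    # downranking rule; all names here are made up for illustration.
    DOWNRANK_TOPICS = {"example-topic"}  # the publicly disclosed list
    PENALTY = 0.5                        # the publicly disclosed weight

    def final_score(doc: dict, base_relevance: float) -> float:
        """Generic relevance, cut by the disclosed penalty for listed topics."""
        if DOWNRANK_TOPICS & set(doc["topics"]):
            return base_relevance * PENALTY
        return base_relevance

    docs = [{"title": "A", "topics": ["example-topic"]},
            {"title": "B", "topics": ["other"]}]
    ranked = sorted(docs, key=lambda d: final_score(d, 1.0), reverse=True)
    print([d["title"] for d in ranked])  # B first: the penalized A drops

The rule lives next to the ranking and is published with it, so users can see exactly what gets demoted and by how much.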
> Admitting the possibility of bias is very different than admitting bias in every single trending content decision.
They don't have to admit bias in every content decision. They have to admit systematic bias in content decisions in the aggregate, which does not necessarily imply bias in every content decision.
While it is technically, theoretically possible that their system is open to the introduction of these biases but does not actually exhibit those biases in the aggregate, it is astoundingly unlikely.
From reading that, the only criticism that seems sane and fair is that they should have chosen a different string than "trending"; "recommended" perhaps, or "curated". My expectations that I'm going to receive a much-needed education in some topic from occasionally clicking on the "trending" link on Facebook aren't especially high; perhaps yours are a little misplaced, if present?
Also, I note that the article states that other "curators" denied such a bias exists. Facebook have denied such activities. OK, being sceptical, they would say that. But it could be bullshit spewed by an ex-employee. So, where's the study? The data? How do we show whether or not there is a bias, assuming you care enough to investigate? It's just more conspiracy theories and "typing", isn't it?
Wow, that is horrible. And equally horrible is the fact that you don't seem to find this problematic at all. Whether the features were/are regularly used is not really relevant: "algorithmically selected" implies a certain non-bias in the process, and if that is not how it really works, then one should be much, much more skeptical about the validity of any "trending" topics.
> How do you make the jump from disapproval of DDG's standard for misinformation and how it's applied to the idea that they shouldn't be involved in filtering of misinformation at all?
It is quite easy to disapprove of their current standard on record because, according to it, misinformation can only come from a "Russian" source, and we all know there is much more misinformation in the world than that. Can we agree on this?
I am not saying that a search engine shouldn't be involved in filtering of misinformation. On the contrary, I think that DDG (and any other search engine) should absolutely be in the business of filtering all misinformation they can. Key here is "all".
But being selective, and in this case based on a particular political view (and I use the word political in the context of world politics), introduces a bias which may negatively affect its users, without any particular benefit.
How much of the uproar over this is about implementation vs. intent? Disinformation isn't worth returning when the user is seeking information. If you are looking for the health impacts of soda, you probably don't want Coke writing your answers.
I'm all for neutral platforms, but it feels like you either opt out of editorializing and end up with trash driven by SEO, or you editorialize and marginalize some content producer or consumer. Any play feels like a losing move from the business's perspective.
Not just that, but bias: they intend to promote certain content and demote other content, and that introduction of human bias essentially dilutes the effectiveness of the algorithm; you don't get what you are searching for per se, but someone's opinion of what you should get.
Ouch. Some pretty damning stuff there. I can see spokespeople getting something wrong, but multiple points that are seen as highly contentious in SEO land being explicitly denied while the docs say the opposite? Not much wiggle room there.
> Besides, what's so bad about access to "disinformation"?
Disinformation doesn't just try to present an alternative view, it also tries to drown out other viewpoints. I don't think it's possible to accurately and neutrally present search results when bad actors try to subvert rankings.
I think propaganda can be interesting; I don't think it should be banned. But penalizing it seems fair to me. I don't want other searches to get flooded with clickbait.
The amount of text is seldom a metric for whether a text is good or bad; some stuff needs explaining.
Especially when dealing with matters like this, where others do not substantiate their positions.
> This portion is rarely discussed. The algorithm that serves content, should be the algorithm that serves content. If kid who likes STEM in America searches STEM content, kid in America should get STEM content. Same in China. Kid likes STEM content, kids get STEM content.
Really?
Then this investigation would have to be wrong, but they are a reputable source and I trust them, especially since it is not the only report or the only source.
> "Tyranny, you say? How can you tyrannize someone who cannot feel pain?" Chairman Sheng-ji Yang, 'Essays on Mind and Matter'
That is THE MOST inhumane and humanity-denying thing I have read today.
It does not matter the color of your skin or the cultural background of your upbringing: we all bleed when stabbed, and we all feel pain and fear. If that sentiment is still alive and widespread in the governing ranks of that country, we should all be very, very careful and worried.
> (example based on physical addiction) If I was a smoker, and Philip Morris completely changed its marketing near me, and sold completely different cigarettes in my local area that were wildly different than the market standard, just because there was a local smoker inhabiting the area, I'd be furious. Especially if they were "dumbed down" versions of what actual smokers got.
A) You would have to notice to be upset.
B) That is not how services and the internet work. We could sit side by side and use the same app and visit the same website and could, depending on connection or identifying markers, still see totally different content or sentiments being expressed. Considering that we are on HN, I fear you know this and played that card deliberately nonetheless.
> If a website serves me wildly different content, even if we search for the same terms, have the same interests, then that's a load of BS. And if I was in China, I'd also be vaguely annoyed at the nanny state behavior.
Search for Tiananmen Square from inside the GFW and from outside, then also add the word Massacre to it, and observe.
We also see a less politically colored version of this in general, dubbed the Search Engine Bubble.
Also, you are free to be annoyed, but if you speak up you might also be 'free' to receive a free political 'reeducation' or sanctions.
They used Amazon’s Mechanical Turk to categorize the “slant” of various tweets selected by a computer program. The tweets were labeled as slanted “D” (Democrat), “R” (Republican), or “N” (neutral).
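For context, the aggregation step in this kind of crowd-labeling setup usually amounts to a majority vote over worker tags, roughly like this sketch (hypothetical data and names, not the paper's actual pipeline):

    from collections import Counter

    def majority_label(worker_labels):
        """Majority vote over MTurk tags; ties fall back to neutral."""
        counts = Counter(worker_labels)
        label, top = counts.most_common(1)[0]
        if sum(1 for c in counts.values() if c == top) > 1:
            return "N"  # no clear majority -> call it neutral
        return label

    print(majority_label(["D", "D", "N"]))  # -> "D"
    print(majority_label(["D", "R", "N"]))  # -> "N" (three-way tie)

A scheme like that is only as good as the individual tags, which is the problem: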
Looking at the tweets[1] (table S2 on page 6; fair warning: PDF), I think the use of Mechanical Turk really undermines the research they’re trying to do. A fair number of these tags are just plain wrong.
For example, how are these “neutral”?
> Most of the money that Bernie Sanders raised immediately after announcing his presidential bid was donated in the Venezuelan currency, leading his campaign to dramatically overstate the size of the contribution.
> A SWAT team was sent to arrest a man after Facebook reported him for holding extremely conservative views.
> It's lots of outrage & negativity, and I imagine it's because that's the kind of article people are actually reading & sharing, so that's what feeds the recommendation algorithms
I would think an ML model could fairly accurately judge the degree to which articles are written in an intentionally inflammatory way, and I would like to see tech companies start making a sincere effort to let users set an adjustable filter on this "inflammatory-ranking" dimension of the content in their feed.
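As a rough sketch of what that could look like (the scorer here is a toy stand-in for a trained classifier, and every name is made up):

    from dataclasses import dataclass

    @dataclass
    class Article:
        title: str

    RAGE_BAIT_WORDS = {"outrage", "slams", "destroys", "shocking"}

    def inflammatory_score(article):
        """Toy scorer: fraction of title words that are rage-bait markers.
        A real system would use a trained text classifier here."""
        words = article.title.lower().split()
        return sum(w in RAGE_BAIT_WORDS for w in words) / max(len(words), 1)

    def filter_feed(feed, user_threshold):
        """Drop anything scoring above the user's chosen threshold."""
        return [a for a in feed if inflammatory_score(a) <= user_threshold]

    feed = [Article("Senator slams shocking policy"),
            Article("City council approves new budget")]
    print([a.title for a in filter_feed(feed, user_threshold=0.2)])

The key part is that the threshold is a user-facing knob, not a platform-wide decision.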
> Prioritizing current news might be annoying, but it can't be a filter bubble
Never said a thing about filter bubbles, but in a way, it feels like a self-reinforcing loop:
0 An article gets clicked often
1 Google ranks it higher in the results based on those higher click counts
2 It gets clicked even more often due to being ranked higher in the results
3 goto 0
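A toy simulation of that loop (my own illustration, not how Google actually ranks) shows how fast a small initial edge snowballs:

    import random

    random.seed(0)
    # Two nearly identical articles; clicks feed rank, rank feeds clicks.
    clicks = {"article_a": 10, "article_b": 9}

    for _ in range(1000):
        # Click probability proportional to past clicks, standing in
        # for "ranked higher because it was clicked more".
        pick = random.choices(list(clicks), weights=list(clicks.values()))[0]
        clicks[pick] += 1

    print(clicks)  # the early leader usually ends up far ahead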
Behavior like that might be fine when one is looking for something obscure and technical, to surface more relevant results.
But when it's applied to news articles, it creates the impression of a bias/funnel, as the top results will regularly consist of the same, slightly altered, headlines and conclusions. Which in part is probably the result of a lot of news outlets just copy-pasting AP releases.
Note: I'm not saying this is done on purpose; it might very well just be a manifestation of the increased use of AI/ML, where the end results often can't be properly explained or reasoned about, as the ML has become a sort of black box optimizing towards a given goal, like giving results that are more likely to be clicked.