Another good example of resisting the tyranny of metrics is Meetup trying to avoid a runaway feedback loop in its recommendation system, as described by @estola: https://www.youtube.com/watch?v=MqoRzNhrTnQ
Using the term “AI” was also a big red flag that this person doesn’t know how the algorithm works. It’s not some Skynet-style sentient thing trying to get us to watch more videos... it’s math and training data.
Does anyone else find these kinds of Twitter threads incredibly hard to follow?
It's very difficult to contextualize and adapt to reading these short incremental bursts of text or 'Twitter threads'. It always feels like important context and exposition is missing. I get the gist, that YouTube seems to have promoted flat earth videos disproportionately, but the 'whistleblower' aspect is not immediately apparent.
In fact, the thread on Twitter itself is even easier to read than this ThreaderApp link that some people demand.
The incessant bloviating from people who refuse to learn how to use Twitter is far more annoying tbh, it's starting to sound like old people complaining about the music being too loud, and it always derails discussion.
I know how to read Twitter and yeah, this one is really hard to read. It's got all these numbers with slashes behind them scattered throughout. It's got a list of numbers with slashes in the middle. It's got random links. It's got the occasional sentence without punctuation
It's just kind of a mess.
I was a heavy user of Twitter for multiple years. I've pulled back on it lately and it's really amazing how much I don't miss the weird, telegraphic kind of writing it tends to encourage. It really bugs me that one of our major means of communication forces you to filter everything through tiny text boxes that only recently became big enough for a whole sentence, and strongly discourages taking time to actually consider the flow of an idea through multiple paragraphs.
Haha, I looked at Twitter for 5 minutes after it launched, laughed really hard, and closed the page. The thought process was something like "Oh, they've maximized unusability! I should now go and learn how to have a normal conversation... I think not."
Then my life flashed in front of my eyes (for lack of better words) and I recalled a million lengthy conversations that pretty much made me who I am.
There was a funny video with a professor and a Twitter "expert" where the professor argued it was bad. The Twitter fanboy kept interrupting him halfway through his first sentence until he got angry and asked if he could say something now. The Twitter guy then said: "But I already know what you were going to say." I laughed so hard. The conditioning had clipped his mind into 10-second attention bursts, and then he had to talk to himself out loud again. Nothing of interest was said in the interview. The Twitter guy thanked him and said it was a wonderful conversation. The professor frowned silently and looked at him from the corner of his eye. It was the best "what a fucking moron" face I have ever seen.
Brilliant writers like Corey Robin have no problem whatsoever using the platform effectively, often screenshotting longer thoughts. Sanders and AOC are also good examples of puncturing, efficient communication.
the big losers are PR-types, and blowhards like Sam Harris, who are robbed of their smarmy techniques of soothing and ensnaring the reader with a lot of fluff and verbiage and rhetorical hedging and weasel wording.
when you're trying to express a thought clearly, or link to an essay, the platform works really well.
it works less well when your entire thesis is that nobody knows what they're doing, that people shouldn't take strong positions on topics, and that little people should sit down and listen to their superiors. cause on twitter the replies exist on the same level and status as famous-people-content, and so lots of emperors get exposed pretty ruthlessly
> the 'whistleblower' aspect is not immediately apparent
The second sentence is "I worked on the AI that promoted them by the billions." and he then goes on to discuss the internal workings of the algorithm. That feels like the actions of a whistleblower to me.
The numbers in the middle of the text and the un-expanded short URLs are annoying, but I'm struggling to see what else about this is difficult to follow. I actually think Twitter itself is better in this regard than this threading service, which tries to hide complexity but just ends up causing confusion.
IMO this is a bug to users of Twitter, but a feature to Twitter the for-profit company. It encourages fuzzy communication, and usually the least charitable interpretations.
Basically it's tailored to push your emotional buttons.
>(If it decreases between 1B and 10B views on such content, and if we assume one person falling for it each 100,000 views, it will prevent 10,000 to 100,000 "falls") 14/
What does this sentence even mean? It's very hard to parse to the point of being incoherent.
The fact that people have gone from writing long-form essays to "twitter threads" to argue something is tragic.
It means that, given the number of views this content gets and how likely a viewer is to fall for it, the algorithm change will keep a significant number of people from being tricked into believing something untrue.
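If it helps, the arithmetic behind the quoted sentence is just division over the thread's own assumed numbers; a rough back-of-the-envelope sketch, not anything from YouTube:

    # Rough arithmetic behind the quoted "10,000 to 100,000 falls" figure,
    # using the assumptions stated in the thread.
    views_prevented_low = 1_000_000_000     # 1B fewer views of such content
    views_prevented_high = 10_000_000_000   # 10B fewer views
    falls_per_view = 1 / 100_000            # one person "falling for it" per 100,000 views

    print(views_prevented_low * falls_per_view)   # 10000.0  -> 10,000 "falls" prevented
    print(views_prevented_high * falls_per_view)  # 100000.0 -> 100,000 "falls" prevented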
Can confirm, I've had flat earth stuff recommended to me after watching normal astronomy/rocketry videos (Vsauce, Scott Manley, etc), notwithstanding the fact I am also subscribed to several skeptic channels. The AI clearly wasn't able to distinguish context beyond a few keywords.
This just adds to the general uselessness of the recommendation feature - I've seen that video five times, it's good but could you offer me something else?
Honestly, "seen that video five times" sounds like a good heuristic for "this guy might want to watch that video". "I've already seen that video once" would not be.
That being said, yeah the recommendation feature is just terrible
Unfortunately, out of the thousands of videos I've seen and the millions I don't want to see, every once in a while YouTube chooses one and really promotes the fuck out of it.
The recommendation engine is awful for me. It doesn't make me want to spend time on YouTube. It's amazing that at the same time it seems to learn how to get people addicted.
I personally do this and have been occasionally annoyed that I can't find a particular flat earth video because all the results for it are about people debunking it, even if I get super specific with the search terms.
It could be the flat earther deleted the video. I’ve added flat earth videos to my Watch Later list and then when I go to watch them, YouTube says the video was either made private or deleted.
Can confirm too. I'm watching lots of videos about a guy challenging himself to drink lots of beers as quickly as he can because he is hilarious. Guess what political videos The Algorithm thinks I'm interested in. :)
I'm discovering amazing content every day thanks to YouTube recommendations, mostly in the diy and construction / technology categories. Haven't seen a single conspiracy video yet.
Welp, I am jelly. My recommendations get so repetitive that I had to use Reddit as a recommendation engine. r/mealtimevideos worked for a while, but soon YouTube started to recommend the same set of videos on its top page and I haven't found an alternative yet.
Another gripe I have is that for creators who run multiple channels (and don't go out of their way to cross-promote), it is very easy to not realise this until you manually check their uploads page and realise you missed out on a lot of their content, all while the recommendation engine pushes the same couple of videos you have already watched over and over again.
The only occasion things clicked for me was when I was researching Pitcairn Island. For a few days there was a drip feed of obscure but highly relevant videos that keyword search failed to pick up. One day the stream stopped suddenly and has never returned since.
I’m becoming more and more frustrated with YouTube, because I have seen it take my stepson and my son and turn them into bedridden sloths. Ok, that was hyperbole, but there is a whole new level of friction to get them to do anything around the house or even get out of the car when we arrive somewhere. Of course as their parent I impose boundaries on YouTube time. However, I don’t think that YouTube has been a great influence on either of them. It’s a hard problem.
my cousin's kids will watch youtube videos on their phone/tablet while playing games on another phone/tablet or on another monitor screen while playing fortnite/pc games
youtube is dangerous; just like netflix and facebook, they're designed to get people to stay on for as long as possible and obviously children are very susceptible to the design
I think this is an effect which has emerged out of our overall culture and access to media -- we've trained our brains to need these constant dopamine hits and our attention spans have suffered.
I too have found myself watching a tech talk on youtube, and if the dopamine hits aren't coming fast enough, I'll reflexively open up a web browser and start surfing, or vice-versa. I've become aware, and try to stop it, but it is an urge for sure. I didn't do this a few years ago.
TV has stuff like Broadcast Standards & Practices, which functions to keep its content within a certain envelope.
YouTube does not. YouTube just cares about what gets people to watch and watch and watch. YouTube will happily feed you endless rants about white ethnostates if that's what meets its criteria for "creating engagement".
'Socrates famously warned against writing because it would “create forgetfulness in the learners’ souls, because they will not use their memories.” He also advised that children can’t distinguish fantasy from reality, so parents should only allow them to hear wholesome allegories and not “improper” tales, lest their development go astray. '
'The French statesman Malesherbes railed against the fashion for getting news from the printed page, arguing that it socially isolated readers and detracted from the spiritually uplifting group practice of getting news from the pulpit. A hundred years later, as literacy became essential and schools were widely introduced, the curmudgeons turned against education for being unnatural and a risk to mental health. An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.” Meanwhile, excessive study was considered a leading cause of madness by the medical community.'
' In 1936, the music magazine the Gramophone reported that children had “developed the habit of dividing attention between the humdrum preparation of their school assignments and the compelling excitement of the loudspeaker” and described how the radio programs were disturbing the balance of their excitable minds. '
I have a few issues with YouTube that I don’t have with other things computer/screen time related.
1) I can’t police what they are watching easily. I’m not saying I want to have them be completely sheltered, but since any knucklehead out there can make content, they usually do...
2) I don’t have the same problem with my son spending 2 hours playing Super Mario Maker (a favorite of his) or, to a lesser extent, Fortnite or other games
What clouds this somewhat is that he spends about half the time on YouTube watching videos about his favorite games, where he learns more about them. There is just an addictive quality that I don’t like about YouTube.
Get them some old power tools from a resale shop, a small sledgehammer and teach them to "Focus you fack!" perhaps? That and some Big Clive could lead to a fair amount of active learning with things that aren't too expensive if you can get them interested in learning what's actually happening to the point of being able to explain it.
Edit: power tools to be disassembled, not so much used
I support YouTube denouncing flat earthers because the science is clear to me: they're wrong. I think this will get slippery if they make similar tweaks for politics or social issues.
This raises the question: is it really YouTube's job to stop the crazy? Where is the responsibility of the viewer?
Why does the platform facilitate the branding of ignorance?
Blaming the viewer is ridiculous — repetition of propaganda is dangerous and eventually changes people’s minds. It’s unfortunate that an amazing educational and entertainment resource is ruined by dumb policy — I don’t feel it’s appropriate for my son to use YouTube independently, as it always leads you to a rabbit hole of crazy through the recommendation engine. Always, because ignorance is engaging.
The problem is not about Youtube stopping or not stopping the crazy. The problem is that: "flat earth videos were promoted ~10x more than round earth ones".
They actually promoted the crazy quite a lot more than they promoted the real one. Keep the crazy videos on but jeez, stop promoting them above everything else.
I think that if people want to watch flat earth videos, then the right thing to do is connect them to flat earth videos. I for one think flat earth videos are hilarious.
Yeah, this is part of the problem. Finding them "funny" and feeding the promotion machine.
This is how we get teenagers having to vaccinate themselves because their parents were stupid enough to fall for the antivax BS.
Because however inconsequential it is for everyday life (in general) whether man stepped on the moon or the Earth is a cylinder, it teaches people that "it's ok to ignore those nerds that say otherwise; whatever you believe, that's ok"
Everyone in this thread is using "flat earth" as a stand-in for the actually dangerous stuff eating at the foundations of democracy. Unfortunately, saying that some of those 4chan conspiracies are wrong, stupid, and potentially dangerous is deemed to be political and therefore not appreciated on HN.
There are probably a lot more videos about the earth being flat than about it being round. People who believe the earth is round probably don't make a lot of videos about it.
Basically the YouTube AI is severely overfitting its recommendations to a certain minority of ultra-users that then relay that niche-cultural-fit on to the content creators. If anything, YouTube's algorithm should be less tweaked - less accurate in its expectation of how you will enjoy a video, or other similar metrics (expected screen time predictions, expected ad revenue predictions, etc) - rather than more tweaked.
Will this happen? Well, will YouTube prefer to make less money, just because you want your son or your coworker to be less 'derailed'?
> severely overfitting its recommendations to a certain minority of ultra-users that then relay that niche-cultural-fit on to the content creators
Well, it's not a minority by far. If 10% of the US population believes in "conspiracy theory A, B, or C," that already makes them one of the biggest coherent, idea-based social groups, and it's only natural that the statistics locked onto the signal from such a big group.
If picking up major "psychocultural" groups is what it was designed for, it does its job excellently; and for the very same reason, it makes the spread of any material catering to such groups very easy.
That also explains why its targeting seemingly overlaps with the demographics targeted by foreign propaganda - propaganda designed for a certain effect is always aimed at big, well-definable demographic groups. Efficient propaganda is a message that predictably causes the same desired effect on as many people as possible.
I actually don't think that's the story here. The story is that irrespective of content, the YT algorithm optimises for maximum viewing time, and it takes the behaviour of other people into account when making recommendations to you, to an extent that seems to cause a snowball effect once the recommendations start.
What if YouTube... didn't do that? It's already enormously successful, and could just show you other shows by the same user in the post-roll. Or videos in the same category.
YT pushed changes to optimise view numbers at absolutely any cost, because they could. But that doesn't mean they have to.
I think I’m more shocked by how willing people are to take fringe ideas seriously on the internet. I had a clear moment when new to the internet where I realized it should be taken with a grain of salt. But apparently many people never have that realization or they seemingly choose what they will believe.
I've seen it in the family. There seems to be a mentality among people from an older generation to think that if it's published somewhere, then it must be true or close to true.
All their lives, they were exposed to newspapers and magazines. Then their habits changed and they started getting news from the newspaper on the internet. But it was always true or close to true; nothing was entirely made up.
And then one day, they started getting news from friends on Whatsapp, links from Youtube shared with them. There was never any change, it's still the internet giving them news. It never occurred to them that some of what they were watching was entirely made up.
I remember my wife's mom telling us that Steve Jobs launched the iPad on Dragons' Den. We had no idea what she was talking about, and then she showed us a doctored video of Steve Jobs at Dragons' Den, showing the iPad to everyone. She thought it was true. And we told her it was made up. And she couldn't understand why someone would spend time creating something false on the internet. She just couldn't understand it.
The really weird thing about this is that the same people had been talking "the internet" down a few years ago. Now the same people believe all kinds of crap from "the internet".
I wonder if it may be our fault somehow by telling them that not everything on the internet is crap.
> old people seem to believe things just because they are published
Yeah, the dumb ones do. That’s not because of the generation they grew up in, it’s because there are dumb people in every generation. There are just as many people in the newest generations who believe anything that comes from the mouth of an authority figure.
Terry Pratchett's books satirized both the "I don't trust the government but do trust the random dude in the bar" and the "they couldn't write it down if it wasn't true" trope for decades. He wasn't the first, I'm sure, just the one that most immediately comes to mind.
Neither of these is new. The scale and velocity is different, in a meaningful way, but "the internet" hasn't changed fundamental human behaviors there.
Government positions have to be filled by people who are willing to and can get clearance. This decidedly diminishes the pool of people willing and able to apply.
Yikes, while I understand there is some harm done by the AI leading people down "rabbit holes", I'd still rather have the AI stay independent than have humans manually biasing the results. The results of the AI reveal the worst instincts of humans, but a manually tuned AI can be turned into a weapon for propaganda. That would be a worse result in every way. A naturally conspiracy-theorist AI can be escaped and controlled on a per-person level by simply being a logically thinking person and controlling your feed and likes, while a propaganda machine that serves a political purpose cannot. That really sucks.
The danger is when the state or a political party or some other organization with the state's levers of power controls the algorithm. It won't be flat earth videos that get deranked; it will be anything that contradicts selling us trash and invading other countries.
"That warship wasn't actually bombed by the enemy", "they probably didn't use those chemical weapons", "why is our state department funding the same terrorists we were just fighting yesterday", and "actually those moderate rebels aren't so moderate" will become the new flat earth.
That isn't a matter of manual tuning, though. It's a matter of putting engagement metrics inside the cost function, such that the viewership is part of the training loop.
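To make that concrete, here's a toy sketch of what "engagement metrics inside the cost function" amounts to; the names and the model object are made up for illustration, this is not YouTube's actual system:

    # Toy illustration only: a recommender ranked purely by predicted engagement.
    # `model.predict_watch_minutes` is a hypothetical stand-in for whatever
    # watch-time estimator the system was trained on.

    def engagement_score(user, video, model):
        # The model was trained to predict watch time, so "good" here simply
        # means "keeps this user watching longer".
        return model.predict_watch_minutes(user, video)

    def recommend(user, candidates, model, k=10):
        # Rank candidates by expected watch time; whatever maximizes time on
        # site rises to the top, regardless of what the content actually is.
        ranked = sorted(candidates, key=lambda v: engagement_score(user, v, model), reverse=True)
        return ranked[:k]

Once creators notice which videos get surfaced, they make more of the same, and the resulting viewership feeds back into the next round of training.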
... Pardon me, but are you under the impression that most neural networks are purely unsupervised learning with no opportunities for operators to bias their results?
If so, this is a misapprehension. Even unsupervised learning systems are subject to data-based biasing. The selection of input, the decisions on how to partition the dataset, the decisions made on how to judge and measure overfitting, and potentially hundreds of other small hyperparameter decisions made by operators can substantially change the output of the system. In the case of the top post, it's very clear that "does this increase time-on-site" was an operator-chosen scoring function for the results of the system in question.
Frighteningly, these decisions are often unexamined and unrecorded. This has led to a series of impressive-sounding systems that can neither be reproduced nor truly audited. [0]
And that's just the approximated function. The tensors it outputs are then further processed in some way, as they're often of no value in isolation. The decisions on how to use those outputs also have a profound impact on how we view the results of learning systems.
If anything, we're not cautious enough as a society about using this technology. We see all sorts of crazy in the news that's wrapped up in the over-hype. We see law enforcement failing to use even basic tools [1] because they're so excited.
You're right of course, though that's still quite a bit of distance from the deliberate political interference explained in the OP. I find business goals like increasing engagement to be morally neutral vs political goals like shaping people's views.
You don't think an untuned AI could be used as the same kind of weapon? Figure out how to fake the inputs it wants, spin up thousands of virtual machines watching your propaganda, pay people to "watch" it, whatever blackhat/whitehat SEO techniques you feel like putting time and money into, and get it stuffed into everyone's recommendations.
The engine wants people to spend as much time as possible on YouTube.
Netflix wants the same, even though it’s not even ad based!
Facebook wants the same.
Ad infinitum.
This is just addiction peddling. Nothing more. I think we have no idea how much damage this is doing to us. It’s as if someone invented cocaine for the first time and we have no social norms or legal framework to confront it.
One thing you can never get back is time. It runs one way and as of this writing, we all have a finite amount of it.
Prior to YouTube, the internet and recommendation engines I used to spend hours in front of the TV. I could do this for days on end. I don't think I was alone. Now I spend the same amount of time in front of my computer only the content is 1,000X more informative and valuable (seriously, you don't want to know how many reruns of Facts of Life and Charles In Charge I've seen).
Recommendations are simply a service that makes my life easier and potentially gives me something cool to watch (they don't usually work that well, but that's not the argument here).
I think it's an interesting comment. I wonder if the average person spends less time watching reruns compared to the 70s/80s/90s. There certainly was a cultural push to not be a "couch potato", and I wonder if we'll see that again with internet entertainment.
Interesting. I used to have Netflix and discover some interesting Youtube content from related videos, but now Netflix only recommends me Marvel films I don't want to watch and removes all interesting content and Youtube has terrible crap in "recommended for you" videos. On plus side, learning to play guitar has been at least as satisfying.
Netflix isn't ad based, but they still want to encourage time on it. Every new show you watch is a potential recommendation in a conversation with someone else. Eventually parents or roommates have too many devices on one account and people break down and purchase their own. It increases revenue, just in a different way.
Native advertising, yep, explains also why browsing around the Netflix interface auto-plays everything and you cannot turn it off. That is Netflix’s internal ad space and it’s very valuable.
That’s actually the reason I stopped using Netflix.
I’m the kind of person who avoids trailers and on-the-next-episode-of spoilers, and Netflix assaults you with both of these things. The only way to turn it off is to leave.
Replace Ghostery with uBlock Origin + uMatrix, which are developed by the same person. Don't recommend Ghostery to others. Ghostery sells user data and shows ads. [0]
> Remove all notifications
Including email? There are plenty of legitimate uses for notifications.
Otherwise, solid list and I especially like your last bullet point, but don't discount the amount of information that can be gained from a good video or infographic.
I find Ghostery (in combination with uBO and uM) useful for identifying the purpose of many of the things blocked so I can selectively unblock where appropriate.
You missed the part where Ghostery is anti-user. They even went open source last March in an attempt to turn around the growing bad faith but the motives were very transparent.
Yes email. At least at work, I don't have any notifications on my desktop. I check email a few times a day and otherwise work on what I have planned for that day.
The people at work who have chat and email notifications turned on just spend their days reacting to whatever comes in.
I find that if something is urgent enough, somebody will call me or come to my office to get my attention.
I operate several workstations and some have notifications while some don't. For various sites and email addresses. Some notifications are sparse and important, some are frequent and unimportant. There's no reason not to let the sparse and important ones come through. It's efficient.
Netflix’s optimization function is different though - they want you to keep paying for the subscription. So if you spend a smaller amount of time on the platform but it’s rewarding, they win. Ad supported platforms want you addicted.
Not so sure. Netflix wants you to think at the end of the month that you got a good value for your subscription cost. So people often mentally calculate how much of their free time is consumed watching Netflix. A lot of people probably compare the Netflix cost and hours consumed to what those dollars would get on iTunes or at the movie theater. It’s why Netflix often goes for 10 hour programs instead of 1-2 hour movies.
It’s why HBO is not even a serious competitor to Netflix, even though they tend to focus more on quality instead of quantity.
It’s all about taking your time. Netflix even says their main competitor is something like Fortnite, which again, consumes a lot of your time.
Time spent matters for Netflix, sure, but the decision at the end of the month is still also a function of whether the user thought it was time well spent. If I spent all day on Netflix but thought the content was trash I would unsubscribe. YouTube algo doesn’t care if the time on site makes you crazy, and the lack of a conscious decision to sign up for the service tips the scales much, much more for addiction peddling.
I don’t think that design goal in and of itself is toxic enough to warrant the connotation of the label “addiction peddling”. In YouTube’s case, it sounds like their recommendation engine is designed to simply recommend videos people might like. YouTube obviously wants people on the site, but no user is being fooled here; they want to watch cool videos, YouTube recommends cool videos to them. Everyone here is fully consenting and benefiting in this relationship. It’s the outliers who seem to be the problem, and considering YouTube reaches billions of people without incident I think our threshold for “this is a problem” should also change.
I recently found myself curious about Antarctica. Put that word in the YouTube search to see what would come up. The first few screens of suggestions contained nothing but obvious trolling and conspiracy theory videos.
YouTube most definitely does not optimize for videos I “might like”.
Not knowing anything about the guts of youtube, I'd bet that it actually does optimize for videos it thinks you might like. Unfortunately, it doesn't know anything about you so it assumes you're a random denizen of the internet. Even more unfortunately, far more of the internet is 4chan than HN. 95% of the subreddits I've seen would answer a simple query about antarctica with ten pages of obvious trolling and "ironic" conspiracy-theory videos. And that's as good as it'll ever be because further improvement would require detailed personalization that they can't implement because the bad PR would destroy them.
I think YT is great for 'hobby rabbit holes' as I call them... Starting at a certain topic and exploring all the weird and wonderful content about it. My knowledge has certainly expanded all the more from YT than without... Much better than mind numbing TV on the couch.
In the article it clearly states: 'We designed YT’s AI to increase the time people spend online, because it leads to more ads'. They designed it to increase the time spent. It does not 'sound like' they designed it to 'recommend videos people might like', it clearly states 'time spent'. There is no room for speculation, unless you question the correctness of the author's statement.
You can design for more than one thing, especially when they're complementary. Why is there no room to speculate that they also designed for what people want? Why do I have to question the author's correctness to do so? Why mix all these words like this and tell other people what they can or can't speculate about?
Because the commenter did not add to the author's statement, but offered a different view on how YT's AI was designed. This particular comment section comments on the article, it is not a general discussion on YT's algorithm. Therefore if the author of the article, who is also one of the authors of the AI in question, writes a very clear statement on how the AI was designed, and the commenter then writes that it sounds like the AI is designed differently, then how can that mean anything else but questioning the correctness of the author's statement.
Yes, and that's one side of the relationship. The other side is that YouTube provides a free platform for anyone to upload and watch videos, and it even suggests videos it thinks you might like. Everyone here benefits. I fail to see anything as nefarious as the term "addiction peddlers" and hard drug metaphors would imply. The problems described in the text sound like they're mostly caused by mental illness and/or lack of personal restraint, both of which are and should be outside of the scope of YouTube to solve (unless we want to start accepting third parties robbing individuals of agency "for their own good").
'We designed YT’s AI to increase the time people spend online, because it leads to more ads'
Would you treat your friends like that? Say, a friend asks you to recommend them a good video on software development. Are you going to optimize your recommendation so as to make them take the longest time possible to achieve their goal?
>Are you going to optimize your recommendation so as to make them take the longest time possible to achieve their goal?
No, I'd link them to relevant videos that helped me in the past, and then maybe link them some more later on to help them further increase their skills after learning the basics. These hypothetical videos were interesting and helpful to me, and I figure that since I found them interesting maybe my friend will too. That's what it sounds like YouTube is doing, only they get money.
I agree. A good comparison is how Coca-Cola used to have actual cocaine in it to keep people coming back for more. I think in the future we will look back on these unregulated times of loot boxes and Skinner boxes with the same sort of incredulity.
Not exactly the same, though I get where you were going with the example.
“Pemberton’s French Wine Coca Nerve Tonic,” the precursor to Coca-Cola, was a patent medicine purported to cure morphine addiction (which Pemberton suffered from after the U.S. Civil War).
The addictive effects of the coca leaf weren’t widely accepted at the time due to the limited concentration of active cocaine in the “wines” it was typically included with. That it had addictive qualities was a side effect, not a conscious decision to aid with repeat business.
Coke was pretty good about getting it out of their product when people realized it had bad effects, before any serious public discontent over cocaine became widespread.
I think humans spent the 20th Century learning how to live with the availability of unlimited sugar and fat.
I could easily consume 10,000 empty calories per day. But instead I’m eating a bag of fresh organic vegetables as I write this because I learned to resist my biological urges to gorge myself on foods designed to trigger them like chips, cookies and soda.
I think people in the 21st Century need to learn how live with the availability of unlimited information. To consume more of the mental equivalent of organic vegetables and less of the mental equivalent of soda.
Children don't usually magically learn things when not taught, especially if those things go against their biological programming.
Adults are just children who grew up. There's still no reason to believe they will just "learn" on their own.
Instead of favorably comparing yourself, with your bag of organic vegetables (my imagination is running wild picturing what that must look like) and your superior intelligence which led to self-learning how to escape sugar and fat, to the rest of the American populace who clearly need to learn, devote some of your time and expertise to actually making an attempt to educate and help those who don't know better or who are trapped in addiction.
If we don't get enough real information in the mix, then it will only be noise.
That's why so many professionals are so adamant about sharing their theories on nutrition.
It's just the unfortunate reality that there are also countless special interest groups with particular motives when it comes to spreading disinformation.
I don't think it takes a nutritional expert to assert that too much fat and sugar and not enough exercise is a health issue. Or that it takes an expert to recommend a primarily vegetable-based diet.
I do think it takes a certain level of expertise to start making claims about things like Keto without just repeating something you heard elsewhere, but a diet with lots of vegetables is undeniably healthy.
Don't be afraid to share the knowledge you feel confident in. Just hold back on the stuff in which you aren't confident. :)
I don't think I know anyone who hasn't heard this conventional wisdom many times before. Repeating common beliefs without adding any value (such as additional evidence) is an example of what I would consider noise.
There is a clear health benefit to eating vegetables over chips and not overeating.
The effects of binging television are less clear, especially when it’s accepted in advertising that binging television is OK. Look at all the Comcast ads that celebrate binge watching.
At least with food advertising it’s been tamped down a bit. Lays potato chips used to have “bet you can’t eat just one” like it was some sort of a drug pusher.
> Look at all the Comcast ads that celebrate binge watching.
I think it's more insidious than that - at least in the ads I've noticed.
There's a lot of normalizing of binge watching. Of making it a "given". When I say "normalizing" I mean things like "Since you're going to be gorging on your favorite shows, use our phone/our service/our devices to watch them on, because they're the best".
It's not even a question in many ads; the assumption is that you're going to be binging already. Celebrating would be closer to beer commercials ("it's been a hard week, now it's netflix time!").
I totally agree. I see billboards frequently or messaging on the side of businesses “now you can binge better.”
When did "binge" ever become appropriate to use as something to be encouraged? Feels insane. I mean, even binge eating vegetables probably shows a problem; it should just be part of a balanced diet.
I’m hooked on HN even though the conversations here don’t give me any pleasure and I’m wasting valuable time.
I’m clearly addicted, but does HN have any sort of algorithm designed to maximize my time on it?
I’m sure these companies do everything in their power to make their services more addictive, however we have to consider that this is just looking for a scapegoat.
I believe you raise a fair point. That is, plenty of humans have plenty of habits. We are, as they say, creatures of habit. The difference now is that the online habit is easy to identify, as well as self-perpetuating when we interact with others.
Even email can be habitual. Stop sending them and - sans spam and newsletters - you stop getting emails.
Yeah, people were addicted to cable television too, and some spent an inordinate number of hours in front of the boob tube. Is YouTube fundamentally different? Because it's able to serve more specific content to users? Maybe. Maybe some are prone to addiction regardless.
The big threat is that if ML/AI works out, they’ll become far more efficient at it than ever before. That is, we’re following a trend/path that as a society we probably shouldn’t
If ML / AI works out, The Matrix will be soon to follow. It's going to be a lot easier to keep the masses happy (and passive) if they're immersed in VR.
I think the difference is the primary device of content consumption. There were TV addicts who never left the house. Now there are internet addicts and they walk - and drive - and work among us.
Go look at something such as a phpbb site where topic ordering is driven solely by 'bumps' - the most recent topic to have a comment gets bumped to the top of the page listing. You'll find they rarely have any worse ordering of content than voting systems provide. In my opinion it tends to be simply superior. It also avoids the huge problem in point based systems of requiring some 'parallel universe' /new stream that only a tiny and biased minority ever view.
But voting systems drive "engagement." One intuitively surprising discovery is that downvoting users tends to drive them to comment more, frequently with lower quality posts following. I say 'intuitively surprising' because it seems counter intuitive, yet if you've spent any time on point driven message boards, including here, you see the constant stream of little tit-for-tats as a downvoted user (or two) engage in an ever lower quality of 'debate' in some isolated thread. As for upvotes it goes without saying that that little micro-shot of dopamine is what's built, and arguably sustaining, the social platforms of today.
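For what it's worth, the two orderings are easy to put side by side; a minimal sketch with made-up fields, not any real forum's code:

    from datetime import datetime

    # Toy comparison of the two orderings discussed above.
    topics = [
        {"title": "A", "score": 120, "last_comment_at": datetime(2019, 2, 1, 9, 0)},
        {"title": "B", "score": 3,   "last_comment_at": datetime(2019, 2, 1, 11, 30)},
    ]

    # phpBB-style "bump" ordering: most recently commented topic first.
    bump_order = sorted(topics, key=lambda t: t["last_comment_at"], reverse=True)

    # Vote-driven ordering: highest score first (HN/Reddit also factor in age decay).
    vote_order = sorted(topics, key=lambda t: t["score"], reverse=True)

    print([t["title"] for t in bump_order])  # ['B', 'A']
    print([t["title"] for t in vote_order])  # ['A', 'B']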
Bump driven systems suffer from the problem that half the posts are simply people posting the word "bump" in order to drive the parent post higher up on the list.
To its credit, HN actually has systems in place to deliberately discourage tit-for-tat commenting. First, it deliberately has no inbox to notify you of replies to your own comments, so you're much more likely to neglect to discover that someone has replied to your comment. Secondly, it disables the inline "reply" button for comments whose age is below a certain threshold, so that only power users who understand the workaround can actually reply to young comments. Thirdly, it has no support for navigating deep comments into dedicated subthreads like Reddit has, so instead successive comments will become narrower and narrower until eventually becoming literally unreadable (not actually sure if this one is deliberate, it could also be laziness, but the effect is the same).
As for bump-ordering, I lost most of my youth to phpbb-like forums, and I'd say they foster a mindset of even more obsessive interaction because not only do non-nested threads become impossible to follow if you're away for long enough, but keeping your preferred topics on the front page requires frequent monitoring and carefully-timed comments.
Just so you're aware, HN has configurable settings to help force you off the site after a certain amount of time:
"Like email, social news sites can be dangerously addictive. So the latest version of Hacker News has a feature to let you limit your use of the site. There are three new fields in your profile, noprocrast, maxvisit, and minaway. (You can edit your profile by clicking on your username.) Noprocrast is turned off by default. If you turn it on by setting it to "yes," you'll only be allowed to visit the site for maxvisit minutes at a time, with gaps of minaway minutes in between. The defaults are 20 and 180, which would let you view the site for 20 minutes at a time, and then not allow you back in for 3 hours. You can override noprocrast if you want, in which case your visit clock starts over at zero."
>I’m clearly addicted, but does HN have any sort of algorithm designed to maximize my time on it?
That upvotes are visible to the user is a big one. Hacker News (or Reddit for that matter, which employs a similar mechanism) would work just as well for the purpose of conversation if your score was hidden.
But seeing that number next to your name climb up is something that drives a lot of engagement on the platform. I don't think this is intentional on HN's part, but on Reddit for example I have no doubt that the activity generated by the 'karma' mechanism is left in place precisely because it generates so much (meaningless) content.
I think I would engage in a much healthier fashion with this site if that number was hidden (along with comment upvotes, clearly invented by Old Scratch himself). But since it is prominently displayed in the header I assume it is intentional.
In this case it doesn't seem like it was as simple as "time maximization", of course everyone tries to do that.
The problem was that pathological/excessive users were overly skewing the recommendations algorithms. These users tend to watch things that might be unhealthy in various ways, which then tend to get over-promoted, and lead to the creation of more content in that vein. Not a good cycle to encourage.
Netflix may have this problem, but FB, YT and other (relatively) open access platforms definitely suffer from it to varying degrees.
Like the author, I do find it refreshing that at least some steps are being taken to identify and remediate. A lot more can be done.
Interesting article on why this is a problem, some effects that this "attention sapping" has had on the labor force and how it alters the traditional labor-supply curve, and a somewhat fun proposal to "tax media companies for the hours of human attention they consume".
> One view of the status quo is that media companies are aggregating human attention and selling it at a discount–far below minimum wage–to advertisers in a massive arbitrage on human capital.
That sounds like reheated fallacies of the Victorians about the dangers of 'idleness'. The fact of the matter is that human attention isn't theirs to own in the first place, no matter what they think on the matter. Despite the expectations and social pressure, it isn't a continuous 'commodity' to be produced upon demand, and it doesn't even map in a strictly increasing way to productivity.
It brings to mind the greed and stupidity of a lord thinking that anything his subjects do which isn't earning him money is stealing from him.
Agreed. At what point do people own their own decisions? I get these ridiculous YouTube conspiracy recommendations all the time.
Sometimes they're hilariously entertaining. Most of the time they're so off the mark that I don't watch them. If Youtube really wanted to increase my engagement they would stop showing irrelevant ads and stop crippling the free YouTube app. I mostly go to YouTube for movie trailers and videos from Deep learning courses. That's it, the end.
What YouTube appears to want is for me to pay to use it and it gets in my way and makes YouTube inconvenient in every possible way to try and persuade me to throw money at it. This will never happen but hey keep on trying.
Seems like Netflix has a different goal: they want to release just enough stuff for customers not to stop paying, but don't particularly care how much they watch per day. Otherwise they'd be producing good shows. Right now Netflix 100% fails to hook me in. I watch it for exactly 30 minutes a day while doing cardio. If it wasn't for that, they'd have to pay _me_ to watch their drawn out, boring shows. The only reason why I watch Netflix is because Prime Video is even worse.
> Netflix wants the same, even though it’s not even ad based!
If Netflix wants people to spend more time on Netflix, they should probably stop adding forehead-slapping "features" like mandatory autoplaying previews that encourage people to stop spending so much time surfing through the back catalog.
Of course, it's vitally important to Netflix's business model that you don't discover just how limited their catalog is. That's not an issue for YouTube, and it's interesting to consider how these requirements feed into UX engineering at the two companies.
It was mentioned in this thread as well that a person watching a lot of Netflix shows would also recommend Netflix to others, thus generating more subscriptions.
For half of world history we've spent our time reproducing for war or similar endeavours. Not doing harm is still quite an improvement. Not buying stuff while being alive even reduces the war on nature.
In the 70s - as a reaction to the Club of Rome - they had this insane idea to freeze part of humanity until the imminent crisis was over. This is what the internet is - a freezer - it freezes social interaction, it freezes your desires, your plans - it freezes everything.
Becoming defrosted does not by default improve the world. It's ultimately a personal decision.
Cuts both ways, right? For example, we are remodelling our house. It’s nice that YouTube presents me with more DIY videos on how to tile as I research the topic.
The recommendation engine only turns evil when it takes the viewer down a never-ending path into dark, negative, useless material and misinformation.
The difference is that Netflix is not radicalising people. The algorithms used by YouTube do a huge amount of damage to the psychological health of millions of people and in turn countless societies, just to earn some advertisement revenue.
You can't really accuse Netflix of the same.
> This is just addiction peddling.
In light of this problem of the radicalising of vulnerable people, like is happening on YouTube, your use of the word 'just' in this way shows that you are either ignorant of the problem or have a hidden agenda in trying to downplay it. I hope it is the former.
The "radicalisation" problem you refer to is a logical consequence of a system that optimises for maximum engagement. I don't see why the fact that someone else can see this means that they have some sort of hidden agenda.
When using 'just' in this way you down-play the problems. 'Hidden agenda' might have been a bit extreme on my part. What I had in mind was the down-playing of issues to seem smarter than others.
In the context of the Cumex fraud you could say "It is just stealing". When doing this you project that we shouldn't be concerned, it is _just_ stealing, which it isn't.
It becomes counterproductive to solving the issues, because someone who doesn't know about the problems surrounding YouTube's algorithms and reads that it is just "addiction peddling" might conclude that people are overreacting.
My issue here is that, in YouTube's case, this is not 'just' attention peddling. I also don't agree with you that this is 'a logical consequence of a system that optimises for maximum engagement'. Netflix, as we have established, also wants my attention, but that doesn't mean that I go from Rick and Morty to Nazi propaganda. On YouTube you can go from listening to music to getting lectured by White Supremacists without actively seeking this extreme content out.
They have optimised their algorithm to make this transition very gradual in order for you to stay engaged along the way and in effect they have created a mechanic in which people slowly get radicalised.
It is very easy to down-play this by saying this is logical and just attention peddling, but I'd say it is closer to psychological abuse of people in their most private and vulnerable moments. All this _just_ to make money from advertisement. The algorithms might not have started out this way, but there has been enough criticism on what is happening for YouTube to finally do something about it, yet they haven't. This means it is intentional.
I think the core problem here is that you view addiction peddling as less of a problem than slow radicalisation. I don't view things in that way. At best they are similarly bad, but to me it seems like addiction is a much bigger societal problem (with a larger slice of people vulnerable to it) and therefore this problem needs to be highlighted the most.
In this context, putting "just" before it can be seen as a way to remove focus from the detail that someone else is focused on in order to look at the bigger problem. That is what I see here as well.
Your conclusion that it is intentional is too hasty. There is a less evil interpretation of real life, and that is that it's not practical for them to implement this in a way that still yields addicted users while avoiding radicalised users. Their money is more important than a problem that a minority is vocal about; it's an unfortunate consequence, but that doesn't make it the intent.
>The difference is that Netflix is not radicalising people
Couch potato-ising people is terrible for our future too. And dismantles defenses of people who would otherwise laugh off the political indoctrination content in other media.
Education. I have learned a massive amount of history from carefully cultivated youtube videos. The messenger will always be shit. Look at a rack of magazines. The new yorker sits right next to penthouse or national enquirer. It comes down to what you want to know.
Adults are citizens that have freedom to choose. That is what being an adult is about. That's why kids need to be harbored from certain things. They lack the experience and education to choose wisely.
Putting that burden on the state or a website is losing the bigger picture. Any time I hear the word addiction it makes me sad people are saying they can't make an adult choice in their lives and need someone to stop them.
> => Platforms that use AIs often get biased by tiny groups of hyper-active users.
I don't doubt this, but it's unclear to me how much different this is than many other systems. I think there are a _lot_ of systems that have an operating point defined by a small group of hyperactive users, e.g. the vast majority of Twitter / Reddit / probably even Hacker News content is created by a tiny % of users. Even if you move past the content creation phase, curation is often also controlled by a small group of hyperactive users, e.g. the relatively small number of people that switch to the 'New' view on HN or Reddit who collectively decide what gets moved onto the front page where the masses will see it. It seems to me that an AI approach could be more 'fair' or 'democratic' in comparison to many of these existing systems, in that it should consider the patterns of all users in some way, vs just relying on a small self-selecting group to perform the same function.
To wrap back to this YouTube situation, I think the deeper question here is why we seem to have so many people willing to believe anything they see on YouTube? More concerning I guess is the general idea that if we hide the offending content that is a 'solution'. It's really just putting a bandaid on a deeper problem and hoping it goes away.
> There are 2 ways to fix vicious circles like with "flat earth"
1) make people spend more time on round earth videos
2) change the AI
There's a 3rd. It's complex, difficult (to execute and prove), and hardly ever pushed. Put the onus on individuals to be realistic, honest, self aware, and diligent. Sure, it's not easy, not clean, and will likely lose more than it wins in the short term. But ignoring it completely is just...dangerous. Continuing to ask Daddy to change the world for you breeds less-than-desirable attitudes, beliefs, and perceptions. This concerns me more than any Internet video does.
First, a lot of the people watching these videos are high-functioning schizophrenics. Asking them to be more self-aware and diligent is like asking a paraplegic to spend more time walking.
Second, how do you plan to do this? Put a banner on every Youtube page that says, "Don't trust this video"? What we're doing now clearly isn't working, and I don't know what you propose to do that we aren't already doing.
I would say that your suggestion is the way things currently are. Isolating people and condemning them because they can't escape a trap that was designed and built for just this purpose is at best narrow-minded (IMO).
This is interesting and valuable information, but I can't help but be a little bitter at whistleblowers like this. This algorithm has been going for years, and these people only emerge after the media started highlighting it recently. I wish they'd blown the whistle at the time they were actually creating these algorithms.
Maybe the implications weren't clear at the time, or maybe the public wasn't willing to take the time to understand a complicated issue until the media became interested in explaining it to them for other reasons.
Maybe it wasn’t so clear to them at the time they were creating the algorithm. “People are watching a lot of this, they must like it, let’s recommend it.” is quite a logical thought.
The harmful effects may only be becoming apparent now. I for one never thought about this before.
Well the thing is that it isn't even clear now that it counts as something to blow a whistle upon. There is literally nothing illegal about what they were doing despite the harping of every legacy player.
It’d be a nice “feature” to be able to disable recommendations after watching a video. When a video ends, it just ends. If I want to watch something else, I have to explicitly search for it.
I know it’s the exact opposite of a good monetization scheme but it’d be a good step in putting the power back in the hands of the user or parent of the user.
The unintended consequence of creating an addictive popularity algorithm is that people who are predisposed to informational addiction will end up dictating the contents. Or, in English, everything turns to shit.
and what about “conspiracies” that end up true? like that the gulf of tonkin was a false flag? or that iraq doesn’t have wmds? (it was considered a conspiracy at the time)
TL;DR: the Youtube AI overweights heavy viewers (people who have more "engagement" with YT) and thereby ends up naturally promoting the Youtube equivalent of clickbait. YT creators notice, and make their videos more and more like Youtube-bait so that they'll be promoted. A tight feedback loop is created, and hilarity ensues. Pretty much the exact same dynamics as "Elsagate", except optimized to bait adults, not kids.
It's optimized to bait a subset of adults who are heavy users of YT. I have to wonder how good the results are for Youtube as a business, overall - have they ended up optimizing for a small group of users at the expense of catering to the larger majority?
Ultimately, whose fault is it that a user watches a "conspiracy" video? The article seems to consider the user base as mindless people with their mouths open to be spoon-fed content. Do the viewers have no agency?
In that thread, why is the solution to curtail "conspiracy" videos? What defines a conspiracy theory? I mean, we all seem to agree here about vaccines and moon landing, but what's to stop YouTube from labeling a political party as "conspiracy" and filtering them out of existence? Can't the users choose for themselves, or are they too "stupid" to pick what's best for them?
Yes, YouTube is free to filter as they see fit... but if they want a recommendation engine, AND they want to filter out "bad things", it's a very valid question to ask who decides what is bad. I believe it is essential for YouTube to be transparent about what their algorithms are designed to filter out, to prevent this scenario from happening.
Part of the solution that no one seems to be talking about is to caution others against passive consumption of media. YouTube et al want users to plug their ears, close their eyes, tilt their heads back, and consume. YouTube can condition their streams to be as "healthy" as they want, for whatever definition of "healthy" YouTube chooses--but there is no way to healthily allow yourself to be fed a diet of things you don't pick, even if it looks good.
"Forcing" YouTube to do this will simply result in the lame, ineffective warnings you see at casinos. "Remember! Mindlessly consume responsibly! If you need help, uh... throw away your computer I guess!" This needs to be done on a human level, thinkers to parents, parent to children. Not a legislative level.
Anyone know anything about the technical details of the change?
Facebook's end-of-2018 content-filtering report talks about engagement going up the closer content gets to a policy line (any policy: hate speech, nudity, etc.), and that holds even if you shift the policy line.
It says they are going to change the Facebook News Feed to downgrade content that sits near a policy line, applying a penalty curve that is the inverse of the extra engagement such content gets for being close.
I hope the YouTube change is going to be similarly fundamental. Really curious what it is and hope it is a good one.
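A rough illustration of what such a "reverse curve" could look like (my own sketch; Facebook hasn't published the actual formula, and both the function shape and the numbers here are invented):

```python
def adjusted_score(base_engagement: float, proximity_to_line: float) -> float:
    """Down-rank borderline content.

    proximity_to_line: 0.0 = clearly within policy, 1.0 = right at the policy line.
    The penalty grows as content approaches the line, roughly cancelling the
    extra engagement borderline content tends to attract.
    """
    penalty = proximity_to_line ** 2   # grows sharply near the line
    return base_engagement * (1.0 - penalty)

print(adjusted_score(100.0, 0.1))   # mildly edgy: ~99.0
print(adjusted_score(150.0, 0.9))   # borderline: extra engagement mostly wiped out, ~28.5
```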
> explains why the bot became racist in less than 24 hours
Eh, the connection is tenuous. Sure, AI becomes biased by the users who use it the most, but then you really still have to stop and ask yourself: why is it we are talking about conspiracy videos instead of how people spend millions of hours watching cat videos, or memes?
There is a lot to say about the topic, but the answer really boils down to the fact that recommender systems are only good at working with the variables they have access to. I don't know of any AI that has access to "this person murdered someone" as an input. So literally, how should an AI be expected to optimize against that?
Manually tweaking the AI based on things in the real world makes sense; I just wish the author hadn't gone and mentioned Tay, which is not really the same issue at all. Tay is an instance of deliberate manipulation, but YouTube is a case of wrong incentives.
Hmm, regardless of whether that works, that makes me wonder if it would work for the vicious cycle that is "popular things getting more popular because they are already popular".
It would probably lead to some interesting delayed feedback effects where things that are nearly popular get recommended, push out the popular, then the displaced stuff gets pushed into the nearly popular zone, etc. Perhaps some popularity + cooldown system would work.
Of course, all of this goes against the short-term incentives of maximizing engagement, so there's no way YouTube will ever implement this (even though it would probably greatly improve diversification of content).
I'm excluding the most active users, not the most popular things, so I think it would behave quite naturally. It just wouldn't have input from the obsessives. I agree it's unlikely to happen though.
Ah, yes. I misread. Yours is fundamentally simpler (that is a compliment) and at face value it indeed seems like a good solution. Outliers distort data, after all.
The system I thought you were suggesting in my misreading seems complementary to your solution, though.
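A minimal sketch of the "exclude the most active users" idea (purely illustrative; the percentile cutoff and data shapes are assumptions, not anything YouTube does):

```python
from collections import Counter
import numpy as np

def trim_hyperactive(events, percentile=99):
    """Drop watch events from the most active users before training a recommender.

    events: list of (user_id, video_id) tuples.
    Users whose event count exceeds the given activity percentile are treated
    as outliers and excluded (they could also just be down-weighted).
    """
    per_user = Counter(user for user, _ in events)
    cutoff = np.percentile(list(per_user.values()), percentile)
    return [(u, v) for u, v in events if per_user[u] <= cutoff]
```

Down-weighting rather than dropping would be gentler, but the effect is the same: the obsessives stop steering the model.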
> Tay is an instance of deliberate manipulation, but YouTube is a case of wrong incentives.
I think it near-certain that YouTube is both. Considering the obvious value of getting your video into as many people's recommendation streams as possible, there's surely a whole "SEO, but for YouTube" industry out there, working furiously to game the system.
Anecdotally (albeit not "an industry" per se), on a number of occasions I've observed... let's call them "radical activist groups", organising off-YouTube to flood a favoured video or channel with activity, with the explicit goal of causing YouTube to make it more visible to "the masses".
> why is it we are talking about conspiracy videos instead of how people spend millions of hours watching cat videos, or memes?
Because we're begging for oversight. It is clear reading comments and articles day after day. We also highlighted alcoholics, communists, druggies, terrorists, etc. in the past. The vast majority of people who, say, flew airplanes without incident no longer mattered. We take the ones that validate our assumptions, appeal to emotion, and get oversight. We, the meager, need protection from the bogeymen.
The real lesson learned: rather than actually fixing the problems with your algorithm, just manually block stuff out...
People do the same thing with chatbots. If you have a text file with a list of bad words the bot isn't allowed to say, it's not really cutting-edge AI, is it?
I hope YouTube (and Google as a whole) are applying these changes to all content. If they are specifically altering the behavior for only certain content (e.g. flat earth videos), I think that raises some serious red flags. I don't want YouTube to be the arbiters of what people watch based on their compass, which is going to have an obvious bias.
The reality is that almost all online platforms are driven by user behavior. If those user behaviors lead to relationships between videos that drive recommendations, then that should be respected. Who is YouTube to say that's bad? However, if the quality of recommendations is skewed by one group of heavy users (i.e. it is being gamed), they could consider tweaking how much weight is given to such users and apply it across the board. That is, they need to apply such changes not just to 'conspiracy theory' videos but also to things like political videos from the left (and the right), which surely experience the same skew in recommendations from a minority of users.
An aside: some comments here seem to say that YouTube should keep this content but 'stop promoting it'. Well not recommending content is in effect the same as censoring it outright. If an order of magnitude fewer users find some content, the net impact is the same.
> two years ago I found out that many conspiracies were promoted by the AI much more than truth, for instance flat earth videos were promoted ~10x more than round earth ones
Is that surprising? Why would anyone want to watch a video about how the earth is round? A conspiracy video is much more interesting, even if you don't believe the conspiracy.
Do we even have a word which is the opposite of educating?
Most people generally agree that widespread, accessible, quality education is a key part of having a democratic, free, and wealthy society.
We've started to see some online platforms doing a sort of reverse education. What is that?
It seems the very idea of education comes from Enlightenment philosophy.
I guess education is always the promotion of ideas and instructions. So in this case, we are seeing that the ideas and instructions that lead to democracy are not being taught effectively, while other ideas are being promoted more heavily.
So the question is, what ideas and instructions should we educate people on? Should the algorithms be biased on purpose towards those? Should it be ideas of enlightenment emphasizing reason and individualism rather than tradition?
In this way, should education be regulated? Like should society protect itself against improper education? Who defines improper?
YouTube is proving itself to be a great educator. Very effective at it, but it seems its teachings have been controversial. They're not the teachings that lead to our free democracy in the first place.
Personally, I think we need to protect what we achieved, and we have a duty to control education so that it reflects the ideas and instructions that worked well in the past to get us where we are in terms of democracy, freedom, wealth, human rights, etc. But I also recognise that whoever you give this power to could abuse it. Not an easy problem we're facing.
Seems like the beginning of the AI overlord trope. Where what the AI chooses to do to satisfy the goal you gave it backfires. Like enslaving us for our own good.
Question for those in this thread who want to see 'conspiracy theory' content either not recommended/promoted or censored outright: do you also want the same thing for videos that support your own ideologies? For example if you are a liberal Democrat (as I imagine most of us here are), do you question YouTube's recommendation engine when you click and follow video recommendations towards videos you hit 'Like' on? Aren't hyperactive users skewing recommendations by promoting fringe political views (e.g. socialism) in those instances?
To be clear, I don't want to turn this into a debate of left versus right or about what is fringe and what isn't. If there is a problem with the quality of recommendations on YouTube, I believe it exists more broadly and not just with 'conspiracy theory' content. My point therefore is to question whether this recent noise about YouTube recommendations is really about the algorithms or if it is just about the power dynamics between groups of people who have different opinions and want to block ideas they disagree with personally.
After all, isn't it just a vocal minority of hyperactive Internet users who are complaining about YouTube's recommendations and fomenting Internet outrage about it in the first place?
Everyone carries a set of basic assumptions about how the human system works, like what's acceptable public behavior, how the government should be run (which affects who you vote for), expectations about how certain people will act, etc.
These assumptions are based on the information we take in each day, like articles read, images viewed/scrolled past, and so on. We ARE our media diets, whether we want to admit it or not.
Our opinions are formed slowly, they change slowly, and there are several well known "bugs" in human thought patterns that make us tend to prefer an echo chamber and reject information that doesn't line up with held beliefs.
AI enables fine-tuned control of exactly what assumptions people gain and maintain. What happens if YouTube silently starts recommending PragerU videos to all the millions of high schoolers that match the "impressionable" profile? What would that do to the basic expectations of the citizenry about what the "right" kind of government and tax system is?
What if the platform was used to convince everyone that annual slavery reparations must be made in perpetuity?
No one wants to admit that ads work on them, but we all know they work in general. So how much of what you know is true, was fed to you? In a world where everything we consume runs through relatively black-box algorithms (black-box to us outsiders, anyway), how much of our knowledge and beliefs are our own?
Guess I'm getting myself down the rabbit hole here. There are no easy answers anywhere. I think it's only a matter of time until we get our own Ozymandias.
>if YouTube silently starts recommending PragerU videos to all the millions of high schoolers that match the "impressionable" profile?
As someone who very rarely aligns with any of Dennis Prager's political dogma, that would be so much better than the "Top 10 Reasons The Earth Might Actually Be Flat" type that dominates recommendations now, albeit less profitable.
We should of course expect YouTube to behave in a way that prioritizes its own financial benefit, etc.
Like you said, there are no easy answers anywhere. Understanding that all of YouTube's recommendations are effectively silent, programmatic, and optimized for profits calculated on industry-standard, necessarily shallow engagement metrics like clicks and views is a helpful start, but seemingly uncommon knowledge.
This post should also make people realize how powerful YouTube and the tech monopolies have become.
By changing their recommendation algorithms, they can change the way a whole society thinks! Just look at the recommendation from the blog: "Recommend more round-earth videos".
So far, this extreme political power has been exercised without the platforms deliberately managing it, but it would only make sense for all of those platforms to start using it to their advantage.
All the more reason to bake user control into the legal-and-technical architecture of systems like this with structures like the Platform Coop. Communities impacted by media platforms need levers of control over them.
I would like to see these centralized platforms decentralize their recommendations. I can picture an API that anyone could use, enabling third parties to design recommendation algorithms for YouTube. Right now, YouTube is in charge, because they have all of the user data, so you have to go with their algorithm. A sort of "marketplace" for recommendation algos -- some explicitly marketed towards different genres, personality styles, politics, etc.
Right now it's just a tremendous black box, although it works well most of the time, in my experience.
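To make the "marketplace" idea concrete, the plug-in surface could be as small as this (entirely hypothetical -- no such YouTube API exists, and every type and function name here is invented):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WatchEvent:
    video_id: str
    watch_seconds: int
    liked: bool

def my_recommender(history: List[WatchEvent], candidates: List[str]) -> List[str]:
    """A third-party plug-in: given a user's (consented, exported) history and a
    pool of candidate video IDs, return them ranked however this plug-in likes.
    This one simply surfaces candidates the user hasn't watched yet."""
    seen = {e.video_id for e in history}
    return [v for v in candidates if v not in seen]
```

The hard part, of course, is the data: the platform would have to expose candidate pools and per-user history to third parties in some privacy-preserving way.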
What if I want to watch conspiracy theories and I want them to be recommended to me?
How does YouTube decide what counts as a conspiracy anyway? And how do they decide that I don't want to watch them?
Doesn’t this come back to the capability to think critically and act with self control?
The internet is full of bullshit, so is the rest of the world.
If you walk down the street and do as every sign, store, person or organisation desires of you, you’re going to run out of time and money before you make it more than a couple of blocks.
You will live at the whims of those who are determined to influence you.
Why would we ban conspiracy theory videos when we have churches in every town? Some ideas require proof and others don't?
We as citizens are responsible for pushing for political regulation of these dangerous consequences (AI's effects). It seems that politicians won't do it because they have an interest in all these nonsense theories; it makes it easy for them to introduce all kinds of sneaky ideas that serve their agenda. They don't want us to think clearly about the power they hold through the technology and services being deployed continuously. They won't do it unless we coerce them to.
The less shitty recommendations YT makes, the more users move to other shitty sites with shitty recommendations.
That's why the problem (which doesn't exist) can't be fixed that way (and that is also why these philosopher kings tend to be globalists and statists -- it's the only way to make sure no citizen is left behind, regardless of their opinion).
What if, instead of playing a pre-roll ad, they started playing some "warning" video saying that some of the videos users will see (not necessarily the video after the ad) will sometimes try to deceive and/or confuse them? Maybe citing some example titles/videos. And of course, the "warning" video doesn't have to be played every time; just occasionally. This might generate some kind of awareness.
The AI could work by selecting the right "warning" video for a given audience. Telling this to a 65 y/o is not the same as saying it to a 20 y/o.
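As a toy sketch of what "the right warning for a given audience" might mean (the age buckets and clip names are made up for illustration):

```python
# Hypothetical mapping from a coarse audience segment to a warning clip.
WARNING_CLIPS = {
    "teen": "warning_clip_media_literacy_basics",
    "adult": "warning_clip_check_sources",
    "senior": "warning_clip_common_online_scams",
}

def pick_warning_clip(viewer_age: int) -> str:
    """Choose a pre-roll warning clip appropriate to the viewer's age bracket."""
    if viewer_age < 25:
        return WARNING_CLIPS["teen"]
    if viewer_age < 60:
        return WARNING_CLIPS["adult"]
    return WARNING_CLIPS["senior"]
```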
I do find YouTube's recommendation algorithm prone to low-quality content. Sometimes I click one or two clickbait videos out of curiosity, and as soon as I return to the homepage I find similar videos starting to fill my feed. I have to manually unlike them to force the algorithm back onto the right track, or there will only be more.
Now I simply disable YouTube’s watch and search history and almost never care about what’s on the homepage.
"Platforms that use AIs often get biased by tiny groups of hyper-active users."
I think nearly everything in life gets biased by tiny groups of hyper-active users. For example, one story about a razor blade in an apple, published on the news for a week, leads many to think trick-or-treating is now a dangerous activity.
For me, a key thing is to limit exposure to ALL media. It's ALL biased in one way or another. Learn to trust your actual experience as much or more than what you read and hear from others.