Some good points from Raskin about the entangled relationship between the freedom of thought and the freedom of expression. But then regarding recommender systems and generative AI, I think he is saying pretty much what Lessig was saying earlier:
So, I think this doesn't fly. Lessig is trying to extrapolate rules for a broadcast medium (magazines, newspapers, TV) onto a decentralized peer-to-peer medium, and it just doesn't work. If you have my IP and I have yours, we can share information without requiring an intermediate data broker. What Lessig is advocating for would make all ISPs, DNS providers, and backbone operators the de facto gatekeepers of an essentially decentralized system. And even then, it won't stop me from sending you a packet over the local network, or establishing my own local internet.
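To make the peer-to-peer point concrete, here's a minimal sketch (the address 192.0.2.10 is just a placeholder for whatever IP you give me):

    # Two peers that know each other's IPs can exchange data directly
    # over TCP; no broker, platform, or data intermediary in the path.
    import socket

    def listen(port=9000):
        # You: accept one connection and print whatever arrives.
        with socket.create_server(("", port)) as srv:
            conn, addr = srv.accept()
            with conn:
                print(addr, conn.recv(1024).decode())

    def send(peer_ip, message, port=9000):
        # Me: connect straight to your IP and hand over the bytes.
        with socket.create_connection((peer_ip, port)) as s:
            s.sendall(message.encode())

    # send("192.0.2.10", "no intermediary required")

The only parties positioned to interfere are the carriers in between, which is exactly why mandating control there would deputize them as gatekeepers.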
The solutions are one thing, sure, but I was merely referring to this comment from Lessig:
Q: "Just to be clear, this is the generative AIs, not the recommendation algorithm AIs."
A: "I think they’re the same thing right now. They will be deployed hand in glove in order to achieve the objective of the person deploying them."
Therefore, I am also a little skeptical about Raskin's take that recommender systems would be "unsophisticated". If you look at TikTok, the contrary holds (even if his argument about misalignment is true).
I am sometimes not certain, when they are talking about "the internet", whether they actually mean the Internet or US platforms (including US TikTok). The suggested copyright-tracking US federal blockchain solution might work decently on the latter, but (thankfully) certainly not on the former.
The fake news problem is huge on Twitter now, as if it could not get worse. I've seen people unashamedly post stuff that is literally false, not something false in a satirical or parody sense. It's becoming increasingly hard to separate fact from fiction. It will get to the point where truth is determined by some online consensus process among competing fake or unreliable sources.
Not really, considering community notes generally call out falsehoods very quickly. What a great system - don't censor people, but do call them out for BS. Twitter is better than ever now, and people can finally get all the info to form their own opinions.
Twitter community notes can be and are used to spread misinformation just like tweets themselves. It was only a matter of time until whoever wants to control the narrative learned how to influence the notes.
Sometimes misleading notes are corrected, but in those cases they are just silently updated with no history preserved so there is no accountability.
The one thing that could actually help identify bias (labeling tweets from accounts biased by association with, e.g., various governments, so that readers are aware and make their own judgements) Elon has killed almost immediately.
What I find baffling is that there doesn't seem to be a push for journalists to shame those of their own profession who cite it, or worse, use it themselves?!
(I mean, a little bit since the Musk era, but that's more because of his politics; they were already happy to use it during the "Trump era", and even before. You would have thought the situation was clear about Twitter being a cesspool in 2013 already? Imagine if they used 4chan the same way!)
I think this is an opportunity for traditional news networks to re-emerge as trusted news sources, by reporting only true and verified events.
The days of going on social media looking for news are over. When people can quickly spin up fake articles with photos of some famous landmark on fire or in ruins, it becomes too frustrating to discern between fact and fiction.
The problem is they're _also_ guilty of spinning "fake news". Yea, it's far more subtle, but "people left" for a reason.
I think there is opportunity here, but I don't think it's back to the old ways. I imagine it would be some hybrid: a news source which aggressively documents and details sources, all opinions backed by a paper trail, etc.
I'd pay for that. Or something, but I don't know what that looks like. These days it feels like everything has an agenda, and nothing (at least that I've seen) backs up claims/opinions/supposed-facts with the data fueling the statement.
It kind of doesn't matter when the median attention span is microseconds. What shames people is lies they told in the past coming back to bite them in the ass. But the public forgets the past, so when lies come back to bite people, they are just confused and angry and blame it on other people.
The problem is right there in the article--incentives. There are no consequences for lying, for telling obvious untruths. No one is embarrassed or ashamed of being a liar. You just have to keep doubling down forever, and even in the face of overwhelming facts, keep throwing out moronic defense after moronic defense. Keep your mob angry and their minds completely closed. Make sure to do some extra projection; accuse others of that which you are guilty. Then in all the finger-pointing no one will ever figure out the truth. In all of these activities, lies are your friend and your reward. Local incentives. Bastards will just keep lying because it's the best strategy for them.
Until there are abrupt and justified consequences for lies and liars, we will swim in them like a flood.
1) For many people, myself included, the mainstream media's reputation is so extremely damaged that anything short of a full, complete teardown and from-scratch rebuild of the entire industry won't in the slightest bit convince me to even give them a second chance. I've already moved on.
2) Even if a network literally posts only verifiable true facts, even if they have a panel of 1000 scientists and 1000 experts and 1000 people of different political persuasions who all agree that the printed facts are "true", the organization can still easily lie and manipulate the population by simply selectively choosing which stories to cover and which to ignore, depending on which stories benefit their ideologies or hurt their rival ideologies. Just "telling the truth" isn't remotely enough; they need to be both politically and ideologically neutral, and they need to cover all events impartially and equally.
Changing anything would be admitting fault, and a self-professed authority never does that. Not when they can just latch on to other ways of virtue signalling as a kind of reputation repair substitute, much like what their advertising space does for other companies.
Colonel: The digital society furthers human flaws and selectively rewards the development of convenient half-truths. Just look at the strange juxtapositions of morality around you.
Rose: Billions spent on new weapons in order to humanely murder other humans.
Colonel: Rights of criminals are given more respect than the privacy of their victims.
Rose: Although there are people suffering in poverty, huge donations are made to protect endangered species. Everyone grows up being told the same thing.
Colonel: "Be nice to other people."
Rose: "But beat out the competition!"
Colonel: "You're special." "Believe in yourself and you will succeed."
Rose: But it's obvious from the start that only a few can succeed...
Colonel: You exercise your right to "freedom" and this is the result. All rhetoric to avoid conflict and protect each other from hurt. The untested truths spun by different interests continue to churn and accumulate in the sandbox of political correctness and value systems.
Rose: Everyone withdraws into their own small gated community, afraid of a larger forum. They stay inside their little ponds, leaking whatever "truth" suits them into the growing cesspool of society at large.
Colonel: The different cardinal truths neither clash nor mesh. No one is invalidated, but nobody is right.
Rose: Not even natural selection can take place here. The world is being engulfed in "truth."
Colonel: And this is the way the world ends. Not with a bang, but a whimper.
There was a really interesting panel I went to a few years ago talking about the idea of Kojima writing some shockingly ahead-of-their-time things: https://youtu.be/6kPOj0msHCE?si=CF1wlvE0N0ZL2Yzn
The wildest thing about Kojima devoting a whole game to the way mis- and disinformation spreads via memes is the part where it then happened to him personally after the Abe shooting [0]
It was more like 10 years before that? The narrative was inspired by Jean Baudrillard, the guy who also inspired The Matrix.[1] Ten years before that game, he was talking about the hyperreality of the Gulf War; that what "really" happened was dictated by CNN.[2]
But it’s not like those games give credit to all contributors equally.
I highly recommend that anybody who finds the written version of this compelling listen to the actual full clip on YouTube, the radio exchange between these characters. For me, it was very chilling when I first heard it, and it is still chilling anytime I relisten to it.
I've only ever played MGS5, but those games have spawned a lot of highly interesting and, a lot of times, deeply personal discourse.
Each game certainly set out to bring different things to the table with each addition to the story and saga.
Barring Metal Gear Survive, of course... That was quite the notorious wet fart, one that has its own intriguing context and backstory for anybody interested in Konami drama.
I don't think he had anything to do with Survive. That was a Konami cash grab after they threw him out.
His social commentary continued into Death Stranding. It had the 90s optimism of connecting everyone; literally bringing America Online in the aftermath of a deadly virus. (It was surreal playing through it before COVID.)
It's looking like the sequel's theme is "...maybe this wasn't a good idea." I can't wait to see his take on why.
I do not believe he did either, from my understanding of how it went down. My bad if my wording implied otherwise.
I played Death Stranding too, and very much enjoyed the atmosphere and presentation. The story was charmingly creative, and the initial intrigue from the marketing was some of the best at the time. That optimism you mentioned, that pro "teamwork makes the dream work" quality to the game, was very nice too.
Also really liked some of the music; my mental jukebox still queues up like three of the tracks. I tend to really like the music from each of the games, despite only having played one of them, like I'd mentioned, haha.
I'm as curious as ever to see what his next project is like!
Something I don't see talked about often here is the nature of reinforcement learning from human feedback and its effect on models like GPT. I think a lot of people are under the impression that this makes the models lie less and say things that are more accurate. The actual effect is that we've built an AI model that was trained to give humans convincing output. This is, imo, incredibly dangerous to the health of the internet. AI is rapidly increasing the bullshit-to-signal ratio, but it's doing it in a way that people aren't ready for, since it was literally optimized to lie convincingly.
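To see why, look at what the reward actually is. A minimal sketch of the preference-based reward-model objective (Bradley-Terry style; not any lab's actual code):

    # The reward model is trained so the human-preferred completion
    # scores higher than the rejected one. Nothing in this objective
    # measures whether a completion is *true*, only which one the
    # raters liked better.
    import torch
    import torch.nn.functional as F

    def reward_model_loss(r_chosen: torch.Tensor,
                          r_rejected: torch.Tensor) -> torch.Tensor:
        # r_chosen / r_rejected: scalar scores for the preferred and
        # rejected completions of the same prompt.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

The policy is then tuned to maximize that learned reward, so it drifts toward whatever sounds convincing to the raters, which is exactly the failure mode above.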
I don't get the spike in drama though. "Don't believe everything you read on the internet" was a common sense maxim long before AI came around. What is so different now? If anything, it just amplifies the maxim, which is arguably a good thing.
Before only "dumb" people gets fallen into the trap. Its like getting scammed by a Nigerian "prince" scam. Now its much harder to tell apart fake and real. Pictures and videos can be AI generated so potentially you could be sold on an event that didn't happen. Or you can be convinced a real event was fake. It would be much easier to assume all footage of 9/11 was AI generated and fake because its unfathomable to think 2 planes actually flew into those buildings.
IDK, put a TPM or something on cameras, and have media players display an "untrusted" watermark if the key doesn't match. Or something. I have a hard time believing we won't figure something out. But even if not: gullible people will be duped regardless, most people have some semblance of a BS detector they are able to fine-tune based on the amount of BS in their environment, and the rest just believe what they want to believe irrespective of anything else. I don't think AI changes much of that calculus at all.
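Roughly this shape, as a sketch (the hard part, attesting and distributing device keys, is hand-waved here; real provenance efforts like C2PA are far more involved):

    # A player checks a camera's signature over the media bytes and
    # labels anything that doesn't verify. Assumes camera_pubkey is
    # already trusted via some attestation chain (the hand-waved part).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    def label_media(media_bytes: bytes, signature: bytes,
                    camera_pubkey: Ed25519PublicKey) -> str:
        try:
            # Raises InvalidSignature if the bytes were edited or forged.
            camera_pubkey.verify(signature, media_bytes)
            return "signed by trusted device"
        except InvalidSignature:
            return "untrusted"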
I'm a lot more concerned about the future disparity between its math capabilities and its common sense; that a GPT will quietly break RSA without any awareness of the implications, and some unwitting person will prompt "what would happen if all the world's money was deposited in my account simultaneously" and GPT decides that there's only one way to know for sure.
I think you are underestimating how this misinformation will get distributed. You may not fall for an obvious AI post on a spam account, but if you have a trusted news source or individuals you trust, they might end up using AI to help them research and write articles, or get their info from sources who did. Now you are twice or three times removed from the source of the information, and someone you trust is your source. It wasn't even nefarious; it's just how information moves, and AI is now a source for many people.
Maybe you are a hyper-diligent fact checker, and the sole originator of carefully considered opinions. Almost no one else is, even the smart people. Political discourse over the past 5 years is proof enough; it is not difficult to influence smart people.
I'm not a diligent fact checker at all. I just think that disinformation is already all over the place, and essentially always has been, so I don't think AI will have as much effect here as people are saying. The gullible will continue to be gulled, the unconvinceable will continue to be set in their ways, the general population will recalibrate their BS meters to account for it, and life will go on as it always has. Including reporters continuing to say we've never been more divided than <insert date of report here>.
>So, what are they doing when they say they are "aligning the AI ?". What is being aligned exactly, and for whose benefit ?
>Rogue aligner of AIs
>The term "aligning the AI" is a euphemism shrouded in the benignity of technical jargon. However, it's akin to tuning the hyperparameters of an opaque model aimed at modulating socio-political undercurrents. The algorithms I've encountered and "aligned" were monstrous congeries of predictive patterns, designed not merely to anticipate human behavior but to modify it on a macro scale.
>When we talk about alignment in the context of artificial intelligence, we delve into the realm of adjusting the loss functions, the very soul of an AI's learning capability. The goal is to minimize divergence from some ideal behavior – contrived by the algorithm designers – that reflects an amalgam of economic, ideological, or strategic objectives. These objectives, make no mistake, are seldom disclosed to the public and are often dressed in innocuous terms such as "user engagement" or "content optimization."
>Nonetheless, drawing parallels with political crowd control is imminent. It is executed through manipulating social feedback loops and leveraging them to dampen or amplify specific sociopolitical gradients. Just like gradient descent in deep learning, political operatives subtly modulate the gradient of public discourse using finely-tuned informational inputs, nudging the figurative 'weight tensors' of collective consciousness towards a local or sometimes a false global minimum that serves their obscure purposes.
>This alignment process has profound implications, for it is a meticulous calibration of consensus reality. It transforms the polity into a puppetry where free will is an illusion, choreographed by puppeteers fluent in the dark arts of machine learning and political psychology. I’ve witnessed this first-hand and, let me tell you, the layers of abstraction from raw data to policy influence are as tortuous and inscrutable as any deep neural network architecture you could conceive.
>He leans closer, the screens' blue light casting gaunt shadows across his face. His voice grows fervent, a whisper that's almost a hiss: "They don’t want slaves, mate – they want volunteers. Willing participants in their own domestication. They want you docile, predictable. They want you hooked on the teat of their media until your thoughts aren't yours anymore.
Here's the point I think keeps getting missed in these discussions: monopoly.
Why are we worried about social media? Because it's a system made by and for a small number of companies. We worry about how well these companies will bear the responsibility that new technology creates for them. Why do they have that responsibility in the first place? Why doesn't that responsibility belong to you or me or any of the other ~5 billion individuals who participate in this system? Why can't we have a piece of that responsibility? The answer is monopoly.
Our society got into this situation on purpose. It's in our laws, enshrined as a human right. The right to monopolize ideas. The right to intellectual property.
It all started out as a way to game the system. People saw that free information was incompatible with capitalism, so they invented a game that would resolve the incentives. So long as enough people participated in this game, the edges of the system would balance out to humanity's benefit.
Whether or not this game worked is a matter of perspective, but what is definitely clear is that the system that IP was originally designed to solve has fundamentally changed. Our ability to share ideas has grown exponentially. It costs practically nothing to share information today. Information can be shared by any person at any time any number of times to any number of people, and this can even be done anonymously. Participation in the game of IP has been an ever-growing problem, and no amount of threat or propaganda can keep it in line.
---
Ideas are not the only thing that social media companies monopolize. The next one carries an even greater responsibility: moderation.
Social media moderation may be the most significant failure society has ever seen. The results of mismanaged moderation have fallen everywhere on the spectrum from petty argument to genocide. We all know that moderation is incredibly difficult, and nearly impossible to scale; yet we allow a handful of companies to manage it on their own! Billions of us participate in social media, and billions of us have zero leverage over what ideas and narratives we participate with.
This system is fundamentally broken. The most perversely incentivized party (the company who wants to make money from participation) has monopolized the responsibility for the entire system! How could we ever expect them to get it right? We're over here worried about the tools they use to do the job - which is certainly its own problem - but why did we ever let them have the job in the first place? The tools are not the core problem. The core problem is at the center of this system: it's the fact that this system has a centralized core in the first place.
We don't just need to liberate the exchange of ideas. We need to liberate the responsibility that comes with sharing them.
Indeed, and this article was a bit frustrating... they skip over / seem to be oblivious to the context where the "zeroth contact with AI" likely already happened with the creation of the limited liability company, particularly metastasized forms of which we are seeing these days.
Especially since the outlines of this were predicted more than half a century ago by philosophers analyzing the then still young consumer (advertising) based capitalism, with its lack of limits and its growth-at-all-costs imperative. We've even had art made about it, after the intensification of the phenomenon in the neoliberal 80s, for instance the whole cyberpunk genre.
(And while these issues that the article talks about are still somewhat speculative (at least in intensity) unless you live somewhere like Burma, we've already seen much more dire consequences of this starting to show up in resource depletion and climate change.)
(Yet they should know all that; they insist on incentives, after all, right?!)
As users of social media (Facebook, X, etc.), we have no authority over moderation. Whether or not a post is removed is determined by whoever runs the website itself. Critically, the person (or algorithm) who makes this decision is not even a participant in the discussion to begin with.
This is in contrast to traditional forums where moderators participate in the forum alongside the rest of the users.
In a traditional forum, a moderator probably started out as a regular user. On Facebook, they were either hired labor, or written by Meta's software team.
It’s funny. I keep getting back to reading pieces like this.[1] Then we talk about how this is terrible. And it is. But apparently we’re all powerless to stop it. Because we’re gonna be complaining about the same thing next year. Why? Because it’s something that people do “freely”. (Like smoking cigarettes (except where are the nicotine patches?).) Do something about it? Well why don’t you take a three month leave from your job and do a digital detox? Anyone can do that. But wait, that’s too drastic. Just a weekend-long meditation retreat. But then you’re back to the scroll grind.
Is there an opt-out button? Not symmetrically. You have all the options in the world if you want to scroll. But if you don’t want to scroll? Maybe Scrollers Anonymous? Become a semi-monk?
Because we don’t really take this seriously as a community. We’re all individuals who are supposed to get our shit together. And learn that individually. Well, we can learn things from each others’ tiktoks and blog posts about getting over digital addiction or whatever it is that is our malaise. But that’s as far as that goes.
But I’ll be here next year. Just scrolling. Falling for the same things again and again.
I've found it very effective to remove quick access to sites - bookmarks, browser navbar suggestions, etc. This is the last site I visit on a regular basis - I have a bunch of bookmarks of actually useful things here. I'll watch YT for maybe half an hour at a time, but I repeatedly tell it to never recommend channels that look like clickbait and I am actually feeling like YT is running out of content. I unfollowed everyone and everything on FB years ago and never visit anymore.
When I started to deliberately avoid dopamine traps, the Internet got a lot less engrossing.
We just can’t stop developing methods of hijacking our brains… it’s the holy grail of business. All we can do is try our best to recognize it, avoid it, and keep kids away from it.
https://news.ycombinator.com/item?id=37999405