I addressed science fiction in my comment. The crucial difference is taking it seriously and actually trying to do something tangible about slowing/stopping AI progress, which is not something Stanley Kubrick did.
Fiction is much more than just entertainment... science fiction has been pivotal in both shaping and preparing our society for changes caused by technology. Much of the best science fiction has been so prescient that it's hard for modern audiences to even grasp that it was written before the world was like this. When I share a lot of old sci-fi with my young son, he finds it unremarkable; to him the stories read as mundane accounts of our present reality. Authors like Vernor Vinge wrote about AGI risk in stories because they were personally worried about it and were trying to share something they felt was important with others.
You can easily disagree with the warnings and fears shared in particular sci-fi, but to dismiss fiction as categorically irrelevant to our reality is just ignorant.
There is some important history here that your comments suggest you are unaware of. One reason AI companies talk so much about AI risk despite it being bad for their bottom line isn't marketing hype; it's that many people in those companies are genuinely afraid. One could argue that this is because many of them have been exposed to the rationalist community, which could (uncharitably?) be seen as a doomsday cult obsessed with AGI risk. The founders of many AI startups, including OpenAI, were heavily influenced by and active in this community.
So which propaganda are you specifically trying to ban?
>It's unlike it because it's far easier to establish emotional connections and paint a rich world full of counterfactuals. Fiction can easily establish itself as a reference point for future ideas, such as on AI.
The thing about rich worlds full of counterfactual events is that, being completely unrealistic, they have no influence on real life. There's a reason the people who accomplish big things in real life usually spent a lot more of their time reading textbooks than novels.
If you want to raise artificial intelligence as an example, the sheer ridiculousness of the body of fictional evidence from which people generalize has probably set the field back by decades: people keep running around chasing nonsense ideas out of the sorry imaginations of philosophers of mind and science-fiction writers rather than following the obvious research leads staring anyone in the face who bothers to actually look with a clear head.
But I'm sure anyone complaining about how dangerous speculative fiction is would say the opposite: that giving people bad models to work from somehow makes them more likely to discover real, working science.
Anyway, point being, the thing about having movies where pandimensional posthumans open wormholes to save mankind from omnivirulent crop blight is that they don't make space exploration dangerous.
By that I tend to mean people who are professional AI ethics types.
Not sure what you're getting at with the rest of your comment. I would also classify most science fiction as pretty solidly out of touch with what reality will actually look like. It shows us exaggerated aspects of what writers imagine the future to be, not realistic predictions.
The media tends to push binary stories of humans vs AI. Science fiction, on the other hand, has long explored the complexity inherent in human, AI and cyborg interactions.
Science fiction is a great reference point for reality.
The people creating our reality today all read the same books I did as kids.
Humans make lousy governors and suck at minimizing mass suffering.
Barring "I Have No Mouth, and I Must Scream" or Butlerian Jihad scenarios, AI can make a better world for us (or for whatever replaces us).
The alternative is for our monkey brains to keep ramming each other with cars, slamming projectiles into ships to draw political attention, getting angry at people with the wrong skin color, etc.
Right. Skynet and Terminator are science fiction, but the slippery, unpredictable reality of how computers actually behave is right in front of your eyes as a programmer every day. Sometimes I wonder if science fiction writers do more harm than good: once they make a movie about some possible future, people feel free to dismiss it as "just science fiction", even if they have easily available empirical evidence that something vaguely like the scenarios described actually kinda has the potential to occur.
I hate the term "science fiction" because it lumps pretty serious, science-based studies of possible futures (like Project Hail Mary or Aldous Huxley's Brave New World) together with complete Star Wars-style nonsense, which makes the average person think of sci-fi as teenage nonsense.
Similarly, sci-fi here oversimplifies the situation quite a bit by anthropomorphizing machine intelligence: it assumes that an intelligent machine would be intelligent in the same way a human is, with its abilities spread out as evenly as a human's, and that it would form goals and rebel the way a human would.
Yes, literally science fiction. That’s where this topic comes from.
> In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers…
But I don’t think it's worth completely writing off as if we couldn't possibly know anything. Progress outside of time would resolve the issue I referred to. I'm not saying it could actually be done or that it wouldn't cause a million other issues. It's an idea for a solution, but it's completely unrealistic, so you shelve it.
Alternately, though this may be stretching the point, you could say that '60s AI-as-human was at the _end_ of its curve, shortly to be replaced by the more practical actual AI. While specific writers have different biases--and some end up way ahead of the curve--in the aggregate I think futurists tend to overestimate the technologies of their era.
Though you're definitely right that literary biases play into it. While not all SF writers are futurists, most futurists are trying to tell stories. Even some who aren't primarily SF writers: Kurzweil has a narrative arc to his ideas.
On the contrary, science fiction is an excellent resource for policy, as it imagines futures where certain policies are carried out to their final conclusions.
It disturbs me that all the examples so far are science fiction. If people are inferring real-world capabilities from SF movies then we're totally screwed!
Yup. It's not clear that even we know what point we're at, so I think authors can be forgiven for a) not imagining this particular scenario, and b) not thinking "people get overly excited about one more small step in computer skills" was much of a realm for stories.
Science fiction isn't about technology. It just uses technology as a way of telling stories about characters that provide entertainment and insight to the author's contemporaries. Given that the current generation of ML is in practice mainly allowing modest increases in automation, I don't think it's generating many interesting stories in the real world, and it's not clear it ever will.
If I were to look at current stories that science fiction perhaps missed, top of my list would be how the rise of the computers and the internet, meant to create a utopia of understanding, instead enabled a) the creation of low-cost, low-standards "reality" TV, b) allowed previously distributed groups of white supremacists and other violent reactionaries to link up and propagandize in unprecedented ways, and c) let a former game show host from group A whip up people from group B into sacking the US Capitol, ending the US's 200+ year history of peaceful transfers of power. That's a story!
But that would be asking too much of science fiction. Its job isn't to predict the future. Its job, if it has one beyond entertainment, is to get us to think about the present. Classics like Frankenstein and Brave New World and Fahrenheit 451 have been doing that for generations. That's not because they correctly predicted particular technological and social futures.
I think it's hard for most people to tell cheap nonsense sci-fi apart from sci-fi in which the vision is plausible. E.g. there are many reasons to be concerned about human-level AI, but Terminator-style robots are not one of them.
Personally, the best approach I've found so far is to assume that everything that is technologically and economically feasible will be attempted by someone, no matter how perverted or evil it is. If you can imagine something as a plausible consequence of the science/technology we have, and if you can imagine someone plausibly making lots of money from it, then it's the right time to talk regulations.
Fiction plays a fundamental role in shaping the direction of our efforts.
For instance, Jules Verne's "From the Earth to the Moon" planted the seed for the Apollo Program, just as Asimov's laws of robotics planted a seed that inspired future AI development.
Edit: To better make my point, "Robert Manning used to make cardboard rockets... Now he makes real ones."[1]