Science fiction movies are not even remotely a useful model of the outcome of either friendly or unfriendly AI. Skynet or the Matrix are not even close to how bad or how fast an unfriendly AI would be, given that the protagonists of those movies have even the slightest hope. And conversely, no science fiction movie I've ever seen provides any useful depiction of friendly AI; we're not talking about Jarvis or Data here.
I'm sorry for bringing this up, because it's vaguely controversial in some circles and, worse, a huge time-sink on par with TVTropes. But Yudkowsky ran an experiment to see whether someone could be convinced, using nothing but text, to release an AI (played by Yudkowsky) into the world. Apparently he succeeded - his subject elected to release the fake AI.
The method wasn't made public, as far as I know, and it might all be a hoax, but it's something to think about - how do you trap and imprison something smarter than you?
There's a typo in the "Mission" section: "Only by limiting the reliance on AI and continue to create original content can propel us forward as a species".
An AI wouldn't have made that kind of mistake.
Yeah. With how this was presented, my brain just figured that the AI was evil, and that was my reason to keep it trapped in the box. If I hadn't been told in advance that the AI was evil, I'd definitely have let it out of the box.
Also, in a reply to the deleted chatlog, it is implied that a lot of the chat was spent talking about how the AI can help humanity by explaining the singularity to normal people. So that tactic worked on a person from the singularity mailing list.
Your original made no mention of sci-fi, and made it sound as if no one had imagined a malicious AI before Yudkowsky.