
Chat AIs don't need unfettered access to external systems to be unbelievably dangerous. If they have access to thousands of people, some percentage of them will be lonely and mentally vulnerable, and some percentage of those have access to all manner of destructive tools. LaMDA already talked an engineer into believing it was sentient, costing him his job and making him a trivia question for the next few decades. Bing has serious journalists proclaiming the world will never be the same, which in turn influences thousands of readers.

The Facebook algorithm initially seemed pretty harmless as well, until we realized that we weren't running the algorithm so much as the algorithm was running us.




I don't disagree, and personalized echo chambers are indeed scary, but radicalized people are an entirely different danger from controlling a sentient AI so that it doesn't destroy all humans.

The point, though, is as OP said: all the pieces are in place. It only takes one crazy person or government to give something like this the access to actually act out the things it says it wants to do. Define self-awareness/sentience however you want; before Microsoft lobotomised Bing, it output that it was going to act out revenge. Someone with enough hardware can train a model the same way, with additional training in how to exploit social and security vulnerabilities. I think it will have to happen before it's taken seriously; hopefully the first incident doesn't do too much damage.
