Whatever model the AI is fed is going to contain an enormous number of assumptions, many of which we have only a limited ability to scientifically test and validate before using them as inputs.
If the inputs are faulty, it doesn't matter how smart the AI is; it's going to produce inaccurate results.
Can any AI be trusted outside of its realm of data? I mean, it is only a product of the data it takes in. Plus it isn't really, finger quotes, "AI". It's just a large data library with some neat query language, where it tries to assemble the best information not by choice but by probability.
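To make the "by probability, not by choice" point concrete, here is a toy sketch: the counts below are completely made up (a stand-in for whatever statistics a trained model has absorbed), and the function simply samples the next word in proportion to those counts — no choosing, just weighted randomness:

```python
import random

# Hypothetical bigram counts, standing in for a trained model's statistics.
# These numbers are invented for illustration only.
counts = {
    "the": {"cat": 3, "dog": 2, "best": 5},
    "best": {"information": 4, "guess": 1},
}

def next_word(word, rng=random):
    """Pick the next word weighted by frequency -- probability, not choice."""
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]
```

Real language models do something far more elaborate, but the selection step at the end is still this kind of weighted draw.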
Real AI would make choices not on probability but in accordance with self-preservation, emotions, and experience. It would also have the ability to re-evaluate information and all of the above.
There arguably is for normal people (insert extremely complicated decision theory here), but for AIs it only holds to the extent that they need help from humans and can't coerce or trick them instead.
"AI" is not currently autonomous; its algorithms that do exactly what their creators tell them to do. They run on binary computers that only do exactly as they are told.
Train an AI that is judged on its ability to teach humans. I don't think we're ever going to really trust AI until it can explain itself clearly to humans. And that's pretty close to being able to teach.
AI works the way human intuition does. Try explaining professional intuition to someone; it's not going to be a convincing argument unless they're willing to trust your expert gut feeling.
This is like asking, IMO, "how can humans be trusted to pilot planes, when my barista can't ride a bike?"