
Yes, and when humans make bad guesses it's often seen as funny or nothing out of the ordinary. When AI makes bad guesses, it will be seen as a failure to meet some standard, but with very few people understanding how to fix it. I'm not sure how "allowable" mistakes in the interest of AI learning will be tolerated for AI services used for real-world purposes.

"This Bot is only 6 months old, give him a break". But will people give the Bot a break? Either way, blaming AI will be a popular way to pass the buck.




Intelligent people make mistakes, sure. But AI has the potential to mislead at greater scales. And I really don't like when people try to trivialize this issue. Not all incorrect AI answers are/will be "obviously absurd." Some may be incorrect at a more subtle level. It might be so subtle as to require domain expertise to correct. Are you telling me that this isn't going to happen?

People carry so much confidence about this technology. But just a few weeks ago, Google was telling me to drink urine to fix my kidney stones. And this was deemed to be mature enough to put into production? Oh, but what you're saying is that it will get the harder questions correct?


AI telling you to eat rocks might be funny. AI telling you to mix bleach and vinegar (which produces chlorine gas) is not.

With enough reach, any authoritative-sounding answer will find enough people who take it at face value.

Computer mistakes have been costly in the past (Excel and Greece?). AI encourages a lack of scrutiny, despite all the lip service paid to marking answers as "potentially" wrong. It's not too hard to imagine a future where AI is used to make decisions that no one will try to understand or verify.


Not necessarily. You overestimate what people try to pass off as AI.

Where scale really comes into play and gets scary is when bots can vote on other bots. Right now, most wrong human answers are pretty rapidly downvoted or corrected. But the supply of humans to make incorrect posts is limited and relatively balanced by the supply of humans to downvote them. Subtle errors in AI posts could become so widespread that it's impossible to counter them effectively.

That said, with enough training and training data, the line between "plausible-sounding" and "accurate" gets thinner and thinner. This will be especially true as these AI models refine their results based on user interactions. Being right for the wrong reasons becomes less and less relevant as accuracy goes up, and at a certain point, it might get so good that no one cares.

Maybe human intelligence is more like that than we're willing to admit ;)


I'm afraid people are going to start posting information from all the AI tools without human checks for accuracy and pretend they're useful in discussions.

I think of the story from Lem's Cyberiad about the eight-story-high "thinking machine" that insisted 2+2=5, got angry like the Bing chatbot when it was told it was wrong, then broke loose from its foundations and chased Trurl and Klapaucius into the mountains.

People will learn the hard way there is no market for machines that get the wrong answers. There are plenty of places where people will accept one kind of imperfection or another and that’s fine, but when it comes to an accounting bot that screws up your taxes it is not fine.

(Funny I have seen a few chatbots that claim they are busy as soon as you tell them they are wrong about something and I wonder if that is because they’ve been trained on many examples where things really went south when somebody called out somebody’s mistake.)


Yes. This is applicable to most automation, machine learning and AI. These technologies are generally really bad at sense checking their results.

At the end of the day, they emit the pattern that is mathematically the "most correct", even if it's obviously not the right answer. This is the Achilles' heel of these technologies, one that is very hard to overcome, and it's why real applications generally still have a human in the loop.
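That "mathematically most correct" point can be made concrete with a toy sketch. The scores below are made-up numbers, not from any real model: a decoder just takes the highest-probability continuation, and if the statistically common answer is factually wrong, that's what comes out.

```python
import math

# Toy next-token scores (logits) a model might assign after the prompt
# "The capital of Australia is" -- purely illustrative numbers.
logits = {"Sydney": 3.2, "Canberra": 2.9, "Melbourne": 1.1}

def softmax(scores):
    # Convert raw scores into a probability distribution.
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)

# Greedy decoding simply emits the highest-probability token...
answer = max(probs, key=probs.get)
# ...which here is the statistically common but factually wrong "Sydney";
# nothing in the math checks the answer against the world.
```

The model has no notion of "correct", only "most probable given the training data", which is exactly why a human in the loop is still needed.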

See Facebook’s “chatbot” experiment, which was called off after they realized the only way to make it work in practice was to have an army of humans behind the scenes sense-checking answers. The grand AI engine that takes over for humans is still a pipe dream for most applications.

Even all these “neutrality flags” are generally nothing more than a keyword search. Put words like COVID-19 or Coronavirus in your post, and Medium puts a banner at the top saying the article hasn’t been fact-checked.
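The keyword-search flagging described above amounts to something like the sketch below. The trigger list and function are assumptions for illustration, not Medium's actual implementation:

```python
# Hypothetical trigger list; a real system would have many more terms.
TRIGGER_KEYWORDS = {"covid-19", "coronavirus"}

def needs_banner(post_text: str) -> bool:
    # A plain case-insensitive substring scan -- no semantic analysis,
    # no check of what the post actually claims.
    text = post_text.lower()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)

print(needs_banner("My thoughts on COVID-19 vaccines"))  # True
print(needs_banner("My thoughts on vaccine policy"))     # False
```

Note the second post could contain exactly the same misinformation and sail through untouched, which is the whole problem with keyword matching standing in for fact-checking.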


People can be wrong and can sound quite plausible too.

The key is to verify... and that's true for AI and people alike, though for sure that's not something people are used to doing, sadly.


A better question is: can you tell when a person or a computer is wrong in the first place...

Developing your own intuition for the world will help you better filter bad gossip and bad AI... ultimately, to me, if the AI is correct then it's welcome.


Totally, maybe we’re in the uncanny valley of AI intelligence and people are now over-reacting to small inaccuracies.

I totally agree, but we're going to need to figure out better ways to verify the correctness of what the AI tells us. This will be pretty hard, since we can hardly do that with human-produced stuff either. And further, if the AI is trained on human-produced stuff that contains lots of errors... I don't really know how we're going to do it.

I have been using AI-generated surveys via the playground and have found them quite effective in simulating responses. In fact, they are incredibly similar to my experience asking the same questions IRL. The challenge is that people don’t trust them, and AI still has this negative association. So yes, I mean to say it’s yet another human error.

These things are capable of being wrong and malicious in every way that a human can. To me it doesn’t really seem like news. A human is capable of creating a dataset to support a false conclusion, are we really surprised that an AI model can too? Do we need AI to be 100% perfect before people are allowed to start using it?

It seems easy to create a list of everything bad a human can do and then write a corresponding article about how bad it is that an AI can do it too.


From the comments it sounds like Duolingo is still retaining some contractors to verify AI correctness.

However I’m more concerned with the next 5 years. As AI inevitably gets more integrated in products and services, how will people know when AI is wrong?

My biggest fear is that AI will make us all dumber because people assume the computer is right. I see this already with some end users of the software I write.

I know my programs have bugs, sometimes they are wrong. But the average user believes it to be true because it is on the screen. For some reason that is an authority.

Maybe companies who use AI in their products should be legally required to retain subject matter experts to verify correctness. Maybe AI should be banned from processing queries about health or other sensitive fields?


Will AI ever do any better than to give the average advice? They suffer from a regression to the mean, giving average advice for the average person.

So how will they 'fix' it? Put in a hack to recognize this query and give some boilerplate answer?

I'd be more concerned if the person receiving the skewed advice didn't recognize it as wrong.

Anyway if this gets an AI taken down then they'll all come down. They all suffer from this.


AI tends towards the logical truth, so probably not.

I think you overestimate people’s ability to sniff out bad data on the internet.

Also are you suggesting people fact check an AI by asking it if it is correct? That seems absurd.


I think the problem is that AI is wrong so often that it would be foolish for it to stick to its guns when you're correcting it.

Bing Chat did this at first, and started calling the user a manipulator and an abuser when you corrected it too much (regardless of whether the previous message made any sense or not). I found it really funny, but other people were distressed by it because I suppose they thought they were talking to something sentient.
