
The problem could be fixed by asking doctors to enter their own diagnosis into the machine before the machine reveals what it thinks. Then a simple Bayesian calculation could be performed based on the historical performance of that algorithm, of doctors in general, and of that specific doctor, leading to a final number that would be far more accurate. All of the doctor's thinking would happen before the device could anchor it and trigger cognitive biases.
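To make that concrete, here's a rough sketch of what the combination step might look like. All numbers are made up, and it naively assumes the doctor's and the algorithm's errors are independent:

```python
def posterior(prior, calls):
    """Combine independent positive 'calls', each a (sensitivity, specificity)
    pair, into a posterior probability of disease."""
    odds = prior / (1 - prior)
    for sensitivity, specificity in calls:
        odds *= sensitivity / (1 - specificity)  # Bayes factor of a positive call
    return odds / (1 + odds)

prior = 0.02                  # base rate of the condition in this population
doctor = (0.80, 0.90)         # this doctor's historical sensitivity/specificity
algorithm = (0.95, 0.97)      # the algorithm's historical performance

print(posterior(prior, [doctor]))             # doctor alone says "positive": ~0.14
print(posterior(prior, [doctor, algorithm]))  # both say "positive": ~0.84
```

The point isn't the exact numbers, it's that the doctor's call gets folded in as evidence rather than being overwritten by the machine's output.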



Give the doctor an AI tool which is fast and 99.999% accurate. Since they have automation now, give them a massive workload, so they can’t reasonably check everything. Now the machine does the work and the doctor is just the fall-guy if it messes up.

Maybe, but then the problem isn't an issue with AI/ML, it's that humans just suck at math.

We're terrible at Bayesian logic. Especially when it comes to medical tests (and doctors are very guilty of this too), we ignore priors and treat what should only be a Bayes factor as the final answer.
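A worked example of the base-rate neglect I mean (hypothetical numbers): a "99% accurate" test for a rare condition still leaves a positive result mostly wrong.

```python
prevalence  = 0.001   # 1 in 1000 people actually have the condition (the prior)
sensitivity = 0.99    # P(positive test | disease)
specificity = 0.99    # P(negative test | no disease)

true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

p_disease_given_positive = true_pos / (true_pos + false_pos)
print(p_disease_given_positive)   # ~0.09, nowhere near the 0.99 people intuitively assume
```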


It depends on how doctors view IBM's magic black box. If they grow to trust and depend on it, that could become a real problem.

People have a tendency to defer, and a particular bias towards deferring to machines, which seem to behave as reliably as a calculator doing basic arithmetic.

A true crisis can arise if doctors can shift liability by claiming to have just followed Watson's results.


One method of fixing this would be to have the neural network make 2 predictions. The first would be to predict what decision the doctor would make. The second would be to predict what decision is actually likely to lead to the best outcome.

In cases where it's very likely the doctor would make a different decision, it should flag it for human review.
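Something like the following sketch; the models and threshold are hypothetical placeholders, not any particular system:

```python
def flag_for_review(case, doctor_model, outcome_model, threshold=0.3):
    """Flag a case when the predicted doctor decision and the predicted
    best-outcome decision diverge enough to warrant a human look."""
    p_doctor = doctor_model.predict(case)   # P(the doctor would choose treatment A)
    p_best = outcome_model.predict(case)    # P(treatment A leads to the best outcome)
    return abs(p_doctor - p_best) > threshold
```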


It would be even worse psychologically. If some machine comes along and makes better diagnoses, there will be a multitude of cases where the doctor violently disagrees with the machine, is certain (s)he knows what's right for the patient, and is still wrong in most, but not all, of those cases. Listening to the doctor would then, on average, mean harming patients. So much potential for conflict.

You're right. Even if AI can get rid of some of people's personal biases, the application seems untenable in the medical profession, especially because these models have no ability to intuit newly emerging threats.

Even human doctors struggle with proper mental-health diagnosis; I seriously doubt an AI is going to do any better.

Not to mention that every false positive would lead to the compulsory mental treatment of completely healthy people.


In my experience doctors are terrible at this, although it's supposed to be basically their entire job. I really think they should get more training in it, or some type of Bayes-esque tool to make what they're thinking explicit.

Doing this early also helps people get used to the idea of AI replacing/complementing doctors in the future. Objectively, it would be better to be diagnosed by a machine instead of a human doctor, if the machine is better on average at giving the right diagnosis. In practice, there is a large legal and emotional gap to close before this gains acceptance.

The gains could be incredible though: not only would we get better diagnoses, they could also be made faster, cheaper, and remotely.


Doctors are working with imperfect information. Nothing can diagnose perfectly all the time, not even strong AI.

The process most doctors follow is: 'gather up the available data/symptoms/test results from this patient, then compare them against medical research to see which diagnoses fit and which is most likely'.

Since the process is so mechanical and so stats-driven, I think statistical computer models or even AI would do a much better job of it than your average doctor.
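As a toy illustration of that mechanical "compare symptoms against known conditions" step, a naive-Bayes-style ranking might look like this; all conditions, symptom likelihoods, and base rates below are made up:

```python
patient_symptoms = {"fever", "cough"}

conditions = {
    "flu":         {"prior": 0.05,  "fever": 0.90, "cough": 0.80, "rash": 0.05},
    "common cold": {"prior": 0.20,  "fever": 0.30, "cough": 0.70, "rash": 0.02},
    "measles":     {"prior": 0.001, "fever": 0.95, "cough": 0.50, "rash": 0.90},
}

def score(cond):
    # Naive-Bayes style: prior times the likelihood of each observed symptom.
    p = cond["prior"]
    for s in patient_symptoms:
        p *= cond.get(s, 0.01)
    return p

for name, cond in sorted(conditions.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(name, round(score(cond), 5))
```

A real system would need far better likelihoods and far more conditions, but the shape of the computation is that simple.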

But because the medical field is slow to adopt new methods, we don't see any of that yet, and the end result is that far more people get false diagnoses than otherwise would. We can accept a human making a poor judgement sometimes, but we do not accept a machine doing the same, even if it's right more of the time.


The author touches on this in the very article linked. His point is that the low-accuracy answers provided by ChatGPT risk reinforcing the doctor's biases and discouraging thinking outside the box. In other words, it risks the doctor blindly trusting the AI too much.

That is one of the primary targets for IBM's Watson, and it seems ready[1][2]. More and more, my subjective risk calculation favors AI for diagnosis. I believe this one aspect accounts for most of the overall time/effort spent dealing with patients.

Applying AI at the start of the process makes a lot of sense: reduce or eliminate errors up front and allow doctors' time to be better used.

[1] http://www.nydailynews.com/news/world/ibm-watson-proper-diag...

[2] http://www.businessinsider.com/ibms-watson-may-soon-be-the-b...


That's an interesting thought.

Currently, if a diagnostic test comes back suggesting something serious, say cancer, and the doctor does not pursue it, the doctor is liable if it does turn out to be cancer.

So if a machine disagrees with a doctor, I would assume the doctor will grudgingly have to investigate further until there is enough evidence to rule out that diagnosis.

#headache

What I can see happening is that patients will go to this machine for a second opinion. And if that opinion contradicts the primary physician's, an entire can of (legal) worms will be opened.

--

Addendum:

To elaborate further, there is sometimes what's called the benefit of history.

Say a patient visits 10 doctors. The 10th doctor has an unfair advantage over the first 9 simply because he/she will have prior knowledge of which diagnoses and treatments were incorrect.

Similarly, in an AI vs. human doctor situation, incorporating that additional information into the AI would require a considerable amount of data to train on in order to recognize prior history, failed treatments, and the like.

Image-specific diagnoses (e.g. recognizing melanoma or retinopathy) lend themselves to AI very nicely. Diagnoses that involve a significant amount of, shall we say, "human factors" do so much less.


I think the person was suggesting using the results of the AI to inform the doctors, not to replace them, which is something I would like as well.

Still, it will take only 1 doctor (or committee) to verify the machine. Then it can be replicated, and replace every other doctor in existence for diagnosis. This will be a game-changer.

Reading this article made me think about an episode of Scrubs I was watching the other day, where Dr. Cox made a decision that led to the deaths of 3 of his patients, and afterwards he was afraid to make decisions for the rest of the episode. I thought to myself that it must be difficult to make decisions that could affect people's lives. Would having a machine help make those decisions easier and more accurate?

I mean, sure, you have doctors with 20 years of experience who still get the diagnosis wrong, even if it's close; but machines fed large amounts of data still come up short too. I think saying machines will replace doctors is the wrong approach. In the article, one of the doctors interviewed said, "If it helps me make decisions with greater accuracy, I'd welcome it." That's why we need more tools that enable doctors to make more accurate decisions rather than going on an experienced hunch.

I think it's great this subject is being explored; it will help more people, and help doctors do their jobs even better.


I've seen papers before where machine learning was able to compensate for both inaccuracy of measurement and outright falsehoods being reported, and overall was far more reliable than humans.

Full disclosure: I just read a paper saying that only 10% of doctors are capable of Bayesian reasoning, so I'm in a mood to pick on doctors today. I'll stop now.


Exactly. What good does it do if the AI over-diagnoses something like (to take a relatively benign, common example) ADHD at just the same rate as human doctors? Would a nearly 75% miss rate really be acceptable? It would be a bit like 'Psycho-Pass', but without any underlying competence.
