
I think something like a million results were voided because they were deemed unreliable. I don't know if that put anyone at risk, though.



Considering how many people were tested, a bunch of those could just be false positives. They did like 100m tests or something crazy.

I'd imagine the rate of false positives was also quite high.

In that case it's extremely likely that the test result was wrong, no?

Based on what little information is available so far it sounds like they did not have an acceptable false positive rate. I'm wondering if the secrecy allowed them to hide the underlying statistics, maybe not even intentionally.
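To see why the false positive rate matters so much at this scale, here's a quick back-of-the-envelope sketch. Every number below (prevalence, specificity) is an illustrative assumption, not a figure from the thread; only the ~100m test count comes from the comment above.

```python
# Expected false positives at national testing scale.
# All parameters are assumptions for illustration.
total_tests = 100_000_000   # "like 100m tests" per the comment above
prevalence = 0.001          # assumed fraction of tested people truly positive
specificity = 0.999         # assumed: a 0.1% false positive rate

true_negatives_tested = total_tests * (1 - prevalence)
false_positives = true_negatives_tested * (1 - specificity)
print(f"Expected false positives: {false_positives:,.0f}")
```

Even with a seemingly excellent 99.9% specificity, that's on the order of a hundred thousand wrong positive results, which is why hiding the underlying statistics would matter.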

This is so stupid and a huge step backward from the consumer protection advances of the 20th century.


You're assuming that they consider the false positives to be disastrous (and that they consider them to be false positives in the first place). That's not necessarily a safe assumption.

Sounds to me like the data was wrong, which tests would not catch.

Did they fail that chance, or false positive that chance?

It would certainly run the risk of false positives.

This. The false positive rate is unknown and therefore the data could be highly misleading.

I’m pretty sure bad outcomes outnumber good ones. Were there bad outcomes because of an incompetent provider or a test with high false positives? I’m pretty sure it’s the former.

This is good to know! I'm still genuinely curious how many false positives this leads to.

Depending on the test used, there might be >50% false negatives in there.

Can you point to a single instance of that happening where it was due to a false positive?

It was 11 out of 100 people, using an antibody test that's known to have some false positives. So the error bars are huge. (The fact that this result has metastasized into obvious nonsense is exactly why people should be careful publishing incomplete scientific results.)

I wonder how many false positives there are.

The linked article is the source. The analysis in the linked article placed the specificity within a range of "they could all be false positives" to "they might mostly be legitimate results".

I wouldn’t put much weight on this. They looked at about 15 stored samples (actually 80, depending on how you count) and got one positive. As far as I can tell, that forms the entire factual evidence. Contamination or a false positive is a serious concern. Clearly they wanted the result, as well.

There were five alerts generated due to a positive match, and all five of them were false positives. We have no idea how many of the ~500 individuals on the watch list were scanned and missed, and we have no idea how many faces were scanned, but among the alerts that were generated it's still a 100 percent failure rate.
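A five-out-of-five false alert rate is less surprising than it sounds once you work through the base rates. This is a sketch with entirely assumed parameters (crowd size, match rates) showing that even a very specific face matcher produces mostly false alerts when watch-list members are a tiny fraction of the faces scanned.

```python
# Base-rate sketch for watch-list face matching.
# Every parameter below is an assumption for illustration.
faces_scanned = 1_000_000
watchlist_present = 50        # assumed true watch-list members in the crowd
sensitivity = 0.90            # assumed: 90% of true matches are flagged
false_match_rate = 0.0001     # assumed: 1 in 10,000 innocents flagged

true_alerts = watchlist_present * sensitivity
false_alerts = (faces_scanned - watchlist_present) * false_match_rate
precision = true_alerts / (true_alerts + false_alerts)
print(f"Total alerts: {true_alerts + false_alerts:.0f}, "
      f"fraction genuine: {precision:.0%}")
```

Under these made-up numbers, most alerts are still false, so a small batch of five alerts being all-false is well within expectation.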

Getting a false positive result because you invalidated your significance test by deciding to stop early? When you release it to the public, people think they're protected, so many more people than X die.
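The early-stopping problem in that comment can be simulated directly. This is a self-contained illustration (not tied to any particular trial): data is drawn from a true null, but "peeking" at a nominal-5% significance test after every batch and stopping at the first significant result inflates the false positive rate far above 5%.

```python
# Simulation: optional stopping inflates the Type I error rate.
# Data is generated under a true null (mean 0, sd 1), so every
# "significant" stop is a false positive.
import math
import random
import statistics

def two_sided_p(samples):
    # z-test against mean 0 with known sd 1; p = P(|Z| > |z|)
    z = statistics.fmean(samples) * math.sqrt(len(samples))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
trials, alpha = 2000, 0.05
stopped_early = 0
for _ in range(trials):
    data = []
    for _ in range(20):                        # up to 20 peeks...
        data += [random.gauss(0, 1) for _ in range(10)]  # ...10 samples each
        if two_sided_p(data) < alpha:
            stopped_early += 1                 # false positive under the null
            break

print(f"False positive rate with early stopping: {stopped_early / trials:.0%}")
```

With 20 peeks, the realized false positive rate lands well above the nominal 5%, which is exactly how stopping early can manufacture a "significant" result from noise.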
