Based on what little information is available so far, it sounds like they did not have an acceptable false positive rate. I'm wondering if the secrecy allowed them to hide the underlying statistics, maybe not even intentionally.
This is so stupid and a huge step backward from the consumer protection advances of the 20th century.
You're assuming that they consider the false positives to be disastrous (and that they consider them to be false positives in the first place). That's not necessarily a safe assumption.
I’m pretty sure bad outcomes outnumber good ones. Were the bad outcomes caused by an incompetent provider or by a test with a high false positive rate? I’m pretty sure it’s the former.
It was 11 out of 100 people, using an antibody test that's known to have some false positives. So the error bars are huge. (The fact that this result has metastasized into obvious nonsense is exactly why people should be careful publishing incomplete scientific results.)
The linked article is the source. The analysis in the linked article placed the specificity within a range running from "they could all be false positives" to "they might mostly be legitimate results".
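A toy back-of-the-envelope in case the mechanism isn't obvious. The specificity values below are invented for illustration (they are not the article's figures), but they show how a small change in assumed specificity swings 11 positives out of 100 from "all noise" to "mostly real":

```python
# Illustrative only: specificity values here are made up to show the
# mechanism described above, not taken from the linked article.
n_tested = 100      # hypothetical sample size from the comment above
n_positive = 11     # observed positives

for specificity in (0.89, 0.95, 0.98, 0.995):
    expected_false_pos = n_tested * (1 - specificity)
    # Rough lower bound on true positives, ignoring sampling noise:
    implied_true_pos = max(0.0, n_positive - expected_false_pos)
    print(f"specificity={specificity:.3f}: "
          f"~{expected_false_pos:.1f} expected false positives, "
          f"so roughly {implied_true_pos:.1f} of the 11 could be real")
```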
I wouldn’t put much weight on this. They looked at about 15 stored samples (or 80, depending on how you count) and got one positive. As far as I can tell, that is the entirety of the factual evidence. Contamination or a false positive is a serious concern. And they clearly wanted this result.
Five alerts were generated from positive matches, and all five were false positives. We have no idea how many of the ~500 individuals on the watch list were scanned and missed, and we have no idea how many faces were scanned in total, but it's still a 100 percent failure rate among the alerts.
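To separate the two rates being conflated here (a sketch, not official figures beyond the five false alerts reported above): the precision of the alerts is known and is zero, while the recall depends on numbers nobody has published.

```python
# Only the five false alerts are known; everything on the recall side is not.
alerts = 5
true_matches_in_alerts = 0                      # all five alerts were wrong

precision = true_matches_in_alerts / alerts      # 0.0 -> the "100 percent failure rate"
print(f"precision of the alerts: {precision:.0%}")

# Recall would require unpublished numbers:
# recall = watchlisted_faces_correctly_flagged / watchlisted_faces_scanned
```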
Getting a false positive result because you invalidated your significance test by stopping early? When you release it to the public, people think they're protected, and many more people than X die.
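If it helps to see the optional-stopping effect concretely, here is a quick simulation of my own (not anything from the article): data with no real effect, a p-value checked after every batch, and the experiment "stopped early" at the first p < 0.05. The false positive rate comes out far above the nominal 5%.

```python
# Sketch of why peeking + stopping early inflates the false positive rate.
import random
from statistics import NormalDist

def stops_early_under_null(n_batches=20, batch_size=50, alpha=0.05):
    """Simulate a fair coin (null is true) with a check after every batch."""
    norm = NormalDist()
    successes = trials = 0
    for _ in range(n_batches):
        successes += sum(random.random() < 0.5 for _ in range(batch_size))
        trials += batch_size
        p_hat = successes / trials
        z = (p_hat - 0.5) / ((0.25 / trials) ** 0.5)
        p_value = 2 * (1 - norm.cdf(abs(z)))
        if p_value < alpha:
            return True          # declared "significant" purely by peeking
    return False

runs = 2000
false_positives = sum(stops_early_under_null() for _ in range(runs))
print(f"false positive rate with optional stopping: {false_positives / runs:.1%}")
# Well above the 5% you'd expect from a single pre-planned test.
```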