I read that the false positive rate is extremely low (below 1%).
False positives are not a huge problem when you are only testing people with symptoms. False positives would only become a problem when you are doing a large number of tests on people who have no symptoms (e.g. 1% of 1 million people when close to none have the virus).
I think your numbers are for the detection rate: the false negative rate is high (up to 40% for early tests).
It's important to remember that the positivity rate is a measure of how much testing you're doing, not a measure of how much virus is around. Unless you're reading articles about the politics of rapid tests or the like, you should mostly just ignore the pos rate, it doesn't tell you anything useful otherwise.
A 50% false positive rate means that, since most people don't have the virus, about 50% of all the tests will come back positive. You might as well just flip a coin instead.
If you test 1000 people with a test that has a 1% false positive rate (let's ignore false negatives), but only one person in the 1000 is really infected, you get about 11 positive test results, of which 10 are wrong.
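The arithmetic above can be sketched in a few lines of Python (the 1000 people, 1% false positive rate, and single true case are the comment's illustrative numbers, not data from any real test):

```python
# 1000 people tested, 1 truly infected, 1% false positive rate,
# and (for simplicity) no false negatives.
tested = 1000
infected = 1          # true cases in the tested group
fp_rate = 0.01        # false positive rate

true_positives = infected                          # no false negatives assumed
false_positives = (tested - infected) * fp_rate    # 999 * 0.01 = 9.99
total_positives = true_positives + false_positives

print(round(total_positives))             # -> 11 positive results
print(false_positives / total_positives)  # -> ~0.91, i.e. ~10 of 11 are wrong
```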
Tests like this have significant false positive rates. The test may be a useful tool when we already reasonably expect someone to test positive (showing symptoms, plus contact with a known case or travel from an affected area).
However, if you have no reason to believe you are infected, a false positive might easily be more likely than a true positive. And in fact, even a very modest false positive rate would still make it orders of magnitude more likely than a true positive.
Consider that even if you assume the worst about the state of the virus in the US, it's probably a few thousand people. If you randomly tested every person in the US, and the test had a false positive rate of only 1%, you would have thousands of times more positive results than real cases. Someone else in this thread mentioned an existing test that might have a 40% false positive rate.
Okay - I'm going to reveal my statistics ignorance here - but if they did get a false positive, how does 0.0% fall within the range of possibilities? (Or is that allowing that it might be 0.0499...% or lower?)
At the latest low point in the epidemic, some areas were seeing test positivity rates of under 0.5%. If false positives were a large portion of positives, this would seem to be impossible.
I don't know, but the problem is worse at low infection rates. Suppose you have 3% false positives and 3% false negatives, and 0.1% of people are infected.
The false negatives will be 3% of 0.1%. The false positives will be 3% of 99.9%. You exaggerate your infection rate by about 30X even though the test is equally inaccurate in both directions.
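As a quick check of the parent's numbers (the 3% error rates and 0.1% prevalence are the comment's illustrative figures):

```python
# 0.1% true prevalence, 3% false positive rate, 3% false negative rate.
prevalence = 0.001
fp_rate = 0.03
fn_rate = 0.03

# Fraction of all tests that come back positive:
# true positives from the infected, plus false positives from everyone else.
apparent = prevalence * (1 - fn_rate) + (1 - prevalence) * fp_rate

print(apparent)               # -> ~0.031, i.e. ~3.1% of tests are positive
print(apparent / prevalence)  # -> ~31, roughly a 30x overestimate
```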
> unless 100% of the population is administered a test with no risk of false negative or positives.
Actually, the impact of false positives or false negatives depends on prevalence; testing 100% of the population is not always required. If you have a test that is wrong in 5 of 100 cases (i.e. 95% accurate), and nobody tested is actually infected, you could incorrectly conclude that 5% of the population is infected even when nobody is, meaning such a conclusion would be completely false.
But if the population is actually already e.g. 50% infected, the same test can "lie" by only about 5%, giving you something like 47% or 53%, but still being "mostly true" (from an engineering point of view).
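A small sketch of that prevalence dependence, assuming symmetric 5% false positive and false negative rates (the comment's illustrative error rate; note that with exactly symmetric errors the two effects roughly cancel at 50% prevalence, which only strengthens the "mostly true" point):

```python
def apparent_rate(prevalence, error=0.05):
    """Fraction testing positive, given symmetric false positive
    and false negative rates of `error`."""
    return prevalence * (1 - error) + (1 - prevalence) * error

print(apparent_rate(0.0))  # -> 0.05: looks like 5% infected when nobody is
print(apparent_rate(0.5))  # -> 0.5: at 50% prevalence the errors cancel out
```

So the same test is wildly misleading near 0% prevalence and nearly exact near 50%.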
So it is important to be skeptical of reported positive rates as long as they are close to the test's false positive rate, which was the case for a lot of the antibody tests done so far.
Also "false positives" and "false negatives" can lead to wrong handling of the cases, but that's another topic.
What’s more likely, the virus actually has a perfect 1.00 Rt factor, or there’s a systemic error introduced by humans?