
In some cases they were; in other cases they used their own machines. Even when they used the real machines, it's unclear whether they had enough blood, skilled operators, time, and maintenance to generate accurate results.



Didn't they use real lab equipment to conduct real tests? In that case the results would be OK; it just wasn't the tech she promised. It's been quite a while since I read about the details, so I may be misremembering.

It was an obvious farce. If they had real tech they could have simply done a million comparative tests on the military/VA or at free clinics and showed the relative accuracy.

The vast majority of results were not only inaccurate, they couldn't possibly have been accurate given the methods being used to conduct the tests. This is well documented. I suggest John Carreyrou's book, _Bad Blood_, which gets into great detail on this.

How would anyone know whether their methods worked or they were just making results up, then?

There is no way she believed the technology worked. Her employees told her it didn't work. They were still validating the devices by comparing results with working machines at the time the machines were handling customer samples. At best, she might have believed that the technology could work in the future after some technological breakthrough.

"Artificial tests produce artificial results".

I get it. But I admit I'm also on the side of healthy skepticism when it comes to anything around her. Have not followed the trial closely (which is on me), but read a lot of the articles and books and such.

To me it's a direct correlation.

"If she knew there might be problems in the lab" - the only problem in the lab was the machines themselves! The blood work is known science - the issue was getting to a point where accurate blood work could be done. Which involved the machines, the machines that they separately demonstrated she knew did not and could not do 'x, y and z'. And to be clear, in your example, x, y and z are things like "extract sufficient blood for an accurate test" and "extract blood in a way that allows for an accurate test (i.e., without over-dilution or contamination)".

To also be clear, this could be entirely a failing of the prosecution, but there seems to be this sense of "there's this missing -glue- that we couldn't prove". The lab IS the machine, and if she knows the machine can't do x, y, and z, then it follows that the lab can't do x, y, and z. There are only two outcomes from the machine, and the lab: an accurate result or an inaccurate one (or no result at all). And if you say "my machine cannot do x, y, and z", then it cannot provide accurate results (at least not by design; stopped clocks and all that). Then you get to the conclusion that her argument would be "an inaccurate result doesn't inherently harm a patient", with convoluted logical leaps like "because the result would lead to further testing and interventions". But concrete evidence was presented at trial that unnecessary, and harmful, interventions were ordered and performed on the basis of lab results from those machines.


That doesn't necessarily hold. I think you should keep some confidence that they were not false results, but per this story, they very well could have been.

Something I was wondering: if faking results is so common, then surely these things they're researching must never get used in any application, right? If they were, it would quickly be found that they don't actually work...

Their initial results may be correct, but it's unlikely they've had the chance to run enough of them to pin down a failure rate with any precision. They also claim that other available COVID-19 tests haven't undergone enough verification to establish a solid error rate, and if that's so, then "on par or better" on the evaluations that have been run is fair as well.
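As an aside, here's a quick sketch of why a small number of tests can't pin down a failure rate with any precision (the numbers are made up for illustration, not from any actual test program): a Wilson score confidence interval on a handful of observed failures stays very wide.

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a failure rate,
    given `failures` observed failures out of `n` tests."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# With only 50 tests and 1 observed failure, the observed rate is 2%,
# but the plausible range spans roughly half a percent to over ten percent:
lo, hi = wilson_interval(1, 50)
print(f"{lo:.3f} .. {hi:.3f}")
```

So even a clean-looking early track record is compatible with a failure rate several times higher than observed; only a much larger sample narrows the interval.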

"there were experts in the field who were shouting out about the impossibility of Holmes' claims right from the beginning"

Were there? I'm not doubting your claim, just wondering if anyone really spoke up.

Reading about the challenges they would have had to overcome to back up their claims, there were some serious fundamental problems, and those were challenges the industry had been wrangling with for ages when it came to that type of automated testing... I always wondered how many of their competitors were thinking:

"No way they overcame all that all at once.... straight up not possible."


From the article (and the company themselves):

"The company said it approaches the development of its tools and reports with scientific rigour, but admits its results are 'statistical estimates.'"

It's very likely that there's fine print in their procedure saying everything is a guesstimate: while their algorithms will process every sample the same way, there are just too many unaccounted-for variations, even in such controlled environments.

What would surprise me, for example, is if someone sent their DNA to the same company for analysis twice and got varying results. That would mean the process itself is not well established.


Are you implying the results are doctored in some way?

Not necessarily. The respondents may have just applied a mechanical procedure without doing any kind of sanity check.

The only way to make the results believable is if they pre-registered the study based on a proposed mechanism of action, and then validated the results. Otherwise we can never know how many different attempts at crafting signal from the noise were attempted.

It's absolutely dubious. A few of my colleagues use it. They still get outperformed; it actually makes their results worse in some cases...

They don't try to reproduce all the results in a paper, only a few "important results", more like a spot check. And it is supposed to be much cheaper since the methods have already been figured out...

Not really; similar techniques were developed as early as the '70s, and there is sufficient doubt that the student project in the news did not go as well as claimed, nor was there any report of reproducibility in a controlled laboratory setting.

This seems wrong. If they're using code to arrive at their findings, it should be high-quality, no less so than their lab technique. One can lead to bogus results just as easily as the other.
