
The first large peer-reviewed clinical trial published in NEJM in Dec 2020 had around 43,000 subjects and in that population there was about a 95% reduction in cases vs placebo. Over time, it is clearer that the effectiveness with respect to the all-cases end-point is less. But that appears to be less a function of lying and more a function of the incremental way in which biomedical science asymptotically approaches the truth through iterative study and better understanding of confounding variables etc.
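As a back-of-the-envelope check, the headline efficacy figure follows directly from the split of cases between the two arms. A minimal sketch in Python, using hypothetical case counts (not the actual trial's split) and assuming equally sized arms:

```python
def vaccine_efficacy(cases_vaccine, cases_placebo):
    # Efficacy = 1 - relative risk; valid only if both arms are the same size
    # and had the same follow-up time (a simplifying assumption here).
    return 1 - cases_vaccine / cases_placebo

# Hypothetical split: 5 cases in the vaccine arm vs 100 in the placebo arm.
print(vaccine_efficacy(5, 100))  # 0.95, i.e. a 95% reduction
```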



It's a highly formalized clinical trial, it would be nonsensical to misrepresent or exaggerate the results. The preliminary analysis is based on 94 cases; not a huge number, but totally reasonable to draw conclusions from.

People could once say trials are exceptions that don't generalize. Now that it has been used successfully at scale, for long enough to rule out short-term effects, those excuses are discredited.

Actually, it's almost impossible to rig the statistics. And every drug on the market has shown efficacy compared to placebo in large tests called phase 3 clinical trials.

You can’t show that it has zero effect without an indefinite number of patients enrolled in the study. It hasn’t shown any dramatic improvement in the studies that exist. The rest is a tradeoff between enrolling even more patients to prove small improvements (or not), or moving on and spending resources on finding things that do have a major impact.
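That tradeoff can be made concrete with a standard sample-size calculation: the number of patients needed grows roughly with the inverse square of the effect you want to detect. A rough sketch using the normal approximation for comparing two proportions (the event rates below are hypothetical, not from any particular trial):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.8):
    # Two-sample comparison of proportions, normal approximation.
    # This is a back-of-the-envelope formula, not a full trial design.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treat) / 2
    delta = abs(p_control - p_treat)
    return ceil(2 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2)

# A 10-percentage-point improvement needs a few hundred patients per arm;
# a 1-point improvement needs tens of thousands.
print(n_per_arm(0.30, 0.20))
print(n_per_arm(0.30, 0.29))
```

This is why "prove a small improvement" and "prove zero effect" are so expensive: halving the detectable effect size roughly quadruples the required enrollment.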

By all measures: population size, number of studies, number of meta-studies, number of phase 1/2 studies, number of doctors, number of skeptics, number of papers.

You can quantify, perhaps by simulation, the amount that it's less solid.

What's balancing against that is continuing the trial means giving patients a placebo when you have evidence of a better treatment.

It's a really complex issue both scientifically and ethically.


Doesn't giving the drug to everyone in the trial at least add some statistical information (i.e. what is the likelihood of it working for everyone vs. it being a spurious event)?

tl;dr it definitely helps, we're not sure how much, a proper large-scale trial is needed.

And those were later validated by evidence from clinical trials. Until clinical trials involving significant numbers of patients are available, there is not evidence according to the standards of medical science. That's the contradiction.

What blows my mind about these clinical trials is that the truth is always going to come out if the drug goes to market. And for big pharma, that's the goal--get the drug to market (and the largest market possible).

You can hide bad results when n is small, e.g. by just re-running the trial with a different or smaller n. But when n gets large as the drug comes to market, there is nowhere to hide: the negative effects become obvious and compelling as the sample size grows.

How do they think they can hide this?


The lying seems like the problem there. Present the patient with accurate records of previous trials and outcomes, let them make an informed decision (with help from their doctor, if they're not scientifically literate) and I don't see a problem.

I thought the trials only confirmed a reduction in symptomatic incidence?

A timeframe of two years is simply not enough to be entirely sure, even with the large sample size. Side effects are unlikely, but the demand for extensive proof isn't necessarily a bad thing.

The way to convince the disbelievers is with large randomized controlled trials.

The way I read this, the first two trials did not provide enough evidence on their own that the drug works, but they did provide some. Taken together, all three experiments provide pretty massive evidence that the drug actually works, with a combined p-value way below 0.05.

Of course, this all comes crashing down if the first two experiments happen to provide contrary evidence (that is, evidence the drug does not work). This would cancel out the results of the final trial somewhat, and not taking this into account is clearly cheating by publication bias.
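One standard way to pool evidence across independent trials is Fisher's method, which combines the individual p-values into a single test statistic. A minimal sketch, using made-up p-values for the three experiments (two weakly suggestive, one strong):

```python
from math import exp, log, factorial

def fisher_combined_p(pvalues):
    # Fisher's method: under H0, -2 * sum(ln p_i) follows a chi-square
    # distribution with 2k degrees of freedom. Because 2k is even, the
    # chi-square survival function has a closed form (no scipy needed).
    x = -2 * sum(log(p) for p in pvalues)
    k = len(pvalues)
    return exp(-x / 2) * sum((x / 2) ** i / factorial(i) for i in range(k))

# Hypothetical p-values: two trials at ~0.1 plus one at 0.001 combine
# to a result far below 0.05.
print(fisher_combined_p([0.10, 0.08, 0.001]))
```

Note the symmetry this implies: two trials with p-values near 1 (evidence *against* an effect) would drag the combined result back up, which is exactly why ignoring unfavorable trials amounts to publication bias.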


That would be a strange rumour to hear.

What I'm saying is, if some rando on HN can see that the trial size is too small to be statistically significant, how did it pass peer review?


At the risk of starting a flame war: we recently had several well-publicised clinical trials that reported 95% efficacy for a certain modality. Yet, in reality, efficacy as defined in the trial turned out to be closer to 0%.

Instead of investigating what in the design and execution of the trial led to such a discrepancy, the problem was handled by denying there was a problem, changing the goalposts, reporting ad-hoc hypotheses as facts, silencing all critics, and forcing the public to take the modality anyway or lose jobs, school, and freedom of movement.


nobody is saying that more trials are unnecessary. my point is to emphasize that the limited data so far is compelling, and that it goes beyond a 'trial of 1'.

I disagree that ending trials early is solid science, or at least not _as_ solid as running them to completion. It makes it hard to know whether results are due to a treatment effect or just random fluctuations over time. Essentially, it allows for cherry-picking of significant results in a way that bypasses family-wise error correction.
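The inflation from repeated interim looks is easy to demonstrate by simulation: generate data with a truly null effect, test at every "peek", and stop at the first nominally significant result. A sketch (the peek schedule, per-peek sample size, and number of simulated trials are all made up for illustration):

```python
import random
from statistics import NormalDist

def false_positive_rate(peeks, n_per_peek=50, trials=2000, alpha=0.05, seed=0):
    # Simulate trials with NO real treatment effect, checking significance
    # after each interim look and stopping at the first "hit".
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(trials):
        total, n = 0.0, 0
        for _ in range(peeks):
            for _ in range(n_per_peek):
                total += rng.gauss(0, 1)  # null effect: mean is exactly zero
                n += 1
            z = total / n ** 0.5  # z-statistic for the running mean
            if abs(z) > z_crit:
                hits += 1  # false positive: we "found" a nonexistent effect
                break
    return hits / trials

print(false_positive_rate(1))   # ~0.05, the nominal rate
print(false_positive_rate(10))  # noticeably higher than 0.05
```

A single look holds the nominal 5% false-positive rate; ten uncorrected looks push it well above that, which is why legitimate early-stopping rules (e.g. group-sequential designs) spend their alpha across interim analyses rather than testing at 0.05 each time.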
