Passing a 95% Confidence Criterion Doesn’t Make You 95% Sure

An extremely serious error, made even by “experts”, that a few people are trying to explain to the world. If a drug on trial passes the 95% criterion, it means the comparison treatment (the older drug or a placebo) did so much worse that there is only a 5% probability the trial would have come out this way if the tested drug were in fact no better. But that is NOT AT ALL the same thing as saying there is only a 5% chance that the tested drug doesn’t work. On average, its chances of working are much worse than 95%.
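The distinction is just Bayes’ theorem. Writing “passes” for a trial result that clears the significance bar, the trial design caps P(passes | no better) at 5%, while the quantity you actually care about is P(works | passes), which also depends on the prior plausibility P(works). This is a standard statement of the theorem, nothing specific to drug trials:

$$
P(\text{works} \mid \text{passes}) \;=\; \frac{P(\text{passes} \mid \text{works})\,P(\text{works})}{P(\text{passes} \mid \text{works})\,P(\text{works}) + P(\text{passes} \mid \text{no better})\,P(\text{no better})}
$$

The 5% cutoff pins down only the second factor in the denominator; without the prior P(works), it tells you nothing about the left-hand side.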

Here is an obvious example. Someone tells you that if you paint your pennies purple, they will come up heads much more often than tails when you flip them. So you conduct a “trial” by painting ten coins and flipping them, and you get nine heads. If the coins are fair, a result at least that lopsided will happen only about 1.1% of the time, so by the definition of statistically significant results, it qualifies. And in fact, this experiment would indeed be evidence that purple painting produces extra heads. But it sure doesn’t mean that there is a 98.9% chance that such a way-out theory is true. In fact, common sense says the chance that the theory is true is still under one percent. All the experiment did was raise your opinion of its chances compared to your opinion before the trial (which I would hope was below one in a thousand).

Likewise, when a new drug or medical procedure is put on trial, a result that barely sneaks past the 95% criterion does not mean its chance of being better than the older drug (or a placebo, if that is what it was tested against) is anywhere near 95%. The true chance depends on how plausible it was, before you ran the trial, that it works better than its competition. That plausibility will be a lot higher than the purple penny theory’s, to be sure, but it might very well still be below 50%.
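To make the penny arithmetic concrete, here is a minimal sketch of the trial as a Bayesian update. The specific numbers are assumptions for illustration only: a prior of one in a thousand that the theory is true, and, if it is true, purple pennies landing heads 60% of the time.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips of a coin with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 10, 9  # ten purple pennies flipped, nine heads observed

# One-sided p-value: how often fair coins give a result at least this lopsided.
p_value = sum(binom_pmf(j, n, 0.5) for j in range(k, n + 1))

# Bayesian update under the made-up assumptions above:
prior = 1 / 1000   # assumed prior probability that the purple-penny theory is true
p_if_true = 0.6    # assumed heads rate if the theory is true

like_true = binom_pmf(k, n, p_if_true)  # P(data | theory true)
like_fair = binom_pmf(k, n, 0.5)        # P(data | coins fair)

posterior = like_true * prior / (like_true * prior + like_fair * (1 - prior))

print(f"p-value:   {p_value:.4f}")    # ~0.0107 -- clears the 95% criterion
print(f"posterior: {posterior:.4f}")  # ~0.0041 -- still under half a percent
```

With those made-up numbers, nine heads moves the theory from one chance in a thousand to about four in a thousand: a genuine update, just as described above, but nowhere near 98.9%.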