Every little bit counts! Unless it’s not ACTUALLY a little bit…

by Paul Ingraham

We have a common rationalization for treatments with possible small benefits shown by “promising” studies: every little bit counts! Anything is better than nothing! This gambit is usually deployed for serious athletes, the canonical example of highly motivated patients — but of course it’s true for all kinds of patients, anyone for whom the stakes are high.

An amazing amount of physical therapy and pain treatment is justified with this logic: the evidence shows only a small benefit, but that’s okay, because “athletes want every possible advantage.” Right?

Sure… if the intervention is actually effective. (And if the investment is also small. And if the risk of harm is trivial.)

But what if it’s not actually beneficial? Not even a little bit?

Effect sizes

The “effect size” shown by a positive clinical trial is the magnitude of the treatment benefit. For instance, if a study shows that celery juice reduced pain by an average of 5 points on a scale of 10 … that would be a ginormous effect size. In sports and pain medicine, we’re lucky to see a 2-point improvement.
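For readers who like to see the arithmetic, here is a minimal sketch of how an effect size is computed, using made-up pain scores rather than data from any real trial; it shows both the raw difference in means and the standardized version (Cohen's d).

```python
import math
import statistics

# Hypothetical 0-10 pain ratings at follow-up (made-up numbers, not real trial data).
treatment = [2, 5, 1, 6, 3, 4, 2, 7, 3, 5]
placebo   = [4, 7, 3, 8, 5, 6, 4, 9, 5, 7]

# Raw effect size: the difference in average pain between the groups.
raw_effect = statistics.mean(placebo) - statistics.mean(treatment)  # 2.0 points

# Standardized effect size (Cohen's d): the raw difference divided by the pooled
# standard deviation, so effects measured on different scales can be compared.
n1, n2 = len(treatment), len(placebo)
s1, s2 = statistics.stdev(treatment), statistics.stdev(placebo)
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = raw_effect / pooled_sd

print(f"Raw effect: {raw_effect:.1f} points on a 10-point scale")
print(f"Cohen's d:  {cohens_d:.2f}")
```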

The meaning of small treatment effect sizes in research is routinely misunderstood. A small effect size does not mean that it was “better than nothing.”

What it actually means is “likely just an artifact.” It’s probably not real.

The true meaning of a small effect size

Any positive signal at all is usually the product of bias-powered jiggery pokery in the experimental design and number-crunching, better known as “p-hacking.” If truly objective and talented researchers ran the same trial, they would probably find no result whatsoever.

They wouldn’t find a small effect. Just no effect at all. Which isn’t so promising.
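To make that concrete, here’s a toy simulation (my own illustration, not from any cited study): if a trial of a completely useless treatment measures enough different outcomes, chance alone will usually deliver at least one “statistically significant” result to report.

```python
import random
import statistics
from math import sqrt

random.seed(1)

def null_outcome(n=30):
    """One outcome measure from a trial of a treatment with NO real effect:
    both groups are drawn from the same distribution."""
    treated = [random.gauss(5, 2) for _ in range(n)]
    control = [random.gauss(5, 2) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
    return abs(diff / se) > 1.96  # crude two-sided test, roughly p < 0.05

# "P-hacking" in miniature: measure 20 outcomes per trial, report any that "work".
outcomes_per_trial = 20
trials = 1000
lucky_trials = sum(
    any(null_outcome() for _ in range(outcomes_per_trial)) for _ in range(trials)
)
print(f"Trials with at least one 'significant' finding: {lucky_trials / trials:.0%}")
# Roughly 1 - 0.95**20 ≈ 64%, even though the true effect is exactly zero.
```

The only point of the sketch is that many comparisons plus selective reporting can reliably manufacture “small but significant” effects out of nothing.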

Could there be a “small but real” effect that is actually better than nothing?

Yes: that is of course always possible (although quite implausible in some cases).

But can we have reasonable confidence in that? Should we waste time, money, effort, and hope on a benefit that isn’t just small, but small at best, and more likely simply non-existent? Not hardly. It’s quite a bit more likely that the evidence suggests there is no effect … not a small-but-real one.

And that goes double for things that weren’t plausible to begin with. The chances of “statistical jiggery pokery” go way up in studies of silly things: see “How to prove that your therapy is effective, even when it is not: a guideline”.
