We have a common rationalization for treatments with possible small benefits shown by “promising” studies: every little bit counts! Anything is better than nothing! This gambit is usually deployed for serious athletes, the canonical example of highly motivated patients — but of course it’s true for all kinds of patients, anyone for whom the stakes are high.
An amazing amount of physical therapy and pain treatment is justified with this logic: the evidence shows only a small benefit, but that’s okay, because “athletes want every possible advantage.” Right?
Sure… if the intervention is actually effective. (And if the investment is also small. And if the risk of harm is trivial.)
But what if it’s not actually beneficial? Not even a little bit?
The “effect size” shown by a positive clinical trial is the magnitude of the treatment benefit. For instance, if a study shows that celery juice reduced pain by an average of 5 points on a scale of 10 … that would be a ginormous effect size. In sports and pain medicine, we’re lucky to see a 2-point improvement.
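To make "effect size" concrete, here's a toy calculation with invented pain scores (these numbers are made up for illustration, not from any real trial). The raw effect is just the difference in average pain between groups; researchers also often report a standardized version, Cohen's d, which divides that difference by the variability in the data:

```python
# Hypothetical pain scores on a 0–10 scale — invented for illustration only.
treatment = [3, 4, 2, 3, 5, 4, 3, 2]
control   = [6, 7, 5, 6, 8, 7, 6, 5]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    """Pooled standard deviation for two independent groups."""
    def ss(xs):  # sum of squared deviations from the group mean
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs)
    return ((ss(a) + ss(b)) / (len(a) + len(b) - 2)) ** 0.5

# Raw effect: how many points of pain relief, on the 0–10 scale
raw_effect = mean(control) - mean(treatment)

# Standardized effect size (Cohen's d): raw effect relative to variability
cohens_d = raw_effect / pooled_sd(treatment, control)

print(f"Raw effect: {raw_effect:.1f} points")
print(f"Cohen's d:  {cohens_d:.2f}")
```

With these made-up numbers the treatment group averages 3 full points less pain than the control group — the kind of "ginormous" effect size real trials in this field almost never deliver.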
The meaning of small treatment effect sizes in research is routinely misunderstood. A small effect size does not mean that it was “better than nothing.”
What it actually means is “likely just an artifact.” It’s probably not real.
The true meaning of a small effect size
The only reason there’s any positive signal at all is usually bias-powered jiggery-pokery in the experimental design and number-crunching — known as “p-hacking.” If truly objective and talented researchers did the same trial, they would probably find no result whatsoever.
They wouldn’t find a small effect — just no effect at all. Which isn’t so promising.
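The mechanics of p-hacking are easy to simulate. Here's a toy sketch (invented numbers, not from any real trial): the "treatment" does absolutely nothing — both groups are drawn from the same distribution — but if researchers take twenty looks at the data (twenty outcomes, twenty subgroups) and report only the best one, a "small effect" materializes out of pure noise:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def fake_trial(n=30):
    """One trial of a treatment with ZERO true effect: both groups
    are drawn from the same distribution (mean pain 5, SD 2)."""
    treatment = [random.gauss(5, 2) for _ in range(n)]
    control = [random.gauss(5, 2) for _ in range(n)]
    return sum(control) / n - sum(treatment) / n  # observed "benefit"

# An honest analysis: run one pre-specified comparison and report it.
honest = fake_trial()

# A p-hacked analysis: take twenty looks, report only the best one.
hacked = max(fake_trial() for _ in range(20))

print(f"Honest single comparison: {honest:+.2f} points")
print(f"Best of twenty looks:     {hacked:+.2f} points")
```

The honest result bounces around zero, as it should. The cherry-picked result is almost always a positive "small effect" — exactly the kind of result that gets spun as "promising."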
Could there be a “small but real” effect that is actually better than nothing?
Yes: that is of course always possible (although quite implausible in some cases).
But can we have reasonable confidence in that? Should we waste time, money, effort, and hope on a benefit that isn’t just small, but small at best, and more likely simply non-existent? Not hardly. It’s quite a bit more likely that the evidence suggests there is no effect … not a small-but-real one.
And that goes double for things that weren’t plausible to begin with. The chances of “statistical jiggery pokery” go way up in studies of silly things: see “How to prove that your therapy is effective, even when it is not: a guideline”.