
Cherry-picking, nit-picking, and bad science dressed as good science 

Paul Ingraham

There are many kinds of bogus citations, like citing a study that doesn't actually support the point, or even undermines it. I’ve catalogued more than a dozen distinct species of bogus citation over the years: see Bogus Citations.

Some bogus citations are more subtle, though.

For instance, citations are often “cherry-picked” — that is, only the best evidence supporting a point is cited. The citations themselves may not be bogus, but the pattern of citing is, because it’s suspiciously lacking contrary evidence.

Advanced cherry-pickers also exaggerate the flaws of studies they don’t like, to justify dismissing them. Such nit-picking can easily pass for credible critical analysis, because it’s easy to find problems with even the best studies. Research is a messy, collaborative enterprise, and perfect studies are as rare as perfect movies. When we don’t like the conclusions, we are much likelier to see research flaws and blow them out of proportion. It works like this…

[Flow chart: a new study is published. Does it confirm my beliefs? If yes: “must be a good study.” If no: “must be a bad study,” so nitpick, find flaws, and confirm it’s a bad study. Both pathways lead to the same conclusion: “I was right all along!”]

No one is immune to bias, and evaluating scientific evidence fairly is really tricky. But it gets even worse!

What if citations avoid all of these pitfalls? They can still be bogus!

Genuine and serious research flaws are often invisible. Famously, “most published research findings are false,” and that’s true even when there’s nothing obviously wrong with the study (Ioannidis).
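
That famous claim isn’t just rhetoric; it falls out of simple arithmetic. If only a modest fraction of tested hypotheses are true, and studies are underpowered, then a statistically “significant” result is more likely to be a false alarm than a discovery. Here’s a minimal sketch of that Bayesian arithmetic, with illustrative numbers of my own choosing (not Ioannidis’ exact scenarios):

```python
# The arithmetic behind "most published research findings are false":
# Bayes' rule applied to significance testing. All numbers below are
# illustrative assumptions, not figures from any particular paper.

def ppv(prior, power, alpha=0.05):
    """Probability that a statistically 'significant' finding is true.

    prior: fraction of tested hypotheses that are actually true
    power: probability a real effect is detected (1 - beta)
    alpha: false-positive rate of the significance test
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A careful field: plausible hypotheses, well-powered trials.
print(ppv(prior=0.50, power=0.80))  # ~0.94 -- significant usually means true

# Closer to exploratory musculoskeletal research: long-shot ideas
# tested with small, underpowered studies.
print(ppv(prior=0.10, power=0.20))  # ~0.31 -- most "findings" are false
```

With coin-flip odds and 80% power, about 94% of significant results are real. With long-shot hypotheses and 20% power, that drops to about 31% — and at that point, most published “findings” are indeed false, even with nothing visibly wrong in any single study.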

In fact, it’s amazing how many ways “good” studies can still be bad, how easily they can go wrong, or be made to go wrong, by well-intentioned researchers trying to prove pet theories. Cuijpers et al. review a bunch of these ways in “How to prove that your therapy is effective, even when it is not: a guideline.” That paper is a great read for anyone interested in delving into exactly what makes junky science junky.

Musculoskeletal medicine is plagued by lame, underpowered studies with “positive” results that aren’t credible and just muddy the waters. Most of these were someone’s attempt to “prove” that their methods work: clinicians playing at research, using all kinds of tactics (mostly unconsciously) to get the results they wanted, such as:

- comparing the therapy to a do-nothing waiting list instead of a credible placebo
- keeping samples small, so that flukes can pass for findings
- skipping blinding, so hopes and expectations can nudge the measurements
- measuring many outcomes, and reporting only the ones that cooperated

And so on. Many of these tactics leave no trace, or none that’s easy to find. So beware of citations to smaller studies with “good news” conclusions… even if there’s nothing obviously wrong with them. There’s a good chance they are still bogus anyway.
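
To make the “no trace” point concrete, here’s a toy simulation of just one such tactic: measuring several outcomes and reporting whichever happens to cross the significance line. The therapy being tested does nothing at all (so, under the null hypothesis, each outcome’s p-value is uniform between 0 and 1), and the five-outcomes setup is my own illustrative assumption, not taken from any real trial:

```python
import random

def sham_trial(outcomes=5, alpha=0.05):
    """One small trial of a therapy with zero real effect.

    Under the null hypothesis, each outcome's p-value is uniform on
    (0, 1). The 'finding' is whichever outcome happens to cross the
    significance line; the published paper reports only that one.
    """
    p_values = [random.random() for _ in range(outcomes)]
    return min(p_values) < alpha

trials = 100_000
positives = sum(sham_trial() for _ in range(trials))
print(positives / trials)  # ~0.23: roughly 1 in 4 useless therapies "work"
```

The arithmetic checks out analytically too: 1 − 0.95⁵ ≈ 0.23. Five chances at significance, and a do-nothing therapy still comes out “positive” about one time in four. And since the losing outcomes never make it into the paper, the trick leaves nothing for a skeptical reader to find.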
