
Cherry-picking, nit-picking, and bad science dressed as good science 

by Paul Ingraham

There are many kinds of bogus citations, like citing a study that clearly doesn’t actually support the point, or even undermines it. I’ve catalogued more than a dozen distinct species of bogus citation over the years: see 13 Kinds of Bogus Citations.

Some bogus citations are more subtle, though.

For instance, citations are often “cherry-picked” — that is, only the best evidence supporting a point is cited. The citations themselves may not be bogus, but the pattern of citing is, because it suspiciously lacks contrary evidence.

Advanced cherry-pickers also exaggerate the flaws of studies they don’t like, to justify dismissing them. Such nitpicking can seem like credible critical analysis, but flaws can be found in even the best studies. Research is a messy, collaborative enterprise, and perfect studies are as rare as perfect movies. When we don’t like the conclusions, we are much likelier to see research flaws and blow them out of proportion. It works like this…

Flow chart time! I will describe it nicely for you. First cell: new study published. Second cell: does it confirm my beliefs? If yes… it must be a GOOD study. If no… it must be a BAD study, so nitpick and find flaws until the badness is confirmed! Both pathways ultimately lead to the same inevitable conclusion: “I was right all along!”

No one is immune to bias, and evaluating scientific evidence fairly is really tricky. But it gets even worse!

What if citations avoid all of these pitfalls? They can still be bogus!

Genuine and serious research flaws are often invisible. Famously, “most published research findings are false,” and that’s true even when there’s nothing obviously wrong with the study (Ioannidis).
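
Part of the reason is just arithmetic. If only a minority of the hypotheses being tested are actually true, then even respectable statistical power and the standard 5% false-positive threshold will churn out plenty of false “discoveries.” Here’s a rough sketch of that calculation in Python, with made-up but plausible numbers (the specific values are my assumptions, not Ioannidis’s):

```python
# Rough sketch of the Ioannidis-style arithmetic: how often is a
# "statistically significant" finding actually true? The numbers below
# are illustrative assumptions, not measurements from any field.

prior = 0.10   # fraction of tested hypotheses that are really true
power = 0.50   # chance a real effect reaches significance (many trials are underpowered)
alpha = 0.05   # chance a null effect reaches significance anyway

true_positives = prior * power          # real effects that "work out"
false_positives = (1 - prior) * alpha   # null effects that "work out" by chance

ppv = true_positives / (true_positives + false_positives)
print(f"Chance a significant finding is real: {ppv:.0%}")  # ~53% with these numbers
```

With those numbers, a “statistically significant” result is real only about half the time, and that’s before any bias or questionable analysis creeps in. Add a little of either and it gets worse.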

In fact, it’s amazing how many ways “good” studies can still be bad: how easily they can go wrong, or be made to go wrong, by well-intentioned researchers trying to prove pet theories. Cuijpers et al. review a bunch of these ways in “How to prove that your therapy is effective, even when it is not: a guideline”. That paper is a great read for anyone interested in delving into exactly what makes junky science junky.

Musculoskeletal medicine is plagued by lame, underpowered studies with “positive” results that aren’t credible and just muddy the waters. Most of them are someone’s attempt to “prove” that their methods work: clinicians playing at research, using all kinds of tactics (mostly unconsciously) to get the results they want, such as:

  • Sell it! Inflate expectations! Make sure everyone knows it’s the best therapy EVAR.
  • But don’t compare to existing therapies!
  • And reduce expectations of the trial: keep it small and call it a “pilot.”
  • Use a waiting list control group.
  • Analyse only subjects who finish, ignore the dropouts.
  • Measure “success” in a variety of ways, but report only the good news.

And so on. And many of these tactics leave no trace, or none that’s easy to find. So beware of citations to smaller studies with “good news” conclusions… even if there’s nothing obviously wrong with them. There’s a good chance they are still bogus anyway.
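
To see how little it takes, here’s a toy simulation (my own sketch, not from the Cuijpers paper) of just one tactic from that list: run a small trial of a completely useless therapy, measure “success” six different ways, and report whichever outcome looks best.

```python
# Toy simulation of one tactic from the list above: measure "success"
# several ways, then report whichever measure looks best. The therapy
# here has NO real effect; any "positive" result is pure noise.
# Sample size and outcome count are illustrative assumptions.

import random
import statistics

def fake_trial(n_per_group=15, n_outcomes=6):
    """Return True if at least one outcome 'significantly' favours the therapy."""
    for _ in range(n_outcomes):
        treated = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        # crude two-sample test: difference in means vs. its standard error
        se = ((statistics.variance(treated) + statistics.variance(control)) / n_per_group) ** 0.5
        t = (statistics.mean(treated) - statistics.mean(control)) / se
        if t > 2.05:     # roughly p < 0.025 one-sided with ~28 degrees of freedom
            return True  # "it worked!" (on this outcome, anyway)
    return False

random.seed(1)
trials = 10_000
wins = sum(fake_trial() for _ in range(trials))
print(f"'Positive' trials of a useless therapy: {wins / trials:.0%}")  # roughly 10-15%
```

Even though each individual outcome has only about a 2–3% chance of a spurious “win,” more than one trial in ten of this do-nothing therapy comes out “positive.” That’s one tactic, used once, with no bad intent required.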
