
Studying the Pain Studies

Tips and musings about how to understand (and write about) the extremely flawed science of pain and musculoskeletal medicine

Paul Ingraham • 20m read

I am occasionally asked for advice on “how to do research” (this is usually how it’s framed) because PainScience.com has a reputation for diligence in this department — it’s one of the major distinguishing features of the site, along with an odd sense of humour and a peculiar obsession with salamanders. I read scientific studies and “translate” them for readers. I footnote my articles quite richly, where it counts, and I have a simply ginormous annotated bibliography.

I don’t do real research, of course. That’s for scientists. I hang out with scientists. I have beers with scientists. What I do is secondary research — not primary research in a lab, clinic, or the field. Secondary research is research about primary research.

Despite its second-class-citizen status, secondary research is an art and science in its own right, the core competency of the science journalist. There is not remotely a “right” way to do it — it’s so multi-faceted that I quite literally don’t know where to begin. So this page is a modest and random collection of thoughts and suggestions I’ve collected so far. I’ll add to it over time.

Those who don’t stay current with the literature think my perspective is contrarian.

Christopher Johnson, PT

High standards

Set high standards for yourself. The defining feature of most reporting on science is that it sucks donkey balls. So be better. Be a perfectionist.

(But still try to have some fun — because the second biggest problem is that it’s incredibly easy to slip into a coma reading about this stuff.)

Top journals for pain and injury science

Which scientific journals publish the most and best randomized controlled trials of physical therapy treatments? I wish I’d had this information a decade ago.

From the Department of Now You Tell Me, the journal Physical Therapy published an extremely useful set of reading recommendations for geeky, science-loving manual therapists.1 Costa et al. ranked journals by a variety of criteria, such as the sheer volume of relevant content and the prestige of the journal (“impact factor”). Using their lists of the top five in each category, it was easy to compile my own customized top-ten list: I mashed up their various rankings into a single score (there’s a toy sketch of the mashup after the list), then tweaked it to give greater weight to experimental quality and to the subject matter of greatest interest to me, my readers, and most manual therapists (musculoskeletal pain, rehab, etc).

The winners are …

  2. Archives of Physical Medicine and Rehabilitation
  3. Pain
  4. Physical Therapy
  5. Stroke
  6. Clinical Rehabilitation
  7. Spine
  8. Lancet
  9. British Medical Journal
  10. Journal of the American Medical Association

And the number one journal to read …

  1. Journal of Physiotherapy
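
The mashup itself was nothing fancy; a spreadsheet would do. But for the programmatically inclined, here is a toy version of the idea. The journals, category ranks, and weights below are invented for illustration, not my actual numbers:

    # A toy version of mashing several journal rankings into one weighted
    # score. All ranks and weights here are invented for illustration.

    # Each journal's rank (1 = best) in a few hypothetical categories.
    rankings = {
        "Physical Therapy":        {"volume": 1, "impact": 3, "quality": 2},
        "Pain":                    {"volume": 2, "impact": 1, "quality": 3},
        "Clinical Rehabilitation": {"volume": 3, "impact": 4, "quality": 1},
    }

    # The "tweak": weight experimental quality more heavily than the rest.
    weights = {"volume": 1.0, "impact": 1.0, "quality": 2.0}

    def score(ranks):
        """Weighted sum of category ranks; lower is better."""
        return sum(weights[cat] * rank for cat, rank in ranks.items())

    for journal in sorted(rankings, key=lambda j: score(rankings[j])):
        print(f"{score(rankings[journal]):5.1f}  {journal}")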


Words of wisdom from the Physical Therapy article:

Physical therapists who are trying to keep up-to-date by reading the best available evidence on the effects of physical therapy interventions have to read more broadly than just physical therapy-specific journals. Readers of articles on physical therapy trials should be aware that high-quality trials are not necessarily published in journals with high impact factors.

Real professionals read journals

One of the places I dreaded most in graduate school was the “new journal desk” in the library, where all the science journals received the previous week were displayed, thousands of pages of them. Everyone would circle around it, teetering on the edge of panic attacks. All that available information seemed to taunt us with how out of control we felt—stupid, left behind, out of touch, and overwhelmed.

Robert M Sapolsky, Why Zebras Don’t Get Ulcers, 2004, p. 403

I’d like to briefly make the case for reading journals (or at least the best blogs about journals).

Millions of professionals around the world are regularly trying to help people with serious and chronic musculoskeletal pain, and yet most have never cracked open a medical journal — not even the ones most specialized and appropriate to their jobs. (Massage therapists and chiropractors in particular are guilty of this; personal trainers and naturopaths, too.) Most of those who do dabble in a little research reading tend to be skimmers and cherry-pickers, looking through abstracts for quick confirmation of their own views. The result is usually bogus citations for their clinic blog.

The world is full of anti-intellectual and anti-scientific flakes, and they currently have little difficulty getting trained and licensed. My prescription for these professions is harsh: keep out the flakes. Gatekeep with higher academic standards. If applicants can’t hack the science, they don’t get in the door, or they wash out early. Health care work should not be an option for amateurs with cute opinions like “I know what works” and “science doesn’t know everything.”2

Any serious healthcare profession knows this to be true. They have serious and credible entrance and competency standards for academic and clinical training. Professions that don’t follow that model have always struggled for legitimacy and access. Having said that, I think we should do more to help talented, smart and qualified LMTs, trainers and PTAs enter school and earn their DPT. I don’t think we do enough to facilitate that.

Jason Silvernail, DPT, DSc (online discussion)

Be one of those talented, smart professionals: read journals. But do it carefully. Because research shows that research is misleading!

Research shows research is misleading (and yet it’s surprising that the situation isn’t even worse)

Some of my readers often beg me to set my snarky, cranky, critical sights on Team Science for once. Okay. Here goes. (And it’s no big deal, of course, because science is quite neurotic and self-critical by nature. That’s why I love the big lug.)

Some French researchers went looking for misleading conclusions in scientific papers, and I’m sure you can guess where this is headed: nowhere good.3 As reported by Neil O’Connell for BodyInMind.org:

They looked for a selection of naughties: not reporting the results of the primary outcome, basing conclusions on secondary outcomes or the results of a sub-group analysis, presenting conclusions that are at odds with the data, claiming equivalence of efficacy in a trial not designed to test for it, and finally not considering the risk-benefit trade-off. Like a game of clinical trial Bullshit Bingo. They found evidence of misleading conclusions in 23% of reports. The only predictor of misleading conclusions was genuinely negative results. In trials with negative results the rate of misleading conclusions was, brace yourself, 45%.

Oh, is that all? Only 45%? I would have guessed about 70%. It sounds bad, but I really could not be any less surprised. Perhaps the conclusion is misleading?

This is hardly the first clue about this. Ioannidis famously taught us “Why Most Published Research Findings Are False” in 2005, even though the problem has been generally predictable since about 1795. And it’s obvious to nearly anyone who actually reads scientific papers, and not just their abstracts. That’s how I figured it out …

How I learned to distrust “conclusions” — a cautionary tale

Many moons ago, my journey towards the EBM light — that’s evidence-based medicine, if you haven’t been here before — began with a serious pain problem (iliotibial band syndrome) while I was training to be a massage therapist (an alarmingly evidence-free curriculum, alas). I had the important inspiration that, when the going gets serious, the serious go to the literature.

THE GREATEST THING EVER WRITTEN? In my books, yeah.

I really knew nothing about “the literature,” except that it was probably important. Indeed, all I really knew about science I’d learned from one book by Carl Sagan. I’d just finished reading Demon-Haunted World, which I was still fuming over, because I still wanted to believe in crop circles back then (no joke). It wasn’t until my second read a year later that I realized DHW was possibly THE GREATEST THING EVER WRITTEN.

So that was where I was as a “researcher” at the time that I actually started checking The Literature and reading scientific paper abstracts.

Abstracts were all I read, of course, and I took them all at face value. It was science! I was so impressed with myself just for using PubMed that it didn’t occur to me that abstracts are written by fallible people with agendas and funding worries and bosses and so forth. It was The Literature, dammit, and that was enough back then, and it was enough for me for at least a year. It was so easy! All I had to do was look shit up, and if an abstract had anything in it that sounded like it supported a point I was making, yahtzee! Make a footnote, instant credibility: citing science was like magic, ironically.

And then at some point I got particularly curious about some science and I actually read a whole scientific paper. I know, crazy, right? And guess what? The contents of that paper did not really square with the abstract. In fact, the paper’s “discussion” section and conclusion seemed distinctly at odds with the abstract.

Uh oh.4

Poop addendum: Neil wrapped up his description of this research by invoking a crap metaphor, as one does, to colourfully explain how the results of crap research often end up getting summarized in a way that makes them sound quite a lot better than they are: “You can’t polish a turd … but you can roll it in glitter.”5

Further down the rabbit hole

More highly relevant research also shows that study authors spin their own results — not just the media. Dr. Steve Novella wrote about it for Neurologica.

I have often had the impression that misleading spin in the discussion section of the paper is just as common as it is in the abstract — that is, the whole paper isn’t necessarily any better than the abstract. Recognizing that was just another step in the destruction of my innocence (1971–2003, RIP).

And it continued. Not only did I have to learn that the abstract doesn’t necessarily represent the paper, I also had to learn that the paper does not necessarily represent the data! And of course it goes deeper still, because the data may not represent reality. Down the rabbit hole!

Basically, there’s just an unbelievable number of ways trials can go wrong. Or be made to go wrong by “researchers” with virtually no real training in scientific methodology trying to prove their pet theories.6

That all might start to sound catastrophically embarrassing for science to some readers — is science hopeless? No: this is just why we never trust only one paper, or one source, and need lots (!) of replication and verification before getting cocky. “Proving” anything of any importance or complexity takes an enormous amount of checking, checking, checking from every angle for many years by many different researchers.

That’s just the job.

I used to rely on meta-analysis, but they are worse than laws & sausages, ceasing to inspire respect in proportion as we know how they are made.

Dr. Mark Crislip, "I Never Meta Analysis I Really Like"

PubMed research tip: use the Clinical Queries feature

PubMed is the “Google” for medical scientific articles. It’s a service of the U.S. National Library of Medicine that includes tens of millions of citations from MEDLINE and other life-science journals, going back to the 1950s, with links to full-text articles and other related resources.

I often use PubMed hundreds of times in a single month. It’s a resource that all health care professionals should be at least a little bit familiar with. Patients can get some use out of it as well. However, it can be overwhelming!

Here’s a truly valuable PubMed tip: use the Clinical Queries feature. It produces search results that are much more useful for clinicians, especially non-medical professionals. The basic science stuff (all the microbiology, say) gets filtered out … which makes the results a lot more intelligible for those of us who don’t wear lab coats for a living.
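
For the geeks: as far as I know, the same clinical-query filters can also be used programmatically, via PubMed’s E-utilities API, and “Therapy/Broad[filter]” is one of the standard Clinical Queries filters. Here is a minimal sketch in Python; the search term is just an example:

    # A minimal sketch of a Clinical Queries-style search via PubMed's
    # E-utilities API. The search term is just an example.
    import json
    import urllib.parse
    import urllib.request

    term = "(iliotibial band syndrome) AND (Therapy/Broad[filter])"
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "retmode": "json", "retmax": 10}))

    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)["esearchresult"]

    print(result["count"], "matching citations; first few PMIDs:")
    print(", ".join(result["idlist"]))

The same filter syntax works pasted straight into the regular PubMed search box, too.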

"Do people actually buy scientific papers?" The costs of full text costs, and scientific paper piracy

Open access (free) scientific papers are on the rise, and there’s a lot of good content available in publications like PLoS Medicine. Unfortunately, most scientific publishers still keep nearly all of their content behind a paywall, and charge fees that seem insane to the average person — usually more than $30 for a single paper — and to a lot of science-friendly bloggers who would like to read them.

I actually pay those prices, occasionally, because it’s an important part of my job. But less and less these days, because I have a lot of friends with institutional access to papers. Also, you can often just ask the authors and receive their papers for free: they are allowed to do that, and they’re “delighted” to. Dr. Holly Witteman (@hwitteman):

That $35 that scientific journals charge you to read a paper goes 100% to the publisher, 0% to the authors. If you just email us to ask for our papers, we are allowed to send them to you for free, and we will be genuinely delighted to do so. https://t.co/NHEfiOMLfG

— Dr. Holly Witteman (@hwitteman) July 6, 2018

I’m really not sure who the market for the full-price papers is, but the journals have been pricing and marketing content like this consistently for more than a decade, through a period of great change in every kind of publishing. Either they are hopeless dinosaurs clinging to a stupid old way of doing things, leaving money on the table (totally plausible) … or they know something about their market and business that we don’t (maybe) … or a bit of both.

Perhaps they’ve asked too much for too long, because a major source of pirated papers has emerged with an idealistic justification: the “Pirate Bay of scientific papers.” (I would call it the Napster of scientific papers, betraying my advanced age.) A researcher in Russia has made more than 48 million journal articles — almost every peer-reviewed paper ever published — freely available on Sci-Hub.se, “the first website in the world to provide mass & public access to research papers.” (Note that the domain name changes frequently; it’s always “Sci-Hub.something,” but you usually have to Google “sci-hub” to find the current one.) ScienceAlert.com reports:

It’s not just the poor who don’t have access to scientific papers — journal subscriptions have become so expensive that leading universities such as Harvard and Cornell have admitted they can no longer afford them. Researchers have also taken a stand — with 15,000 scientists vowing to boycott publisher Elsevier in part for its excessive paywall fees.

Since early 2016 the founder of Sci-Hub has been refusing to shut the site down, despite the inevitable court injunction and a lawsuit from a huge scientific publisher. *munches popcorn* And so, for now, if you’re willing to enter that ethical grey zone, you can easily get access to most papers. If not …

If you can’t get access to the full text, how much can you get from the abstract?

Almost every working day, I have to decide whether to pay for access to papers or pirate them — and I routinely choose neither, settling for whatever I think I can safely wring out of the abstract. Is it enough? Can you really do justice to the analysis of a scientific paper without reading the whole thing? Isn’t it a little like reviewing a movie by watching the trailer?

Not only is it possible, it’s necessary in many contexts — it’s simply not possible to read all the relevant science, let alone pay for it. Fortunately, the stakes are not equal for all citations. When citing to support an important or controversial point, don’t rely on the abstract — but for many other citations, the abstract is certainly enough, if you use it cautiously.

Here are some mitigating considerations and tips for abstract-based science journalism:

Comparing an abstract to a movie trailer is fun, but it’s a poor analogy. Abstracts have very different goals, and are not nearly as fragmentary and shallow as most trailers. In general, what actually matters in the paper is represented in the abstract.

Reader question: “Could you possibly share a few ideas about how to stay current on relevant literature? What is your system?”

I use a lot of RSS feeds from journals, and scan those daily for relevant headlines, using good power tools: Reeder for Mac (an RSS reader) and Feedbin (a paid RSS subscription/syncing service and online reader). If you don’t know about RSS, see RSS in Plain English (5:00).
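
Some of that scanning is easy to automate. Here is a minimal sketch using Python’s feedparser library (a third-party package); the feed URL and keywords are placeholders, not my actual lists:

    # A minimal sketch of a daily scan of journal RSS feeds for relevant
    # headlines. Requires the third-party feedparser package
    # (pip install feedparser). Feed URL and keywords are placeholders.
    import feedparser

    feeds = ["https://example.com/journal-toc.rss"]  # your journal feeds here
    keywords = {"low back pain", "tendinopathy", "exercise"}

    for url in feeds:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            if any(kw in title.lower() for kw in keywords):
                print(title)
                print("   ", entry.get("link", ""))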

I also subscribe to the NEJM’s JournalWatch service, which gives me good-quality summaries.

But, weirdly, social media is now probably my most important source of sources. I have a strong network of really smart colleagues who are also trying to stay current, and our collective efforts are extremely effective. As a group, we don’t miss much, and it’s often super clear which recent papers are the most interesting, based on who shares what, how often, and with how much enthusiasm. When a particularly intriguing, good-quality paper is published, I’m going to find out!

So, cultivate virtual friendships with colleagues and mentors, and join Facebook discussion groups and fan pages where research gets discussed! They aren’t hard to find.

It’s a trap! Things to watch out for

New section in mid-2018, which I’ll slowly expand, as is my way.

Predatory journals. Since the 1990s, a shockingly high percentage of research has been published without peer review, virtually guaranteeing poor quality. Even real peer review is imperfect, but when it’s missing entirely? •shudder• There is now an actual industry — a huge industry — of fraudulent journals that publish for pay.89 When evaluating a paper, always check for signs of a fake journal: no significant history, a barely-there web presence, amateurish and sloppy production, no impact factor, and so on.

“Active” controls. Many non-drug treatments are tricky to test scientifically, because it’s hard to compare them to a true fake treatment (like comparing a drug to a sugar pill). Many scientists resort to comparing a treatment to some other treatment that we hope is “neutral” — maybe helpful, but not very helpful. Comparing a test treatment to an “active control” is likely to be misleading, because the active control pollutes the experiment with all kinds of unknown variables. These kinds of studies “need to be interpreted with caution.”10 It doesn’t mean that they are all wrong, but it’s a significant and basic limitation that has to be taken seriously. And “these kinds of studies” constitute a large percentage of studies in musculoskeletal medicine …

There’s a lot more, and I’ll write about more of them eventually. Meanwhile, here are some other good sources:

“How to prove that your therapy is effective, even when it is not: a guideline”
Cuijpers P, Cristea IA. Epidemiol Psychiatr Sci. 2016 Oct;25(5):428–435.

Infographic: “A Rough Guide to Spotting Bad Science”

This is a good list. There are some specific concerns about medical science that aren’t covered here, but it’s about 85% applicable. It was put together by CompoundChem.com, which seems to produce lots of high quality infographics about chemistry.

About Paul Ingraham


I am a science writer in Vancouver, Canada. I was a Registered Massage Therapist for a decade and the assistant editor of ScienceBasedMedicine.org for several years. I’ve had many injuries as a runner and ultimate player, and I’ve been a chronic pain patient myself since 2015. Full bio. See you on Facebook or Twitter.

Related Reading

I’ve written quite a lot over the years about science, research, and citing.

What’s new in this article?

2018 — Added tip about asking scientific paper authors for free full-text.

2018 — Started a new section: “It’s a trap! Things to watch out for.” Just a couple items to start.

2016 — Added a new section about how I find out about good new research.

2014, 2015 — A few unlogged updates.

2013 — Publication.

Notes

  1. Costa LOP, Moseley AM, Sherrington C, et al. Core Journals That Publish Clinical Trials of Physical Therapy Interventions. Phys Ther. 2010 Aug. PubMed 20724420 ❐
  2. “Science doesn’t know everything” is a classic, common non-sequitur from people defending quackery. It’s true but obvious, and irrelevant to their point … which is that their kooky treatment beliefs are so exotic that they are immune to investigation and criticism, beyond the reach of science. Nope! Not even close!

    It’s like declaring a leaky old canoe to be seaworthy because we don’t yet know everything about the depths of the ocean.

  3. Mathieu S, Giraudeau B, Soubrier M, Ravaud P. Misleading abstract conclusions in randomized controlled trials in rheumatology: comparison of the abstract conclusions and the results section. Joint Bone Spine. 2012 May;79(3):262–7. PubMed 21733728 ❐
  4. I remember that sinking feeling quite well. “Oh my,” I thought. “I’m going to have to have a closer look at an awful lot of papers. I’m going to be busy well into middle age!” Checking the literature got a great deal less sexy all at once. And now, fifteen years later — and just getting started on middle age — I am cynically surprised that the rate of misleading conclusions in abstracts is only a piddling 45%. Of course, it probably is much worse with the kind of research I mostly deal with: trials of complementary and alternative medicine, and a lot of obscure or dodgy or simplistic pet-theory cures for common pain problems.
  5. Fun imagery! But, oddly enough, you can polish a turd. It’s one of the more peculiar things the MythBusters have tested, demonstrating once again that it’s important to check whether reality actually works the way you expect it to. Go Team Science!
  6. Cuijpers P, Cristea IA. How to prove that your therapy is effective, even when it is not: a guideline. Epidemiol Psychiatr Sci. 2016 Oct;25(5):428–435. PubMed 26411384 ❐
  7. Special pleading is an informal fallacy: claiming an exception to a general trend or principle without actually establishing that it is, either using a thin rationalization or even just using the exception as evidence for itself (“the rules don’t apply to my claim because my claim is an exception to the rule”).

  8. Beall J. What I learned from predatory publishers. Biochem Med (Zagreb). 2017 Jun;27(2):273–278. PubMed 28694718 ❐ PainSci Bibliography 52870 ❐

    ABSTRACT


    This article is a first-hand account of the author's work identifying and listing predatory publishers from 2012 to 2017. Predatory publishers use the gold (author pays) open access model and aim to generate as much revenue as possible, often foregoing a proper peer review. The paper details how predatory publishers came to exist and shows how they were largely enabled and condoned by the open-access social movement, the scholarly publishing industry, and academic librarians. The author describes tactics predatory publishers used to attempt to be removed from his lists, details the damage predatory journals cause to science, and comments on the future of scholarly publishing.

  9. Gasparyan AY, Yessirkepov M, Diyanova SN, Kitas GD. Publishing Ethics and Predatory Practices: A Dilemma for All Stakeholders of Science Communication. J Korean Med Sci. 2015 Aug;30(8):1010–6. PubMed 26240476 ❐ PainSci Bibliography 52903 ❐ “Over the past few years, numerous illegitimate or predatory journals have emerged in most fields of science.”
  10. Travers MJ, Bagg MK, Gibson W, O’Sullivan K, Palsson TS. Better than what? Comparisons in low back pain clinical trials. Br J Sports Med. 2018 Feb. PubMed 29420237 ❐

Permalinks

https://www.painscience.com/articles/research-tips.php

PainScience.com/research_tips
