
Studying the Studies

Tips and musings about how to understand (and write about) pain and musculoskeletal health science

Updated (first published 2013)
by Paul Ingraham, Vancouver, Canada
I am a science writer and a former Registered Massage Therapist with a decade of experience treating tough pain cases. I was the Assistant Editor of ScienceBasedMedicine.org for several years. I’ve written hundreds of articles and several books, and I’m known for readable but heavily referenced analysis, with a touch of sass. I am a runner and ultimate player. • more about me • more about PainScience.com

I am occasionally asked for advice on “how to do research” (this is usually how it’s framed) because PainScience.com has a reputation for diligence in this department — it’s one of the major distinguishing features of the site, along with an odd sense of humour and a peculiar obsession with salamanders. I read scientific studies and “translate” them for readers. I footnote my articles quite richly, where it counts, and I have a simply ginormous annotated bibliography.

I don’t do real research, of course. That’s for scientists. I hang out with scientists. I have beers with scientists. What I do is secondary research — not primary research in a lab, clinic, or the field. Secondary research is research about primary research.

Despite its second-class-citizen status, secondary research is an art and science in its own right, the core competency of the science journalist. There is not remotely a “right” way to do it — it’s so multi-faceted that I quite literally don’t know where to begin. So this page is a modest and random assortment of thoughts and suggestions I’ve collected so far. I’ll add to it over time.

High standards

Set high standards for yourself. The defining feature of most reporting on science is that it sucks donkey balls. So be better. Be a perfectionist.

(But still try to have some fun — because the second biggest problem is that it’s incredibly easy to slip into a coma reading about this stuff.)

Top journals for pain and injury science

Which scientific journals publish the most and best randomized controlled trials of physical therapy treatments? I wish I’d had this information a decade ago.

From the Department of Now You Tell Me, the journal Physical Therapy published an extremely useful set of reading recommendations for geeky, science-loving manual therapists.1 Costa et al. ranked journals by a variety of criteria, such as the sheer volume of relevant content and the prestige of the journal (“impact factor”). Using their lists of the top five in each category, it was easy for me to compile my own customized top ten list: I mashed up their various rankings into a single score, and then tweaked it to give greater weight to experimental quality and to the subject matter of greatest interest to me, my readers, and most manual therapists (musculoskeletal pain, rehab, etc). There’s a sketch of that kind of score-mashing after the list.

The winners are …

  10. Archives of Physical Medicine and Rehabilitation
  9. Pain
  8. Physical Therapy
  7. Stroke
  6. Clinical Rehabilitation
  5. Spine
  4. Lancet
  3. British Medical Journal
  2. Journal of the American Medical Association

And the number one journal to read …

  1. Journal of Physiotherapy
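For the curious, here is roughly what that kind of mashup looks like. This is a minimal sketch in Python; the categories, weights, ranks, and journal subsets are invented for illustration, not the actual numbers behind the list above.

```python
# Toy version of mashing several journal rankings into one score.
# Everything here is illustrative; invent your own categories and weights.

# Each category maps journals to their rank in that category (1 = best).
rankings = {
    "trial_volume":  {"Phys Ther": 1, "Spine": 2, "Pain": 3},
    "impact_factor": {"Lancet": 1, "Pain": 2, "Spine": 3},
    "trial_quality": {"Pain": 1, "Phys Ther": 2, "Lancet": 3},
}

# Weight the categories: experimental quality counts double,
# prestige (impact factor) counts half.
weights = {"trial_volume": 1.0, "impact_factor": 0.5, "trial_quality": 2.0}

UNRANKED = 6  # journals missing from a category get a poor default rank

def combined_score(journal: str) -> float:
    """Weighted sum of ranks across all categories; lower is better."""
    return sum(
        weights[category] * ranks.get(journal, UNRANKED)
        for category, ranks in rankings.items()
    )

journals = {j for ranks in rankings.values() for j in ranks}
for journal in sorted(journals, key=combined_score):
    print(f"{combined_score(journal):5.1f}  {journal}")
```

Because ranks are “lower is better,” so is the weighted sum, and the final top ten is just a sort by score.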

Words of wisdom from the Physical Therapy article:

Physical therapists who are trying to keep up-to-date by reading the best available evidence on the effects of physical therapy interventions have to read more broadly than just physical therapy-specific journals. Readers of articles on physical therapy trials should be aware that high-quality trials are not necessarily published in journals with high impact factors.

Real professionals read journals

Those who don’t stay current with the literature think my perspective is contrarian.

Christopher Johnson, PT, www.Head2ToeSystems.com

I’d like to briefly make the case for reading journals (or at least the best blogs about journals).

Millions of professionals around the world are regularly trying to help people with serious and chronic musculoskeletal pain, and yet most have never cracked open a medical journal — not even the ones most specialized and appropriate to their jobs. (Massage therapists and chiropractors in particular are guilty of this; personal trainers and naturopaths, too.) Most of those who do dabble in a little research reading tend to be skimmers and cherry pickers, looking through abstracts for quick confirmation of their own views. The result is usually bogus citations for their clinic blog.

The world is full of anti-intellectual and anti-scientific flakes, and they currently have little difficulty getting trained and licensed. My prescription for these professions is harsh: keep out the flakes. Gatekeep with higher academic standards. If applicants can’t hack the science, they don’t get in the door, or they wash out early. Health care work should not be an option for amateurs with cute opinions like “I know what works” and “science doesn’t know everything.”2

Any serious healthcare profession knows this to be true. They have serious and credible entrance and competency standards for academic and clinical training. Professions that don’t follow that model have always struggled for legitimacy and access. Having said that, I think we should do more to help talented, smart, and qualified LMTs, trainers, and PTAs enter school and earn their DPT. I don’t think we do enough to facilitate that.

Jason Silvernail, DPT, DSc (online discussion)

Be one of those talented, smart professionals: read journals. But do it carefully. Because…

Research shows research is misleading (and yet it’s surprising that the situation isn’t even worse)

“I only read medical journals for the conclusions.”

Some of my readers beg me to set my snarky, cranky, critical sights on Team Science for once. Okay. Here goes. (And it’s no big deal, of course, because science is quite neurotic and self-critical by nature. That’s why I love the big lug.)

Some French researchers went looking for misleading conclusions in scientific papers, and I’m sure you can guess where this is headed: nowhere good.3 As reported by Neil O’Connell for BodyInMind.org:

They looked for a selection of naughties: not reporting the results of the primary outcome, basing conclusions on secondary outcomes or the results of a sub-group analysis, presenting conclusions that are at odds with the data, claiming equivalence of efficacy in a trial not designed to test for it, and finally not considering the risk-benefit trade-off. Like a game of clinical trial Bullshit Bingo. They found evidence of misleading conclusions in 23% of reports. The only predictor of misleading conclusions was genuinely negative results. In trials with negative results the rate of misleading conclusions was, brace yourself, 45%.

Oh, is that all? Only 45%? I would have guessed about 70%. It sounds bad, but I really could not be any less surprised. Perhaps the conclusion is misleading?

This is hardly the first clue about this. Ioannidis famously taught us “Why Most Published Research Findings Are False” in 2005, even though the problem has been generally predictable since about 1795. And it’s obvious to nearly anyone who actually reads scientific papers, and not just their abstracts. That’s how I figured it out …

How I learned to distrust “conclusions” — a cautionary tale

Many moons ago, my journey towards the EBM light — that’s evidence-based medicine, if you haven’t been here before — began with a serious pain problem (iliotibial band syndrome) while I was training to be a massage therapist (an alarmingly evidence-free curriculum, alas). I had the important inspiration that, when the going gets serious, the serious go to the literature.

THE GREATEST THING EVER WRITTEN? In my books, yeah.

I really knew nothing about “the literature,” except that it was probably important. Indeed, all I really knew about science I’d learned from one book by Carl Sagan. I’d just finished reading Demon-Haunted World, which I was still fuming over, because I still wanted to believe in crop circles back then (no joke). It wasn’t until my second read a year later that I realized DHW was possibly THE GREATEST THING EVER WRITTEN.

So that was where I was as a “researcher” at the time that I actually started checking The Literature and reading scientific paper abstracts.

Abstracts were all I read, of course, and I took them all at face value. It was science! I was so impressed with myself just for using PubMed that it didn’t occur to me that abstracts are written by fallible people with agendas and funding worries and bosses and so forth. It was The Literature, dammit, and that was enough back then, and it was enough for me for at least a year. It was so easy! All I had to do was look shit up, and if an abstract had anything in it that sounded like it supported a point I was making, yahtzee! Make a footnote, instant credibility: citing science was like magic, ironically.

And then at some point I got particularly curious about some science and I actually read a whole scientific paper. I know, crazy, right? And guess what? The contents of that paper did not really square with the abstract. In fact, the paper’s “discussion” section and conclusion seemed distinctly at odds with the abstract.

Uh oh.4

Poop addendum: Neil wrapped up his description of this research by invoking a crap metaphor, as one does, to colourfully explain how the results of crap research often end up getting summarized in a way that makes them sound quite a lot better than they are: “You can’t polish a turd… but you can roll it in glitter.”5

Further down the rabbit hole

More highly relevant research also shows that study authors spin their own results — not just the media. Dr. Steve Novella wrote about it for NeuroLogica.

I have often had the impression that misleading spin in the discussion section of the paper is just as common as it is in the abstract — that is, the whole paper isn’t necessarily any better than the abstract. Recognizing that was just another step in the destruction of my innocence (1971–2003, RIP).

And it continued. Not only did I have to learn that the abstract doesn’t necessarily represent the paper, I also had to learn that the paper does not necessarily represent the data! And of course it goes deeper still, because the data may not represent reality. Down the rabbit hole!

That all might start to sound catastrophically embarrassing for science to some readers — is science hopeless? No: this is just why we never trust only one paper, or one source, and need lots (!) of replication and verification before getting cocky. “Proving” anything of any importance or complexity takes an enormous amount of checking, checking, checking from every angle for many years by many different researchers.

That’s just the job.

PubMed research tip: use the Clinical Queries feature

PubMed is the “Google” for medical scientific articles: a service of the U.S. National Library of Medicine that indexes tens of millions of citations from MEDLINE and other life science journals, going back to the 1950s. PubMed includes links to full-text articles and other related resources.

I use PubMed hundreds of times a month. It’s a resource that all health care professionals should be at least a little bit familiar with. Patients can get some use out of it as well. However, it can be overwhelming!

Here’s a truly valuable tip: use PubMed’s Clinical Queries feature. It produces search results that are much better for clinicians, especially non-medical professionals. The basic science stuff (all the microbiology, say) gets filtered out … which leaves results that are a lot more intelligible for those of us who don’t wear lab coats for a living.
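If you prefer scripting to clicking, the same kind of filtered search can be run through NCBI’s E-utilities API. Here’s a minimal sketch in Python using the documented esearch endpoint and the Therapy/Narrow clinical-queries filter; the example topic and the helper function name are just my inventions.

```python
# Minimal sketch: a Clinical Queries-style PubMed search via NCBI's
# E-utilities API, restricted to clinically oriented therapy trials.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def clinical_search(topic: str, retmax: int = 10) -> list[str]:
    """Return PubMed IDs for a topic, filtered with the Therapy/Narrow
    clinical-queries filter (specific, clinically relevant results)."""
    term = f"({topic}) AND Therapy/Narrow[filter]"
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
        result = json.load(response)
    return result["esearchresult"]["idlist"]

# Example: therapy trials for iliotibial band syndrome.
for pmid in clinical_search("iliotibial band syndrome"):
    print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```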

"Do people actually buy scientific papers?" The costs of full text costs, and scientific paper piracy

Open access (free) scientific papers are on the rise, and there’s a lot of good content available in publications like PLoS Medicine. Unfortunately, most scientific publishers still keep nearly all of their content behind a paywall, and charge fees for access that seem insane to the average person — usually more than $30 for a single paper — and to a lot of science-friendly bloggers who would like to read them.

I actually pay those prices, occasionally, because it’s an important part of my job. But less and less these days, because I have a lot of friends with institutional access to papers. I’m really not sure who the market is, but the journals have been pricing and marketing content like this consistently for more than a decade, through a period of great change in every kind of publishing. Either they are hopeless dinosaurs clinging to a stupid old way of doing things, leaving money on the table (totally plausible) … or they know something about their market and business that we don’t (maybe) … or a bit of both.

Perhaps they’ve asked too much for too long, because a major source of pirated papers has emerged with an idealistic justification: the “Pirate Bay of scientific papers.” (I would call it the Napster of scientific papers, betraying my advanced age.) A researcher in Russia has made more than 48 million journal articles — almost every single peer-reviewed paper ever published — freely available on Sci-Hub.io, “the first website in the world to provide mass & public access to research papers.” ScienceAlert.com reports:

It’s not just the poor who don’t have access to scientific papers — journal subscriptions have become so expensive that leading universities such as Harvard and Cornell have admitted they can no longer afford them. Researchers have also taken a stand — with 15,000 scientists vowing to boycott publisher Elsevier in part for its excessive paywall fees.

As of early 2016, the founder of Sci-Hub was refusing to shut the site down, despite an inevitable court injunction and a lawsuit from a huge scientific publisher. *munches popcorn* And so, for now, if you’re willing to enter that ethical gray zone, you can easily get access to most papers. If not…

If you can't get access to the full text, how much can you get from the abstract?

Almost every working day, I have to decide whether to pay for access to papers, pirate them, or settle for whatever I think I can safely wring out of the abstract. I routinely settle. Is it enough? Can you really do justice to a scientific paper without reading the whole thing? Isn’t it a little like reviewing a movie by watching the trailer?

Not only is it possible, it's necessary in many contexts — it's simply not possible to read all the relevant science, let alone pay for it. Fortunately, the stakes are not equal for all citations. When citing to support more important and controversial points, don't rely on the abstract — but for many other citations, the abstract is certainly enough, if you do it cautiously.

Here are some mitigating considerations and tips for abstract-based science journalism:

Comparing an abstract to a movie trailer is fun, but it’s a poor analogy. Abstracts have very different goals, and are not nearly as fragmentary and shallow as most trailers. In general, what actually matters in the paper is represented in the abstract.

Reader question: “Could you possibly share a few ideas about how to stay current on relevant literature? What is your system?”

I use a lot of RSS feeds from journals, and scan those daily for relevant headlines, using good power tools: Reeder for Mac (an RSS reader) and Feedbin (a paid RSS subscription/syncing service and online reader). If you don’t know about RSS, see RSS in Plain English (a 5-minute video).
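For a sense of what that daily scan amounts to, here’s a minimal do-it-yourself sketch in Python using the feedparser library. The feed URLs and keywords are placeholders, not real journal feeds; substitute the RSS feeds your favourite journals actually publish.

```python
# DIY version of scanning journal RSS feeds for relevant headlines.
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.com/journal-of-physiotherapy.rss",  # hypothetical URL
    "https://example.com/pain-journal.rss",              # hypothetical URL
]
KEYWORDS = ["low back pain", "tendinopathy", "massage", "exercise"]

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        title = entry.get("title", "")
        if any(keyword in title.lower() for keyword in KEYWORDS):
            print(f"{title}\n  {entry.get('link', '')}")
```

A reader app like Reeder does the same job with a nicer interface, and remembers what you’ve already seen.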

I also subscribe to the NEJM’s JournalWatch service, which gives me good-quality summaries.

But weirdly, social media is probably the most important source of sources for me these days. I have a strong network of really smart colleagues who are also trying to stay current, and our collective efforts are extremely effective. As a group, we don’t miss much, and it’s often super clear what the most interesting recent papers are, based on who shares what, how often, and with how much enthusiasm. When a particularly intriguing and good quality paper is published, I’m going to find out!

So, cultivate virtual friendships with colleagues and mentors, and join Facebook discussion groups and fan pages where research gets discussed! They aren’t hard to find.


About Paul Ingraham


I am a science writer and former massage therapist, and I was the assistant editor of ScienceBasedMedicine.org for several years. I have had my share of injuries and pain challenges as a runner and ultimate player. My wife and I live in downtown Vancouver, Canada. See my full bio and qualifications, or my blog, Writerly. You might run into me on Facebook or Twitter.

Related Reading

I’ve written quite a lot over the years about science, research, and citing.

What’s new in this article?

2016: Added a new section about how I find out about good new research.

A few unlogged updates.

Notes

  1. Costa LO, Moseley AM, Sherrington C, et al. Core Journals That Publish Clinical Trials of Physical Therapy Interventions. Phys Ther. 2010 Aug. PubMed #20724420.
  2. “Science doesn’t know everything” is a classic, common non sequitur from people defending quackery. It’s true but obvious, and irrelevant to their point … which is that their kooky treatment beliefs are so exotic that they are immune to investigation and criticism, beyond the reach of science. Nope! Not even close!

    It’s like declaring a leaky old canoe to be seaworthy because we don’t yet know everything about the depths of the ocean.
  3. Mathieu S, Giraudeau B, Soubrier M, Ravaud P. Misleading abstract conclusions in randomized controlled trials in rheumatology: comparison of the abstract conclusions and the results section. Joint Bone Spine. 2012 May;79(3):262–7. PubMed #21733728.
  4. I remember that sinking feeling quite well. “Oh my,” I thought. “I’m going to have to have a closer look at an awful lot of papers. I’m going to be busy well into middle age!” Checking the literature got a great deal less sexy all at once. And now, fifteen years later — and just getting started on middle age — I am cynically surprised that the rate of misleading conclusions in abstracts is only a piddling 45%. Of course, it probably is much worse with the kind of research I mostly deal with: trials of complementary and alternative medicine, and a lot of obscure or dodgy or simplistic pet-theory cures for common pain problems.
  5. Fun imagery! But, oddly enough, you can polish a turd. It’s one of the more peculiar things the MythBusters have tested, demonstrating once again that it’s important to check whether reality actually works the way you expect it to. Go Team Science!