Reader question: Could you possibly share a few ideas about how to stay current on relevant literature? What is your system?
I use a lot of RSS feeds from journals, and scan them daily for relevant headlines, using a couple of good power tools: Reeder for Mac (an RSS reader) and Feedbin (a paid RSS subscription/syncing service and online reader). If you don’t know about RSS, see RSS in Plain English 5:00.
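For the technically inclined, that daily scan is also easy to rough out in code. Here’s a minimal sketch in Python using the feedparser library; the feed URLs and keywords are placeholders I made up, not real journal feeds:

```python
# A rough sketch of a daily journal-feed scan, using the third-party
# "feedparser" library (pip install feedparser). The feed URLs and
# keywords below are hypothetical placeholders.
import feedparser

FEEDS = [
    "https://example.com/journal-a/rss",  # hypothetical journal feed
    "https://example.com/journal-b/rss",  # hypothetical journal feed
]
KEYWORDS = ["pain", "placebo", "massage", "exercise"]

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        title = entry.get("title", "")
        # Flag headlines that mention any keyword of interest.
        if any(kw.lower() in title.lower() for kw in KEYWORDS):
            print(f"{feed.feed.get('title', url)}: {title}")
            print(f"  {entry.get('link', '')}")
```

Not a replacement for a good reader like Reeder, just a glimpse of how simple the plumbing is.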
I also subscribe to the NEJM’s JournalWatch service, which gives me good-quality summaries.
But, weirdly, social media is probably now my most important source of sources. I have a strong network of really smart colleagues who are also trying to stay current, and our collective efforts are extremely effective. As a group, we don’t miss much, and it’s often super clear which recent papers are the most interesting, based on who shares what, how often, and with how much enthusiasm. When a particularly intriguing and good-quality paper is published, I’m going to find out!
So, cultivate virtual friendships with colleagues and mentors, and join Facebook discussion groups and fan pages where research gets discussed! They aren’t hard to find.
I’ve updated and expanded PainScience.com more in 2016 than in the last three years put together. In theory, I’m supposed to blog about updates, telling readers what’s new on the site, but I just can’t keep up: there are too many updates! So I’m now blogging about them en masse.
I am also now logging those updates, all of them — no longer just a select few. Readers can now see a list of updates at the end of dozens of featured articles: every correction and new citation, anything added or removed.
Like good footnotes, this sets PainScience.com apart. When’s the last time you read a blog post and found a list of 30 updates and upgrades made to that page over a period of several years? This transparency is in the spirit of the editing history available for Wikipedia pages. Footnotes are more useful for readers, but the update logs are important: they demonstrate an auditable long-term commitment to quality and accuracy. Although they are “fine print,” I think they are more meaningful than 98% of the comments that most Internet pages waste pixels on.
As I get serious about update logging, I also get better stats. Here are some highlights from 2016:
The new articles are:
Some of the most heavily updated and revised articles are:
A post for the pros today, summarizing some excellent treatment principles (described in detail in Don't Freak Out: Treating Pain with Simple Fundamentals, by Greg Lehman). These are all great points, but the most neglected and important is probably “4. Pain is more about sensitivity than about injury.”
This sure sounds great (maybe a little too great): stimulate the vagus nerve with an implant, et voilà, less systemic inflammation. There’s broad biological plausibility here, but almost no evidence. So far this hinges only on the results of Koopman et al., who tested it on humans and reported that “these results establish that vagus nerve stimulation targeting the inflammatory reflex modulates TNF production and reduces inflammation in humans.”
Established, eh? Not without replication! Which is obviously badly needed here. All kinds of data hijinks could be hiding in a study that technical.
My main concern is the use of the word “significantly” in the abstract, without any details (effect size in particular). All too often that wording, without clarification, means there was a statistically significant but clinically trivial result. With many treatment trials I can go digging for the effect size to confirm, but not here: the paper is too technical for me to form any meaningful impression without spending an hour on it, and even then it might not be clear. And even if the paper does indicate a clinically meaningful result, it’s still got “too good to be true” written all over it and may well prove difficult to reproduce.
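To illustrate the general problem (with made-up numbers, nothing to do with this particular trial), here’s a toy simulation in Python: given a big enough sample, even a clinically trivial effect is “statistically significant.”

```python
# Toy demonstration (entirely hypothetical numbers): a tiny effect
# becomes "statistically significant" if the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000  # a big trial

# Pain scores on a 0-100 scale; the "treatment" shaves off just 1 point.
control = rng.normal(loc=50, scale=15, size=n)
treated = rng.normal(loc=49, scale=15, size=n)

t, p = stats.ttest_ind(control, treated)
diff = control.mean() - treated.mean()
# Cohen's d: mean difference divided by the pooled standard deviation.
d = diff / np.sqrt((control.std()**2 + treated.std()**2) / 2)

print(f"p-value: {p:.4f}")  # typically well under 0.05 at this sample size
print(f"mean difference: {diff:.2f} points on a 100-point scale")
print(f"effect size (Cohen's d): {d:.2f}")  # trivially small
```

With thousands of subjects, a single point on a 100-point pain scale reliably earns a p-value under 0.05, while the effect size stays trivially small. That’s exactly the scenario an abstract can hide behind the word “significantly.”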
But it’s a genuinely interesting topic, I think (vagus nerve stimulation in particular, and systemic inflammation in general).
One of my colleagues shared an example of particularly bad, overconfident physical therapy on Facebook (doesn’t matter what, there’s a zillion of ‘em). Sigurd Mikkelsen, a Norwegian physiotherapist of my acquaintance, composed this beautiful comment, which would apply to any claim of miraculous treatment efficacy:
“The ultimate paradox of pain and therapy: the problem is not that nothing works. The problem is ANYTHING can work… usually just long enough for someone to empty their wallet!”
That’s pure &#@!% poetry right there. Bloody brilliant. Dozens of us therapy wonks liked it, so much so that Adam Meakins made this picture out of it:
But I’d like to elaborate a bit for my readers, so here’s the wordier version…
These days it seems like science is telling us that “nothing works”: every study of practically every treatment method seems to be bad news. But there’s this maddening contradiction between all that bad news and what professionals and patients are routinely experiencing, which is that almost anything seems to work, at least at first. This glaring disconnect between clinical experience and science has caused many arguments. But as the science creeps forward, the key to the puzzle has been coming into focus…
It’s not quite that “nothing works”; it’s that anything really can work, but usually only temporarily, because that is the nature of pain. No matter what’s causing it, pain can be tuned by any comforting and reassuring experience, and good therapists can cleverly fiddle with ten thousand variables to create that experience for their clients, creating potent illusions of efficacy... but, in most cases, the benefits don’t last long, or they last just long enough for natural recovery to assert itself, creating a strong impression of a true cure. Generations of therapists have made a living by “amusing the patient while nature cures the disease,” creating an endless stream of elaborate treatment rationales, entire methods of therapy, commercial empires, all based on the idea that they are “fixing” something, when in fact 95% of it is just theatrical, irrelevant variations on the same basic principle. They almost all work a little bit for a while for the same reason, but everyone’s selling a different reason. Therapists really are helping people... but not the way most of them think.
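If you like, you can watch that illusion of efficacy emerge from pure arithmetic. Here’s a toy simulation in Python (every number is invented): pain fluctuates randomly, “patients” book an appointment on their worst day, and a treatment that does literally nothing still appears to work.

```python
# Toy simulation of "amusing the patient while nature cures the disease":
# pain fluctuates randomly, people seek treatment when it's at its worst,
# and a completely inert treatment still "works." All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_days = 1000, 60

# Each patient's pain wanders around a personal baseline (0-10 scale).
baseline = rng.uniform(3, 7, size=n_patients)
noise = rng.normal(0, 1.5, size=(n_patients, n_days))
pain = np.clip(baseline[:, None] + noise, 0, 10)

# Patients book an appointment on their single worst day...
worst_day = pain.argmax(axis=1)
pain_at_visit = pain[np.arange(n_patients), worst_day]

# ...and report back a week later. The "treatment" did nothing at all.
followup = np.minimum(worst_day + 7, n_days - 1)
pain_at_followup = pain[np.arange(n_patients), followup]

print(f"average pain at visit:     {pain_at_visit.mean():.1f}/10")
print(f"average pain a week later: {pain_at_followup.mean():.1f}/10")
```

The week-later improvement is pure regression to the mean, with zero treatment effect in the model, and it would feel like a cure to every one of those simulated patients.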
Sigurd condensed all that down to an artful handful of words. My own explanation, much wordier, may be helpful to many people. But boy did he nail it! Thanks, Sigurd.
I asked Sigurd if I had translated his comment well. Here’s his reply, worthwhile in its own right:
That is fantastic, Paul! Thanks a lot for elaborating on the content of that paradox, and you’re of course spot on. I love how you put it directly into its daily-life setting — that there is a contradiction between that constant bullying by evidence and what therapists/patients experience. What happens then is again succinctly summarized by Voltaire in that quote — the art of medicine/therapy is amusing the patient while nature cures the disease. The twist here is that this will happen whether or not the therapist or the patient knows that they’re in that exact theatrical play.
From Humphrey and Skoyles:
“When people recover from illness under the influence of fake treatments, they must of course in reality be healing themselves. But if and when people have the capacity to heal themselves by their own efforts, why do they not simply get on with it? Why ever should they wait for third-party permission — from the shaman or the sugar pill — to heal themselves?”
Anyway, the dilemma here is: so what? Should I just go open up a coffee shop instead, “because nothing matters and nothing is true anymore,” or is there another way?
To quote (since we’re already in that quoty mood) Peter O’Sullivan:
“...we have to change what we value in a consultation. The advice we give and the strategies we empower people with are maybe way more important than the (manual) techniques we apply.”
“...but I’m in there with my hands. Because touch is a powerful communication tool that can guide people to safely move. The (manual) skills are very useful, but the thinking is different.”
Since anything can work, we need to make ethical and sustainable decisions about how to use manual touch, and for what goal. It then becomes much more about being a manual “strategist” or “process manager” for the processes and phases ahead in a therapeutic pathway. And that I find hugely rewarding.
Postural laziness is what people picture when they think of poor posture. Thanks to the Puritans.
Most people are at least dimly aware that sexual uptightness is a Puritan thing, that the Puritans bequeathed to England and her colonies the notion that pleasure is evil … and what’s more pleasurable than sex? (Possibly massage, and I doubt they liked that either.) Few people know that the Puritans also gave us the idea that rigid posture implies moral righteousness and strength of character. Postural laziness is a great moral failing in the Puritanical world view, which still pollutes the cultural DNA of modern civilization to a shocking degree. People still exaggerate the value of “good posture” for this reason, mostly unconsciously.
This is a brief excerpt from Does Posture Correction Matter? Posture correction strategies and exercises … and some reasons not to care or bother, Footnote #9. Years after writing that passage, along comes this to illustrate it:
Pain is a lot like this — it is warped by our expectations and point of view. Unlike a clever model, though, we can’t turn it around to see what’s really going on. And trying to see through pain’s illusions, trying to believe that there’s nothing much actually wrong with our tissues (often true), is even harder than seeing through these optical illusions.
Nevertheless, that is what therapy and rehab are all about: trying to change our expectations and point of view with interesting new sensations and movements.
Aside from the analogy to pain, these are just fantastic illusions. Thanks to Nick Ing of Massage & Fitness Magazine for pointing out the video. For more about the slippery weirdness of pain perception, see Pain is Weird: Pain science reveals a volatile, misleading sensation that is often more than just a symptom, and sometimes worse than whatever started it.
I got some interesting gripes when I posted last week that “correlation kinda does imply causation.” One Facebook commenter said he’d only ever heard that correlation doesn’t “equal” causation; another skeptic thought it was “not a very helpful explanation”; and Dr. David Colquhoun tweeted at me: “Hmm dangerous”; and of course several smartypants were quick to remind me of the many amusing examples of spurious correlations that can (and have) been mined from data.
All of this points to an inescapable conclusion: I probably screwed something up. But not this bit…
The common wording was not what I screwed up. The standard phrase does indeed employ “imply.” Although “equal” does get used occasionally, “imply” is by far the more common usage. Also, check Google’s autocomplete results for “correlation does not ____.”
I should have made this super clear on the first try, so allow me to overcompensate today:
The human knack for inferring causation is fantastically unreliable and our failures in this department are legion and disastrous. By far the most important thing anyone needs to understand about the relationship between correlation and causation is that A did not necessarily cause B just because B followed A, and making this mistake is one of the Greatest Hits of human thinking glitches.
This problem has been emphasized ad nauseam by so many smart people for so long that I personally just kind of take it for granted, and so I wrote my post last week without bothering to make it clear enough, as it probably should be every time correlation is discussed, because, as Barker Bausell put it (Snake Oil Science), we have a problem with “confusion between correlation and cause on an industrial scale.”
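It’s easy to see the scale of the problem for yourself: the more variables you examine, the more impressive-but-meaningless correlations you can mine. A quick sketch in Python, using nothing but random noise:

```python
# Mining spurious correlations from pure noise: generate many random
# series and report the strongest correlation between any pair.
# With enough variables, "impressive" correlations are guaranteed.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n_series, n_points = 100, 20
data = rng.normal(size=(n_series, n_points))  # pure noise

best_r, best_pair = 0.0, None
for i, j in itertools.combinations(range(n_series), 2):
    r = np.corrcoef(data[i], data[j])[0, 1]
    if abs(r) > abs(best_r):
        best_r, best_pair = r, (i, j)

print(f"strongest correlation among {n_series} random series: "
      f"r = {best_r:.2f} (series {best_pair[0]} and {best_pair[1]})")
# Typically a strong-looking r, and completely meaningless.
```

With a hundred random series, a strong-looking correlation between some pair is practically guaranteed, which is exactly why mined correlations prove nothing by themselves.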
It was just some intellectual musing on my part. My griping about “imply” was not original. I was paraphrasing Edward Tufte, an American statistician who made the same point quite a while ago. So I’m in good company. Tufte suggested that a good informal re-wording would be, “Correlation is not causation but it sure is a hint.” I just wanted to make that same point, and I should have cited him, but I was in a hurry (penny wise and pound stupid, because now this is all taking me three times as long as if I’d just done it right in the first place).
I was mostly keen on the curious mental phenomenon of causality inference. It’s fascinating how aggressively the human mind infers causality from adjacent events… and how often we get it right about simple things. Exactly how much we get it right depends totally on the context and domain. We get causality right constantly when the variables are simple and readily observable; we rarely get it right in health care, or any other complex endeavour, where the variable counts are high and many are subjective or otherwise murky.
I also wrote about this last week because I wanted to separate two things that are often mixed up: the inference of causality and the attribution of mechanism. General versus specific causes, basically. We can and routinely do correctly detect causes when correlation gives us a strong enough hint, but we routinely screw up exactly what caused what.
Most people will assume that when a very stubborn old pain goes away during a one-hour acupuncture session, the experience must have caused the relief, because the relief followed the experience. And that assumption is probably correct. The appearance of relief probably isn’t a coincidence, and probably not just regression to the mean (too quick).
But most people will then (carelessly or self-servingly) move on to another assumption: that the treatment caused the relief because acupuncture works as advertised. (It doesn’t.)
We can be right about the causality in a wide view — somehow or other, that appointment really did lead to feeling better, so yay — but still be hopelessly wrong about what specifically caused what. Most people will ignore the possibility that the true mechanism of relief was not the efficacy of acupuncture, but the efficacy of a caring professional promising aid and performing fascinating rituals that reek of implied potency: the power of “surely no one would do this if it didn’t work!” These factors are wildly underestimated by most acupuncture patients. And acupuncturists.
Causality inference is a potent defining feature of human intelligence. It serves us well in many situations. Our ability to suss out how things work is largely based on this “one weird trick” that our brains can do. Flick the switch, light turns on: probably causally related! Touch fire, get burned… throw rock, break window… eat too much, get sick. There are countless simple correlations like this that we master effortlessly before we can even tie our shoes. We see B follow A and we just kinda get it that A caused B, just like humans somehow understand pointing, but most dogs will just lick your finger.
But we also constantly get it wrong, unfortunately.
There’s a famous rule: “correlation does not imply causation.” Unfortunately, it’s wrong, as normally stated.
It’s missing an important word. It really should be: “correlation does not necessarily imply causation.” Because correlation actually does “imply” causation, and many (if not most) events that occur in sequence and appear to be causally related are in fact causally related. Human brains are dazzlingly good at correctly inferring causal relationships from observed correlations: clapping makes noise, braking stops cars, hot coals burn fingers. This mental superpower served us well as we grew up as a species.
The problem is that we’re so good at it, and it’s such an essential mental skill, that we tend to overdo it and perceive causation in all kinds of situations where causality detection is much harder… like evaluating the results of medical treatments.
Complex causal relationships are as tricky to infer from simple observations as simple ones are easy. And we are just pathetically bad at figuring out exactly how events are causally connected — “mechanism of action.” Because of all the unknown variables. What’s really going on in a causal relationship almost always turns out to be different and waaaaay more complicated than we thought.
Nature: defying “common sense” since the dawn of intelligence.
But humans are causality bloodhounds: we smell it everywhere, even when we don’t understand what’s really going on (which we usually don’t). For instance, if someone who’s been limping and grimacing for days walks out of a massage appointment with a grin and a light step, then, yeah, massage probably did cause that result, one way or another.
[Update: see several important clarifications in the next post.]
When movement is limited by pain for too long, could the pain become a conditioned response to the movement? Rather than an accurate indication of the tissue state? Like Pavlov’s dogs salivating in response to a bell instead of food.
This article by Ben Cormack of Cor-Kinetic explores the potential to “recalibrate” painful movement by gradually breaking the association between the movement and pain with the “5 R’s of Rehab”:
I’m fascinated by this idea and think it has a lot of value, but I also wonder if the case for a primarily conditioned painful response is a bit overstated. Is that really a thing? I don’t doubt that it is possible, but is it common? I have clinically witnessed (and personally experienced) many chronically painful movements that really did not seem like a conditioned response persisting long after the resolution of any problem in the tissue. I think a stubborn source of tissue-driven pain (nociceptive pain) is highly plausible in many cases.
Just thinking out loud here.