But then I went and wrote a whole article about that particularly interesting idea: that people have often believed in “treatments” that are actually hurting them, like drinking mercury, and several other famous historical examples. If people have told a lot of success stories about dangerous treatments… well, that says a lot about just how wrong anecdotal evidence can be.
In 2015, a huge bibliography and “good footnotes” still set PainScience apart.¹ My footnotes contain either extra commentary and whimsical asides, or citations to science and other sources, like this:
Woolf CJ. Central sensitization: Implications for the diagnosis and treatment of pain. Pain. 2010 Oct;152(2 Suppl):S2–15. PubMed #20961685. PainSci #54851.

And I’ve just upgraded them.
Bibliographic data does not play nicely with modern publishing technology. There’s lots of software for wrangling references on your PC, but it’s still almost impossible to integrate them (efficiently) into blogs and websites. It still has to mostly be done “manually.”
I started investing in a good system to solve this problem in 2005, but it had to be a custom job — pure original programming. The result has been a quietly awesome and esoteric boon to my publishing business ever since. My referencing gets noticed by many visitors to the site, because there’s nothing else quite like it anywhere online. Footnotes in long-form online writing are sparse and spartan.
But it was time for some improvements. Got to stay ahead of the curve. This is about what I did, and some behind-the-scenes details.
Pulling this off took weeks of work spread out over months, but almost everything is better:
That last one was particularly challenging and interesting. Why “Vancouver”? My town?
PainScience isn’t a medical journal, and all of my readers in the early days were “just folks,” so why put on airs? I made my references simple: just the essential information, uncluttered with stuff like 82(123):987-93. (But always with a link to those details, so that a reference could be checked up on — that’s very important, even for just folks.)
It was idealistic and modern. I was declaring what I thought web referencing should be like.
Plus, it is really hard to mass produce the formatting of detailed bibliographic data. So I didn’t.
It’s hard to get taken seriously when you don’t act serious. Regular readers aren’t bothered by formality and detail, not in a footnote, but special visitors — proper experts — definitely notice if I don’t speak their referencing language. Those influential readers won’t know that I’m being clever and innovative and democratic.
“They’ll just think you don’t know how to reference properly,” my wife said about five years ago.
“And they’ll be right,” I said. “I don’t really have a clue. It looks nasty.”
“Time to learn, maybe.”
“Maybe. I should get started on the procrastinating, at least.”
And then I started getting email from people who agreed with my wife. My writing was getting noticed. My website was getting big. Getting referencing right mattered more with each passing year.
[better citations needed]
About six months ago I Googled “standard medical journal referencing format.” After all these years, I wasn’t even sure if there was such a standard. That’s when I discovered “the Vancouver system,” and it kicked off months of work. I had to re-tool the footnote factory and re-train all the bibliography gnomes. (Weirdly, I felt much more comfortable diving into this Sisyphean chore simply because the new standard was named after where I lived.)
So, why Vancouver?
In 1978, editors of medical journals from around the world met in the city of Vancouver, probably close to where I live, and thrashed out the unbelievably numerous details. It was so difficult and tedious that they named the standard after the city they were trapped in. Their work is still the standard today, and it is heavily documented.
I encourage you to click that link and scroll for a while and behold my nightmare. The Vancouver system is about as user-friendly as a swarm of cranky wasps.
All my references are generated from a bibliographic database in the fairly exotic BibTeX data format. Every footnote is lovingly crafted by software — essential for mass production. I had to reprogram that software to speak “Vancouver style.”
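For the curious, the guts of the job look something like this: a simplified Python sketch, with invented field names, not the actual PainScience code.

```python
# A simplified sketch of Vancouver-style formatting from a BibTeX-like
# record. The field names here are invented for illustration.

def vancouver(ref):
    # Vancouver style lists up to six authors, then "et al"
    authors = ref["authors"]
    names = ", ".join(authors[:6])
    if len(authors) > 6:
        names += ", et al"
    return (
        f"{names}. {ref['title']}. {ref['journal_abbrev']}. "
        f"{ref['year']} {ref['month']};{ref['volume']}({ref['issue']}):{ref['pages']}."
    )

ref = {
    "authors": ["Vibe-Fersum K", "O'Sullivan P", "Skouen JS", "Smith A", "Kvåle A"],
    "title": "Efficacy of classification-based cognitive functional therapy "
             "in patients with non-specific chronic low back pain",
    "journal_abbrev": "Eur J Pain",
    "year": 2013, "month": "Jul", "volume": 17, "issue": 6, "pages": "916–28",
}
print(vancouver(ref))
```

That’s the easy 95%. The other 5% is where the weeks went.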
In programming, an “edge case” is something that requires special handling. You might have 50 lines of code that do something with 95% of your data, and then another 50 lines of code just to cope with a few awkward exceptions in the last 5%. Bibliographic data seems like it’s all edge. It is inherently rotten with the influences of other languages, the strange ways of databases, and oddball conventions grandfathered in from Ye Olde Dayes. My own data collection methods varied over the years: what I kept and how I kept it has come back to haunt me.
I lost entire days to minutiae like accented characters, abbreviations, pagination punctuation, title casing methods, and so on. I nearly lost my mind trying to either hammer my data into the necessary shape, or write clever subroutines that would do the equivalent, or both.
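One example of that minutiae: Vancouver style compresses page ranges, repeating only the digits that change (916–28, not 916–928). Here’s a toy version of that kind of logic, not the real thing:

```python
def squish_pages(first, last):
    """Compress a page range Vancouver-style, repeating only the digits
    that differ (916-928 becomes 916-28). A toy version: real pagination
    has edge cases (supplements like S2-15, roman numerals, and so on)
    that need much more special handling."""
    first, last = str(first), str(last)
    if first == last:
        return first  # a single-page article
    if len(first) == len(last):
        i = 0
        while first[i] == last[i]:
            i += 1
        last = last[i:]  # keep only the digits that changed
    return f"{first}\u2013{last}"

print(squish_pages(916, 928))  # 916–28
print(squish_pages(98, 104))   # 98–104 (different lengths, keep it whole)
```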
And fairly often I just flat-out disagreed with the Vancouver system. It has some major flaws. My goal was to look like I know the standard — and I do now — but I’m not going to follow it off a typographic cliff.
So here’s an old citation…
Vibe-Fersum et al. Efficacy of classification-based cognitive functional therapy in patients with non-specific chronic low back pain: A randomized controlled trial. European Journal of Pain. 2013. PubMed #23208945.
There’s only one author listed, the journal name is spelled out instead of abbreviated, and there’s no volume, issue, or page data. Here’s the same paper cited Vancouver style…
Vibe-Fersum K, O'Sullivan P, Skouen JS, Smith A, Kvåle A. Efficacy of classification-based cognitive functional therapy in patients with non-specific chronic low back pain: A randomized controlled trial. Eur J Pain. 2013 Jul;17(6):916–28. PubMed #23208945.
Okay, admittedly it’s not a very exciting difference when all is said and done. It’s getting thousands of those to look right automatically that’s the impressive part.
There are about 2500 items in the bibliography these days. 400 or so got some extra care and editing during the big upgrade. The 100 best and most interesting are listed here. (I can generate lists like this very easily, one of the superpowers of the system: I just tell the database to generate a score for each record by awarding points for things like how recent it is, study quality, summary length, and keywords like “fun” and “odd” and “classic” and “good news.”)
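In spirit, the scoring is as simple as this hypothetical sketch (the field names and point values are made up for illustration):

```python
# Hypothetical "best of the bibliography" scoring. Every field name and
# weight here is invented; the idea is just points for recency, quality,
# summary length, and a few favourite keywords.

KEYWORD_POINTS = {"fun": 2, "odd": 2, "classic": 3, "good news": 2}

def score(record, current_year=2016):
    pts = max(0, 10 - (current_year - record.get("year", 0)))  # recency
    pts += record.get("quality", 0)                            # study quality, 0-5
    pts += min(len(record.get("summary", "")) // 200, 5)       # reward long summaries
    for kw, p in KEYWORD_POINTS.items():
        if kw in record.get("keywords", []):
            pts += p
    return pts

records = [
    {"year": 2013, "quality": 4, "summary": "x" * 600, "keywords": ["classic"]},
    {"year": 1997, "quality": 2, "summary": "x" * 100, "keywords": []},
]
best = sorted(records, key=score, reverse=True)  # highest-scoring first
```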
I’ve been discussing some surprising bad-news scientific evidence with my readers on Facebook and Twitter: Wiltshire et al found that post-exercise massage actually impaired muscle blood flow and lactic acid removal (not that anyone should actually want their lactic acid removed).
As usual, some folks bristle at the “negativity” of sharing bad science news about massage, and have shot back with claims like this one: “massage is more likely to improve recovery then impair it.” I actually share that faith. I believe that, one way or another, whatever it is that we love about massage probably results in a net benefit after a workout, at least a modest one. I think. I hope.
Unfortunately, the evidence doesn’t support my belief. “Aiding recovery” is the Holy Grail of sports massage, and it barely exists.
Way back in 1997, when I was just starting to study massage therapy, Tiidus proposed that facilitation of recovery by massage would require an effect on the following:
And he concluded:
Because manual massage does not appear to have a demonstrated effect on the above, its use in athletic settings for these purposes should be questioned.
There’s been a smattering of more encouraging evidence about it since then, like Franklin 2014, which fairly persuasively showed that “massage therapy restores peripheral vascular function after exertion” (seems like a good thing). The best of the encouraging studies is probably Zainuddin 2005. But it all amounts to just damning sports massage with faint praise, and there’s still plenty of bad news (like the train wreck of hype over “reducing inflammation,” see Massage does not reduce inflammation and promote mitochondria). After many years of following the research, I’ve decided that “it’s mostly a myth that delayed onset muscle soreness (DOMS) can be effectively treated by massage.” And surely that’s what “aiding recovery” is mostly about. If sports massage doesn’t clearly reduce DOMS and the associated weakness, it’s not of much use.
Given all of that, I’m not sure where my optimistic belief could possibly be coming from at this point! There are still other vaguely plausible mechanisms, other than good ol’ placebo: maybe post-exercise massage mitigates the formation of so-called “trigger points”? It’s possible, but I’m not aware of any good evidence to support that, and there are certainly reasons to doubt it.
I like massage so much that, just like some of my critics, I refuse to heed the evidence, and stubbornly seek out massage, especially when I’m stiff and sore (though not if I’m too stiff and sore). And nothing too intense, which may do more harm than good. Fortunately, I can live with my hypocrisy — it’s an over-rated sin anyway.
There’s an exasperating contradiction in modern chronic pain management. The entire point of cognitive-behavioural therapy for pain is that pain can be managed — or why bother? — and yet much of the modern literature focusses on the notion of reducing “suffering” rather than pain, based on the premise that pain is unavoidable. Dr. Lorimer Moseley:
I found this situation quite confusing — ‘pain can be modified by our beliefs and behaviours’ seems inconsistent with ‘pain cannot be relieved by modifying beliefs and behaviours’. I also think that this approach of ‘conceding that we can do nothing for pain’ seems inconsistent with what we now know about the underlying biological mechanisms of pain — that pain is fundamentally dependent on meaning.
And the meaning of pain can often be changed. Not that it’s easy! But it is possible, and that is the point of trying to help people understand that their pain doesn’t necessarily mean they are in any danger.
People assume that insurance companies are so savvy and parsimonious that they would never cover health services that weren’t effective. This premise is often used as a substitute for scientific evidence by alternative medicine advocates: surely such infamously tight-fisted corporations wouldn’t pay up if it wasn’t worthwhile, right? That’s almost better than science! Follow the money!
But insurance companies do not have secret methods of determining the efficacy of unproven treatments, out of the reach of science. The industry has a long history of insuring the treatments people want; they get sucked in by the same hype that their clients are sucked in by. Remember, insurance companies may be infamously tight-fisted, but they also have to sell insurance. They want to attract new customers, and placate existing ones. They will only drop coverage of a treatment when the absence of evidence and/or evidence of absence reaches a critical mass, or if cost ineffectiveness starts to become a glaring problem.
We may think intuitively that there is a downstream positive impact for people who use these benefits [mainly massage therapy, but also chiropractic and physiotherapy]—it makes them feel better, so arguably their usage of other benefits [health care services] should be lower than other plan members. But in our study, when we looked at those who use massage and chiropractic and compared their drug costs to others who didn’t use them, we found their drug costs are, in actual fact, higher.
This is completely at odds with what most people probably assume: an insurance company saying that they have data that strongly suggests that massage therapy and chiropractic may not be worth paying for, because those patients do not have reduced health care costs later on (not their drug costs, anyway).
And yet they’ve been paying for it anyway. Why? Probably because they’d have a major marketing and PR problem if they refused to pay for massage therapy and chiropractic! These are popular services. If only popularity actually meant something…
The point of this post is not whether or not the insurance company is correct. It is not about whether massage therapy and chiropractic are actually cost-effective in the long run. We don’t know. They don’t know. The quoted position of an insurance company is not good evidence of that one way or the other. It’s worth noting, but it could easily be quite misleading… and, in fact, I suspect it is!
The point of this post is that an insurance company was having some surprising self-doubt about the value of paying out this particular benefit. People like to assume insurance companies “know” because they have a finely tuned sense of value. The only evidence I’ve presented here is evidence that they do not know. They pay for services mainly because people want them. And, contrary to what most people would expect, here’s a company that actually fears that perhaps they should not be paying for massage and chiropractic... but is still doing it anyway, at least for now. I think that’s inherently interesting.
But it doesn’t tell us whether or not massage therapy and chiropractic patients actually do end up spending more on drugs in the long run.
I’ve been learning a lot more about pain-killing drugs lately, but mostly the kind you can get without a prescription. The opioids are another whole world of risks and benefits. I stumbled on this good summary of how pain-killers can backfire and cause pain instead of treating it, which seems like a particularly important thing to understand:
Yes, the drugs used to treat pain can also cause pain. If you have been using opioid drugs for years and the pain keeps getting worse and worse, this vicious pain cycle could be a result of opioid-induced hyperalgesia. Because the opioids turn your natural pain relieving system off, your body is left without enough chemicals in the system as the drug wears off every four to six hours. This cycle causes a frequent roller coaster of up’s and down’s that sensitizes the nervous system to the point that you feel more pain. Not only do you feel more pain, you feel anxious, restless and have trouble sleeping. If this sounds familiar, then it is time to find an exit strategy off the opioid roller-coaster that you are on.
For four more pain-causing drugs, see “Top 5 Drugs That Can Cause Pain,” by Christina Lasich, MD.
So apparently this bit of conventional wisdom about weight training is bogus: “Eating protein soon after a workout will help build muscle.”
For years I believed it in that way that we believe things that matter to us…but don’t matter quite enough to have ever done a myth check. Better late than never!
And, yes, the headline (“Workout nutrition is a scam”) is a bit hyperbolic and clickbaity. The actual article is very specific — protein timing for bodybuilding — and it’s well written.
This article about placebo is about how placebo allegedly works even when people know that it’s a placebo — placebo without deception. This is based on a well-known 2010 study of placebo for irritable bowel syndrome. The article is about that researcher’s plans to study open-label placebos for cancer next. And that’s really all the article has to offer: it’s basically a press release for the research sequel to the original placebo-hyping blockbuster. There is no new research, yet. It was just an advertisement for the idea.
I don’t like the idea.
Ironically, Kaptchuk’s 2010 evidence of placebo without deception was itself deceptive (or at least disingenuous), because patients were still given a clear reason to have faith in what they were given. In this way, the 2010 study was well-crafted to produce a placebo effect, just by raising expectations with a different kind of deception. And Kaptchuk will do the same with the next study, with completely predictable results. And it will be a big deal, mark my words… because cancer. And because everyone loves the “power” of placebo!
I explain the problem with the 2010 study in more detail in my placebo article.
“Efficacy” is how well a treatment works in ideal circumstances, such as in a carefully contrived scientific test. Unfortunately, real life is rarely ideal! (You may have noticed.) “Effectiveness” is how well the same thing works in typical clinical settings and patients’ lives. Which is what matters to most patients.
A classic example of efficacy versus effectiveness is strong calorie restriction for losing weight: anyone who substantially restricts calorie intake over long periods will lose weight, but it’s a serious error to assume that it’s just a matter of discipline. Low calorie diets are so difficult to sustain for so many reasons, many of them out of our control, that most people will gain back any weight they do manage to lose. They are efficacious, but also notoriously ineffective.
A recent editorial in the British Journal of Sports Medicine explains that exercise (in a physical therapy context) is well-known to be efficacious, but may not be effective. That is, it works well when tested in the lab, but not so much for real patients. Again, effectiveness is what matters to patients! If effectiveness is low, only a few lucky and/or disciplined people can realistically expect to benefit.
For example, I recently shared a perfect example of this problem: Ylinen et al found fairly good evidence that strengthening is probably efficacious for many people with chronic neck pain… but maybe only if they stick to it for as long as a year. That takes a lot of discipline, and many people will fail at it. (And some will fail even if they stick to it, because not all cases respond to strengthening.) So even if it does work in a scientific test, can we say “it works”? It’s ambiguous.
I think there’s a large gray zone here: some exercise therapies are much more difficult and impractical than others. When they are truly efficacious, they are an opportunity for determined patients with frustrating chronic pain: an intervention that might really work, but only if you’re up to a substantial challenge! Many people aren’t… but you can be sure some will be. And for those patients only, “it works.”
See Kenny Venere’s more detailed discussion of effectiveness and efficacy.
I love it when The Onion does health stuff: “Area Man's Knee Making Weird Sound.”
COLORADO SPRINGS, CO—Noting that it began happening just a few days ago, local 31-year-old Anthony Forster told reporters Monday that his left knee has been making a really strange sound lately. “It’s like a little clicking noise—can you hear it?” said Forster, as he repeatedly flexed his knee back and forth in an effort to demonstrate the unusual sound. “You have to get really close and listen for it. It usually happens when I bend my knee all the way back and—there, did you hear that? It was doing it worse before, but you can still hear it.” At press time, sources confirmed a small blood clot just above Forster’s knee had broken loose and was traveling through his bloodstream to his brain, where it is expected to cause a massive stroke, killing him instantly.
Nice touch with the hypochondriac flourish at the end.