PainScience.com Sensible advice for aches, pains & injuries
 
 
The PainScience Bibliography contains plain-language summaries of thousands of scientific papers and other sources, like a specialized blog. This page is about a single scientific paper in the bibliography, Power 2011.

Exposing the evidence gap for complementary and alternative medicine to be integrated into science-based medicine

Tags: mind, scientific medicine, controversy, debunkery

PainSci summary of Power 2011: This page is one of thousands in the PainScience.com bibliography. It is not a general article: it is focused on a single scientific paper, and it may provide only just enough context for the summary to make sense. Links to other papers and more general information are provided at the bottom of the page whenever possible.

Rating: ★★★☆☆ (3-star ratings are for typical studies with no more, or less, than the usual common problems. Ratings are a highly subjective opinion, and subject to revision at any time. If you think this paper has been incorrectly rated, please let me know.)

This paper is particularly interesting for its explanation of the “frustrebo” effect: “Negative true placebo effects (‘frustrebo effects’) in the comparison group, and cognitive measurement biases in the comparison group and the experimental group make the non-specific effect look like a benefit for the intervention group.” (A particularly excellent example of the frustrebo effect can be seen in Cherkin et al.)

original abstract: Abstracts here may not perfectly match the originals, for a variety of technical and practical reasons. Some abstracts are truncated for my purposes here, if they are particularly long-winded and unhelpful. I occasionally add clarifying notes, and I make some minor corrections.

When people who advocate integrating conventional science-based medicine with complementary and alternative medicine (CAM) are confronted with the lack of evidence to support CAM they counter by calling for more research, diverting attention to the 'package of care' and its non-specific effects, and recommending unblinded 'pragmatic trials'. We explain why these responses cannot close the evidence gap, and focus on the risk of biased results from open (unblinded) pragmatic trials. These are clinical trials which compare a treatment with 'usual care' or no additional care. Their risk of bias has been overlooked because the components of outcome measurements have not been taken into account. The components of an outcome measure are the specific effect of the intervention and non-specific effects such as true placebo effects, cognitive measurement biases, and other effects (which tend to cancel out when similar groups are compared). Negative true placebo effects ('frustrebo effects') in the comparison group, and cognitive measurement biases in the comparison group and the experimental group make the non-specific effect look like a benefit for the intervention group. However, the clinical importance of these effects is often dismissed or ignored without justification. The bottom line is that, for results from open pragmatic trials to be trusted, research is required to measure the clinical importance of true placebo effects, cognitive bias effects, and specific effects of treatments.
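The abstract's decomposition of an outcome measure (specific effect + true placebo + cognitive measurement bias + noise) can be sketched as a toy simulation. This is my illustration only: the effect sizes below are arbitrary assumptions, not figures from Power 2011, but the arithmetic shows how a frustrebo effect in the comparison group and symmetric reporting bias can make a treatment with zero specific effect look beneficial in an open pragmatic trial.

```python
import random

random.seed(0)

# Illustrative (assumed) effect sizes, in arbitrary outcome units:
N = 1000
SPECIFIC_EFFECT = 0.0   # the treatment itself does nothing
PLACEBO = 1.0           # true placebo effect of receiving the intervention
FRUSTREBO = -1.0        # negative placebo: disappointment at getting only "usual care"
REPORTING_BIAS = 0.5    # unblinded patients shade their self-reports

def outcome(specific, nonspecific, bias):
    """One patient's measured outcome: real effects plus bias plus noise."""
    noise = random.gauss(0, 1)
    return specific + nonspecific + bias + noise

# Intervention group: no specific effect, but placebo plus favorable reporting.
treated = [outcome(SPECIFIC_EFFECT, PLACEBO, REPORTING_BIAS) for _ in range(N)]
# Comparison group: frustrebo effect plus unfavorable reporting.
controls = [outcome(0.0, FRUSTREBO, -REPORTING_BIAS) for _ in range(N)]

apparent_benefit = sum(treated) / N - sum(controls) / N
print(f"apparent benefit of a useless treatment: {apparent_benefit:.2f}")
```

The group difference here is entirely non-specific: it is placebo minus frustrebo plus twice the reporting bias, which is exactly the kind of gap the authors argue gets mistaken for a benefit of the intervention in unblinded trials.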

related content

These two articles on PainScience.com cite Power 2011 as a source:

