
Taking out the trash: purging predatory journals from my bibliography

Paul Ingraham

Microblog posts are archived and rarely updated. In contrast, most long-form articles on PainScience.com are updated regularly over the years.

Recently I shared a study on social media (Ghorbanpour et al). It seemed to be an unusually low-quality paper, and soon after I posted it I was informed that it had been published in a suspected “predatory journal” — a fraudulent journal that will publish anything for pay (literally anything, even gibberish). Predatory journals are scams that prey on academics under publish-or-perish pressure. I just blogged about them a few weeks ago:

The scientific literature is severely polluted with actual non-science, with an insane number of papers that were published under entirely false pretenses, the fruit of fraud.

Although I’ve been aware of this debacle for several years, I never paid close enough attention to learn how to identify predatory journals. I assumed I didn’t have to worry about it. But the journal in this case was on a list of sixty suspected predatory journals in the rehabilitation field specifically, put together by Manca et al. That’s my turf! If that list is trustworthy (which seems likely), it’s a depressing but invaluable resource for me. Since then I’ve learned about other lists (see PredatoryJournals.com and BeallsList.weebly.com).

My next job was to audit my own bibliographic database for those bogus journals. The database behind PainScience.com contains 2450 papers: how many of them are the spawn of predatory journals? How heavily have I relied on their unreliable conclusions? Not a comfortable chore, but an unavoidable one.
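The mechanics of an audit like this are simple enough to script. Here’s a rough sketch in Python of the kind of check involved, assuming the bibliography has been exported to CSV with a “journal” column and the suspect journal names live in a plain text file. The filenames, column names, and exact-match rule are illustrative assumptions, not my actual setup:

    import csv
    import re

    def normalize(name):
        """Lowercase and strip punctuation so trivial formatting differences
        in journal names don't block a match."""
        return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()

    # One suspected predatory journal per line, compiled from published lists
    # (e.g. the Manca et al list and the sites mentioned above).
    with open("suspect_journals.txt", encoding="utf-8") as f:
        suspects = {normalize(line) for line in f if line.strip()}

    # The bibliography exported to CSV with (at least) 'title' and 'journal' columns.
    flagged = []
    with open("bibliography.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if normalize(row.get("journal") or "") in suspects:
                flagged.append(row)

    print(f"{len(flagged)} citations come from suspected predatory journals:")
    for row in flagged:
        print(f"  {row.get('title', '?')}  ({row.get('journal', '?')})")

Exact name matching will miss journals that appear under slightly different names, so near-misses still need a manual double-check.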

Here’s what I found lurking in my bibliography

Obviously I was well aware that these were shabby papers, which isn’t really surprising, because these days — for many reasons — most papers are “guilty until proven innocent.” I deal with bad papers all the time. I just didn’t know that these particular papers weren’t legitimately published at all.

And how were those bad sources cited?

My real concern was that I might discover that I’d used some worthless studies to support a personal bias. Was I using non-science to make any important points?

Nope! I mostly passed this test. Here’s how they were used:

That last one is the most interesting of the batch: I actually read Ravichandran et al quite carefully just a few months ago, and wrote a thorough summary of it, in which I slammed the authors for spinning their data to make the results look more positive than they were. I think it’s “one of the few clearly negative trials” of massage for trigger points, which rubs my bias the wrong way: I want to believe that massage helps trigger points! So I’m actually smugly pleased to see this paper discredited.

So that’s a dozen citations to papers that are completely useless, but — phew — I didn’t rely on any of them heavily for anything that mattered.

The remaining papers, which I had not yet gotten around to citing, and now never will:

All of these will remain in my bibliography, but their quality will be prominently questioned, and all will have the 1-star ratings that I apply only to “bad example” papers. I’ll remove most of the citations to them, keeping only the ones that serve as examples of the lack of support for a claim.

No doubt there’s more

I did my initial search of my own database before I discovered the other lists of predatory journals, so I have more auditing to do, and I fully expect to find more of these festering pustules in my bibliography. However, based on these preliminary results, I suspect I won’t be too horrified by what I find.

And I will now be systematically checking the origins of every significant new citation. PainScience.com will never knowingly cite anything from a predatory journal ever again, except as a bad example.

Could these papers have some value?

Is it overkill to disqualify them entirely? It is theoretically possible for a good paper to end up in a predatory journal, but there’s no way for us to separate those from the rest. I think publication in a predatory journal almost completely undermines the credibility of a paper. Even in legit journals, with flawed but earnest peer review, we have an appalling problem with underpowered crappy little trials, the p-hacking epidemic, and so on. Peer review is deeply flawed, and in some journals it’s not much better than the rubber stamp at a predatory journal, but in any half-decent journal it’s a lot better than nothing.

Without peer review, I think the value of a paper and the credibility of its authors both drop to near zero. They might have good intentions, but they certainly don’t have good judgement. It casts doubt on the value of all their research, wherever and whenever it is published — it’s a serious stain on their record.
