BMJ article on rate of discrepancies in (subsequently) retracted scientific articles.

Cole et al, from Imperial College London, published a fascinating paper in the online BMJ on 20th September 2015. The citation is BMJ 2015;351:h4708, and searching for that reference should take you to the paper on the BMJ website.

Retracted articles are papers published in peer-reviewed journals that have subsequently been “withdrawn”, but which, of course, still form part of the published literature. In the papers studied by these authors, 46% of the retracted studies were withdrawn for research misconduct, 18% for errors subsequently detected, 14% for plagiarism, and 10% for duplication. In the remaining 12%, the reason for retraction could not be discerned.

Discrepancies can arise in any published paper, and range from transcription and proof-reading errors up to much more serious problems (such as falsified data). The authors give a list of possible discrepancies, including “impossible percentages”, “factual discrepancies”, and “impossible summary statistics”.
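To make the idea of an “impossible percentage” concrete, here is a minimal sketch, in Python, of the kind of consistency check a reader could run: given a reported sample size, can any whole-number count actually produce the reported percentage? This is purely illustrative and not the authors’ own method; the function name and the example figures are hypothetical.

```python
def percentage_is_possible(reported_pct, n, decimals=1):
    """Return True if some whole-number count out of n participants,
    rounded to `decimals` places, reproduces the reported percentage."""
    target = round(reported_pct, decimals)
    return any(round(100 * k / n, decimals) == target for k in range(n + 1))

# Hypothetical examples:
print(percentage_is_possible(37.5, 17))  # False: no count out of 17 gives 37.5%
print(percentage_is_possible(35.3, 17))  # True: 6/17 = 35.294... -> 35.3%
```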

There has been debate as to how much weight these discrepancies should carry in our appraisal of a scientific paper – it is inevitable, after all, that minor arithmetic errors will occur from time to time. This paper by Cole et al offers an intriguing insight into the association between the rate of discrepancies in a paper and the likelihood that it is a retracted paper.

The full results would take too long to reproduce here, but I will give two examples. Discrepancies were found in 84% of retracted papers and in 48% of the papers selected as controls – overall, the discrepancy rate in retracted papers was 2.7-fold that of the unretracted papers. In fact, the median number of discrepancies in the retracted papers was 4 (range 2 – 8.75) compared with a median of zero (range 0 – 5) in the controls.
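It is worth being clear that these are two different quantities: the 84% versus 48% comparison counts papers with at least one discrepancy, whereas the 2.7-fold figure is a ratio of discrepancies per paper. The short sketch below, using entirely made-up counts rather than the paper’s data, shows the two calculations side by side.

```python
def proportion_with_any(counts):
    """Proportion of papers with at least one discrepancy."""
    return sum(1 for c in counts if c > 0) / len(counts)

def discrepancy_rate_ratio(retracted_counts, control_counts):
    """Ratio of mean discrepancies per paper: retracted vs control."""
    rate_retracted = sum(retracted_counts) / len(retracted_counts)
    rate_control = sum(control_counts) / len(control_counts)
    return rate_retracted / rate_control

# Entirely hypothetical discrepancy counts per paper:
retracted = [4, 2, 5, 3, 6, 4, 3, 0]
controls = [0, 1, 0, 2, 0, 3, 1, 1]

print(proportion_with_any(retracted), proportion_with_any(controls))  # 0.875 vs 0.625
print(discrepancy_rate_ratio(retracted, controls))                    # ~3.4 for these made-up counts
```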

It’s an interesting analysis, and the authors go on to perform a sensitivity and specificity analysis (a rough sketch of that kind of calculation is given below), as well as re-running their analyses with a tighter definition of a “clinical trial” (reflecting their suspicion that some papers published as trials might not fit the relevant criteria as well as the journals had advertised). They then suggest how these observations may help us to think further about the issues of quality and accountability in research. Well worth a read for anyone with an interest in this area.
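For readers unfamiliar with how sensitivity and specificity apply here, one way to frame it is to treat “at least k discrepancies” as a positive test for a paper being retracted. The sketch below is my own illustration with hypothetical data, not the authors’ code or their chosen threshold.

```python
def sensitivity_specificity(discrepancy_counts, retracted_flags, threshold):
    """Treat 'at least `threshold` discrepancies' as a positive test for retraction
    and compute sensitivity and specificity against the true retraction status."""
    tp = fp = tn = fn = 0
    for count, retracted in zip(discrepancy_counts, retracted_flags):
        positive = count >= threshold
        if positive and retracted:
            tp += 1
        elif positive and not retracted:
            fp += 1
        elif not positive and retracted:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical data: discrepancy count per paper, and whether it was retracted.
counts = [5, 4, 0, 2, 7, 1, 4, 2]
retracted = [True, True, False, False, True, False, False, True]
sens, spec = sensitivity_specificity(counts, retracted, threshold=3)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75 and 0.75 for these made-up numbers
```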