The Credibility Crisis In Scientific Publishing

A Texas A&M public health researcher examines the quality assurance methods used by journals in the field of addiction research.
By Rae Lynn Mitchell, Texas A&M University School of Public Health, July 29, 2019

In recent years, a growing number of published studies in fields such as psychology, epidemiology and clinical research have been found to have credibility problems. One manifestation of this is the publication of results that cannot be reproduced by independent researchers.

Studies in many disciplines have investigated the sources of this credibility crisis and ways to combat it, finding that selective reporting of results and flexible data analysis are often to blame. It is especially concerning when the results of such flexible analyses are written up for publication in a manner that suggests they emerged from prespecified, confirmatory hypothesis testing. To date, there has been little focus on credibility issues in the addiction research field.

In a new study published in the journal Addiction, Dennis Gorman, PhD, professor in the Department of Epidemiology and Biostatistics at the Texas A&M School of Public Health, examined how many of the top journals in addiction research have adopted various measures to ensure that the results of published articles are credible. Gorman analyzed 38 high-impact addiction journals, using information from the Center for Open Science and journal instructions to authors to determine the extent to which each journal had adopted six common methods for ensuring article quality.

Credibility problems seem to stem from three main practices: using data analysis methods that are more likely to yield positive results, producing study hypotheses after the results are already known, and selectively reporting only positive results. In many cases these practices can be attributed to the pressure researchers face to publish positive results, which many journals favor, and to the highly competitive nature of academic research. Conflicts of interest, both financial and non-financial, can also play a role. Requiring statements that disclose conflicts of interest is one method for improving research credibility.

The other procedures include the use of specific guidelines for reporting results; registration of clinical trials, systematic reviews and other study designs; open sharing of data; and peer-reviewed registered reports that evaluate study methods before data are collected and analyzed.

Gorman found that all but one of the 38 journals used at least one of these methods. The most common was conflict of interest disclosure, which all but one journal required. Nearly half of the journals required pre-registration of clinical trials, but only four required registration of other study types. Such pre-registration can be useful because it demonstrates that a hypothesis was formed before results were obtained and that measures and statistical analyses were prespecified in detail before data were collected. Of the 38 journals, 28 had data sharing policies, although these only encouraged data sharing rather than making it a requirement of publication. Additionally, 13 of the journals recommended the use of reporting guidelines when writing up studies for publication.

Each of these publication procedures can improve research quality, but Gorman noted that enforcement can be challenging. Some have been implemented only as suggestions rather than requirements, and others can be minimally adhered to with no adverse consequences for publication (for example, incomplete or vague information about study design and methods can be entered into trial registries). Regardless, knowing about conflicts of interest and study designs can help ensure study quality and integrity, as can wider adoption of requirements that hypotheses be documented before data are collected.

The study had some limitations: the journal instructions to authors were reviewed by a single individual, the list of journals came from a single source, and the study examined a single point in time. Further studies are needed to more fully explore how different journals use these publication procedures and how their adoption changes over time. Additionally, Gorman recommends that addiction-related journals require conflict of interest disclosures that include non-financial conflicts, make greater use of data sharing, ensure that hypotheses are publicly prespecified before data are collected, and require that results be written up following standard reporting guidelines.

Gorman also argues that journals should clearly differentiate exploratory research from confirmatory research. Making clear that the results of a confirmatory study come from a carefully planned, prespecified analysis, and not from selective reporting of flexible data analysis, is crucial to maintaining research credibility and integrity. Registered reports and detailed preregistration are the publication procedures that “lock-in hypotheses and data analyses and allow differentiation of prespecified confirmatory analyses from other types of data analyses.”

This article by Rae Lynn Mitchell originally appeared in Vital Record.
