Two sections of a January 31 meeting of the President’s Council of Advisors on Science and Technology (PCAST) addressed the symptoms and causes of irreproducibility of scientific data. William Press, PCAST Member and Professor of Computer Science and Integrative Biology at the University of Texas at Austin, opened the discussion by noting that if scientific experiments are not reproducible, “it strikes at the very heart of what science does.” In discussing the recent trend of “irreproducible science,” panelists pointed to publication bias, scant recognition of the value of negative research studies, the lack of control groups, and inappropriate statistical analysis of data.
Glenn Begley, Chief Scientific Officer and Senior Vice President for Research and Development at TetraLogic Pharmaceuticals, spoke about how careers are built on scientists’ publication records and how “positive results are rewarded” in top-tier journals, which “want simple compelling stories.” Because researcher promotion and grant funding are based on the number of publications, he argued, “the greatest likelihood of change is going to come from the journal and granting agencies.”
Donald Berry, Biostatistics Professor at the University of Texas MD Anderson Cancer Center, spoke about regression effects and about how scientists’ expectations, shaped by previous observations and results, can color what they observe in an experiment. Addressing bias and the statistical reproducibility of results, he advocated that researchers and reviewers adhere to a standardized protocol, and he described instances where seemingly predictable scientific results may not be reproducible.
Daniel MacArthur, Assistant Professor at Harvard Medical School and Massachusetts General Hospital and Associate Member of the Broad Institute, noted the need for robust, real-time post-publication discussion of controversial results. He recommended that workshops on scientific standards develop consensus protocols and that software users understand where systematic errors could affect experimental outcomes. He also called for raw data, along with the software and code used to produce it, to be made available so that researchers can conduct reproducibility studies and determine the validity of experimental findings.
Discussion following this panel of researchers touched on the generalization of scientific results, software development, the need for statistical training, and internal review to address systematic data issues.
PCAST also heard from journal editors on the issue of improving scientific reproducibility. Marcia McNutt, Editor in Chief of Science, spoke about the role of universities, funding agencies, and journals in addressing the replication of scientific data. She noted that information is sometimes withheld in journal publications and described how scientific bias can affect research results; Science has taken steps to address this by adding members with statistical backgrounds to its Board of Reviewing Editors. Universities, she stated, should “reward researchers who produce reproducible results and withhold rewards from researchers who produce non-reproducible research,” and funding agencies should adopt reproducibility criteria and preferentially support research that is reproducible. Lastly, she suggested that scientific societies consider honoring those who produce reproducible research, devote sessions at national meetings to best practices in reproducibility, and adopt reproducibility guidelines for society publications.
Philip Campbell, Editor in Chief of Nature, spoke about the incentive structure and culture of science as he noted the growth in formal corrections of scientific research. Improving access to supplemental information for scientific studies and removing length limits on online methods sections are among the ways Nature is addressing reproducibility concerns and raising awareness of these topics.
Members of PCAST discussed whether workshops on reproducibility could include more examples of papers. Technical corrections and moderated informal comments were debated as possible ways to address problems in published papers, and researchers’ responsibility within the scientific community and to the general public was also discussed.