How to Reward Scientists for Doing Research that Matters

On Jan. 23, the blustery Monday following the presidential inauguration, a group of 21 scientific thought leaders met for a workshop in Washington, D.C. They were there to discuss a topic they all cared deeply about: how to improve research quality worldwide by changing how scientists are evaluated and rewarded.

Around the table were key influencers from across the research ecosystem: journal editors, research funders, physician-scientists, and academic leaders. The task of the day was to identify the fundamental problems with the current “rewards and incentives” system for scientists, then propose fixes. The meeting was led by Steven Goodman, MD, MHS, PhD, a co-director of the Meta-Research Innovation Center (METRICS) at Stanford; Frank Miedema, the dean of the University Medical Center Utrecht in the Netherlands; and David Moher, senior scientist at the Ottawa Hospital Research Institute.

Scientists are not numbers

A central focus of the discussion was the overuse of performance metrics, specifically the statistical analysis of publications used to evaluate scientists’ work. It was felt that the quality of research suffers when scientists are evaluated too heavily on the impact factors of the journals in which their studies are published, and on the number of times other scientists cite their articles in their own papers.

This problem was best summed up by John Ioannidis, METRICS co-director, who wrote, “Emphasis on publication can lead to least publishable units, authorship inflation, and potentially irreproducible results.”

Overreliance on bibliometrics also dampens innovation. It gives an unfair advantage to late-stage researchers with large citation war chests, enabling them to consistently win research grants over young scientists armed with fresh ideas and proficiency in new methodologies. And it becomes a crutch for busy promotion and tenure committees, who might inappropriately use these metrics as tiebreakers.

Miedema spoke about how his institution recently started moving away from this numbers game.

“Now when we review candidates for promotions, we ask for a short essay about who they are, asking them to elaborate on their achievements in five domains — academic responsibilities, such as journal reviews and committee work; time with students; clinical work; and community outreach,” said Miedema.

He also said that though this transition took “commitment and patience,” now the leaders at his center are embracing the new evaluation system, and they’ll be tracking its efficacy over time.

Rewarding scientists for sharing and risk taking

Many meeting participants discussed how important it is for researchers to share their initial research protocols and final data, and to publish both positive and negative results. It was felt that this reduces the waste associated with funding redundant studies, and it has the potential to accelerate the overall rate of scientific progress.

But this comes at a cost to scientists in the current evaluation system. Scientists don’t get as much credit for publishing in open journals or in the “grey literature,” the clinical bulletins and policy documents that directly improve lives. And publishing negative findings, studies where a scientist’s starting hypothesis was wrong, is viewed by many as a career killer.

Marcia McNutt, president of the National Academy of Sciences, shared an idea that helped one of her research institutions foster a culture of openness and risk taking: “When I was director of the Monterey Bay Aquarium Research Institute, at the start of every board meeting my directors would ask me about the most spectacular failure we had had since the last meeting. If I didn't have a good story to tell them, they would be very disappointed. They would tell me that we weren't taking enough risks.”

Then Ulrich Dirnagl, director of the Department of Experimental Neurology at Charité – Universitätsmedizin Berlin, brought up another journal-driven reason why more scientists don’t share data. He called it “the paradox of nefarious battles.”

“It’s good for scientists to share knowledge prior to publication, but they don’t do it because journals won’t publish pre-released data. If you have a more open science system, you will foster collaboration rather than competition.”

Giving more credit to collaborators

As the world of science becomes more complex, more studies require large teams of specialists. Yet in the world of academic publishing, the first and last authors on a journal article receive the lion’s share of credit. The names in the middle of the author list, the collaborators, get very little recognition, even if their contributions are large and essential.

McNutt, previously editor-in-chief of the journal Science, discussed one change that would reward all collaborators and encourage more team efforts.

“At an upcoming retreat of editors-in-chief from prominent journals, we will be discussing ways to tackle this problem. One of the proposals on the table is to make sure that every contributor in a study receives the equivalent of movie credits at the end of each article, so that everyone would know what they all did,” said McNutt. “That clearly assigns credit — and blame — where it’s due.”

And Paula Stephan, professor of economics at Georgia State University, suggested that the scientific community develop more high-visibility awards for group efforts.

A call for change

As the workshop wound down, Paul Wouters, director of the Centre for Science and Technology Studies at Leiden University, proposed that institutions look at building system-level infrastructure and data collection that enable more responsible research behavior.

“Even devoting a small part of science funding to this would contribute to a positive feedback loop, so that you could then use those results to fuel cultural change in the system,” said Wouters.

Wouters lists more infrastructure ideas in an article he co-authored, “Bibliometrics: The Leiden Manifesto for research metrics.”

In closing, Chonnettia Jones, head of Insights and Analysis at the Wellcome Trust, reinforced the need for data to back up the ideas: “As a large research funder, we would be more attracted to stronger evidence in terms of better research practice. I think that would be one way to be able to get funders on board with these suggestions.”

This meeting was a first step in identifying rewards and incentives for researchers that can help to improve scientific research quality. METRICS is further vetting participant suggestions to prioritize proposed solutions and begin creating pilots to test potential interventions. For more information contact Debbie Drake Dunne, METRICS executive director, at debbiedd@stanford.edu.

Articles referenced:

Assessing Value in Biomedical Research: The PQRST of Appraisal and Reward
John P. A. Ioannidis, Muin J. Khoury
JAMA. 2014;312(5):483-484. doi: 10.1001/jama.2014.6932
http://jamanetwork.com/journals/jama/fullarticle/1881107

Fewer numbers, better science
Rinze Benedictus, Frank Miedema, Mark W. J. Ferguson
Nature. 2016 Oct 27;538(7626):453-455. doi: 10.1038/538453a.
http://www.nature.com/news/fewer-numbers-better-science-1.20858

Bibliometrics: The Leiden Manifesto for research metrics
Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke, Ismael Rafols
Nature. 2015 Apr 23;520(7548):429-431. doi: 10.1038/520429a.
http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351