Academia's Incentive Problem

And ideas for how to fix it

I came across this tweet last week:

It got me thinking about some of the other problems I've heard about in the scientific community. I’ve written previously that I think science is the engine of human advancement. However, the recent Alzheimer's fraud and the ongoing replication crisis highlight some serious flaws in academia's incentive structures. If these problems aren’t addressed, we risk destroying the peer-review system that has guided scientific research for the past half century.

But first, some background:

What's happening in Alzheimer's research?

Last month, Science published a whistleblower-prompted investigation into potentially doctored images in key pieces of Alzheimer's research dating back to 2006. Basically, for the past two decades, Alzheimer's disease (AD) research has focused on beta-amyloid as the cause of the disease. Much of that focus traces back to a 2006 study published in Nature, which many in the research community considered the "smoking gun" for the amyloid hypothesis. The study has been cited over 2,000 times and is the fifth most cited paper among all AD research reports since it was published.

For the past 16 years, a huge chunk of AD research time and money has been directed, in part, by this report. The NIH spends $1.6B per year on amyloid-related projects, yet there have been zero effective therapies. In light of the fraud allegations, some researchers are calling the amyloid hypothesis the "scientific equivalent of the Ptolemaic model of the Solar System."

So yeah, it’s pretty bad. What about the replication crisis?

The replication crisis

Reading about the AD fraud reminded me of something I'd heard on a podcast a few months back. The replication crisis is an ongoing issue in science in which a large share of published studies cannot be replicated: either researchers can't rerun the experiment at all (the methods, data, or funding aren't available), or they can rerun it but fail to get the same results.

Naively, with 0.05 as the generally accepted p-value threshold, you might expect only around 5% of published findings to be false positives, and replication to succeed most of the time. However, one of the most extensive replication projects could only reproduce significant results in about half of the studies tested.

That doesn't mean that half of all studies are wrong, but it should raise concerns about how we verify what we consider to be true and how we should use existing studies to guide future research.
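To build intuition for why the replication rate can fall so far below the naive expectation, here's a back-of-the-envelope model. All of the numbers below (the fraction of tested hypotheses that are true, and the typical statistical power of a study) are illustrative assumptions, not estimates from any real dataset:

```python
# Back-of-the-envelope model of expected replication rates.
# All numbers are illustrative assumptions, not real estimates.

prior = 0.30   # assumed fraction of tested hypotheses that are actually true
power = 0.50   # assumed statistical power of a typical study (and its replication)
alpha = 0.05   # conventional significance threshold

# Probability that an original study comes out "significant" (i.e., publishable)
p_sig = prior * power + (1 - prior) * alpha

# Among significant results, the fraction that reflect true effects
p_true_given_sig = (prior * power) / p_sig

# A replication of a true effect succeeds with probability ~power;
# a replication of a false positive "succeeds" with probability ~alpha.
replication_rate = p_true_given_sig * power + (1 - p_true_given_sig) * alpha

print(f"P(true | significant): {p_true_given_sig:.2f}")      # ~0.81
print(f"Expected replication rate: {replication_rate:.2f}")  # ~0.41
```

Under these made-up but plausible numbers, a field with zero fraud still replicates only about 40% of its significant findings, purely because of base rates and underpowered studies. Low replication rates are a warning sign, not a verdict.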

Publish or perish

This is probably something people in academia are more familiar with, but I'll try my best. The Wikipedia page for publish or perish describes it as "the pressure to publish academic work in order to succeed in an academic career."

I think it's easiest to think of it this way: your clout is directly correlated with the quantity of novel, important research you publish. No one gets famous for publishing null results or replicating studies. Journals are also biased against negative results, so even if you are interested in replicating a study, your findings might never be accepted.

The result is a serious aversion to conducting some of the work necessary to maintain an effective peer-review system. A 2016 Nature survey found that most researchers haven't even attempted to publish a replication study. The side effect of all this is a culture that discourages proving existing research wrong.

Over time, the most successful people will be those who can best exploit the system 

— Paul Smaldino, in an interview with Vox

This brings up important questions regarding how science as an institution functions. In academia, the peer-review system is crucial to producing research that has undergone scrutiny and verification. But how well is the peer-review system functioning if we let doctored images guide billions of dollars in funding and years of research? How can we say key findings are valid if no one can replicate them?

Academia's incentive structures have skewed behavior toward publishing scientifically dubious studies and away from rigor and verification. If this behavior were rare, it might not be an issue: journals can retract studies, and sloppy researchers lose goodwill. But when it is pervasive, it can lead to the "canonization of false facts" and systemic gaps in our knowledge. These are the greatest threats to scientific advancement because they are unknown unknowns that can derail decades of effort.

What do I mean by that? If we know our knowledge in an area is flawed, we can take steps to not only fill gaps in understanding but stop that faulty knowledge from being used elsewhere. The knowledge that you are wrong is valuable knowledge in and of itself. However, when you don't know you are wrong, not only do you fail to correct your understanding, but unsound theses are applied to future work, compounding the effects.

Even worse is when faulty knowledge becomes codified, when we no longer even think to question an idea and it becomes embedded in our thinking. The Ptolemaic model mentioned earlier is an example of this: a geocentric model of the universe was so widely accepted that scholars who spoke out against it were ostracized.

A growing field of science is metascience, the study of the scientific process. Metascience seeks to use the scientific method to improve how we conduct research. I think this is great. If science is the engine of human advancement, we shouldn't just run the engine harder and faster; we should also look for ways to make the engine more efficient.

How can we solve these problems?

We should always be wary of widely accepted ideas, especially those influential in the meta. What science needs is 1) institutions to fund and support self-scrutiny and metascience and 2) a cultural shift toward becoming more accepting of proving each other wrong.

Some initiatives already exist. PubPeer is an online community that allows researchers to review and give feedback on research after publication. In fact, PubPeer users were questioning the images in the 2006 Nature study long before Science began its investigation.

However, I think it would be cool to take this a step further. Here are some ideas I've been thinking about:

1. Grant-making institutions

Establish grants specifically for replication research. Some programs exist, but funding needs to expand dramatically. We should also establish programs targeting highly cited and significant works for replication to minimize the risk of systemic knowledge gaps.

The second part is what's really missing from academia today. The supply of researchers eager and willing to devote their time to replication studies is already small. Grants should be distributed "effective altruism" style and prioritize research that (a rough sketch of this scoring follows the list):

  • Has the highest probability of being disproved

  • Would have the greatest effect on future research if disproved
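
As a toy illustration of that prioritization, here's a hypothetical scoring sketch. Everything in it (the names, the probabilities, the use of citation counts as a proxy for influence) is invented for illustration:

```python
# Hypothetical sketch of ranking replication targets "EA style".
# All names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    p_disproved: float  # estimated probability the finding fails replication
    citations: int      # crude proxy for influence on future research

def priority(c: Candidate) -> float:
    # Expected impact of funding a replication: the chance of overturning
    # the result, weighted by how much downstream work depends on it.
    return c.p_disproved * c.citations

candidates = [
    Candidate("Landmark result", p_disproved=0.4, citations=2000),
    Candidate("Mid-tier follow-up", p_disproved=0.6, citations=150),
    Candidate("Obscure thesis", p_disproved=0.9, citations=3),
]

for c in sorted(candidates, key=priority, reverse=True):
    print(f"{priority(c):7.1f}  {c.title}")
```

Note how the landmark result ranks first even though it's the least likely to be wrong: influence dominates the expected value of checking.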

There is a movement known as slow science that emphasizes a slower, more methodical approach to research. It seems like a way to minimize the social downside risk of research (e.g., preventing dangerous drugs from reaching the market) by vetting research before publication.

I'm not sure I agree with this overarching principle (I think the best innovation comes from rapid iteration), but a thorough post-publication scrutiny process can benefit science. We should still move fast, break things, and maybe even cut a few corners to produce life-saving therapies faster. But fundamental and early-stage research needs to be properly vetted at the conceptual level.

Rather than improving the quality of research at the top of the funnel, I think it would be better to keep the funnel wide while creating mechanisms to verify the critical pieces that come out of the funnel. The nice thing about replication is that its value is asymmetric: the cost of replicating a study doesn't depend on how influential it is, but it's far more valuable to confirm the findings of a landmark report than some random undergrad's thesis that no one will ever read. That way, we can keep the pace of research high while minimizing the risk that faulty knowledge slips through and compounds.

Potential names for said institutions: Take Two Foundation, Go Again Grants, Remixed Research.

2. Built-in funding for replication studies

Create a new type of grant for novel research that sets aside money for future replication work if the study reaches a certain level of influence (perhaps measured in number of citations). That way, new research will automatically have funding for others who want to replicate it.
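
To make the mechanism concrete, here's a minimal sketch of how such a grant might work, assuming citations as the influence metric. The threshold, set-aside fraction, and amounts are all invented:

```python
# Minimal sketch of a grant with a built-in replication escrow.
# The threshold, fraction, and amounts are invented for illustration.

from dataclasses import dataclass

@dataclass
class Grant:
    total: float               # total award for the novel research
    set_aside_fraction: float  # share reserved for future replication work
    citation_threshold: int    # influence level that unlocks the escrow

    def replication_funds(self, citations: int) -> float:
        """Escrowed money released once the study proves influential."""
        if citations >= self.citation_threshold:
            return self.total * self.set_aside_fraction
        return 0.0

grant = Grant(total=1_000_000, set_aside_fraction=0.10, citation_threshold=500)
print(grant.replication_funds(citations=120))    # 0.0 (not influential yet)
print(grant.replication_funds(citations=2_300))  # 100000.0 (escrow unlocked)
```

The point is that replication money is committed up front, so no one has to fight for new funding once a result becomes load-bearing.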

This would also help with a conceptual shift in how we view research. The peer-review process shouldn’t stop at publication or discussion. Replication should become part of the research process, rather than an optional step.

3. Bounty programs

Offer monetary and status rewards for research teams that disprove highly cited research. In cybersecurity, bug bounty programs help organizations identify vulnerabilities in code. A similar program for academia would probably be more effective because academic bounty hunters aren't faced with the dilemma between exploiting a bug for profit and claiming the bounty. Maybe award a Nobel prize to a bounty hunter every once in a while.

Bounty programs that confer status could also help change the stigma against proving your colleagues wrong. As someone who hasn't been in academia, I can't say much about changing the internal culture. But I'd like to think that people with intellectual integrity are the ones you'd most want to work with. People like Matthew Schrag, the scientist who blew the whistle on the fraudulent Alzheimer's research, should be heroes in the scientific community, not pariahs.