
Science funding is a gamble so let’s give out money by lottery


by Shahar Avin

A colourful Wheel of Fortune with numbered segments around the word “SPIN” at the centre, illuminated against a dark background.

Jay Galvin/Flickr

Perhaps your life, like that of many of my friends and relatives, has been improved by propranolol – a beta-blocker that reduces the effects of stress hormones, and that’s used to treat conditions such as high blood pressure, chest pain, an uneven heartbeat and migraines. It’s considered one of the most important pharmaceutical breakthroughs of the 20th century.

Thank goodness, then, that the United States in the 1940s didn’t have the same attitude to science funding that it does today. If it had, you could expect to see seven experts sitting around a table, trying to assign a score to an unorthodox grant proposal to study the function of adrenaline in the body. ‘If I have properly understood the author’s intent, then this mechanism has already been settled, surely,’ a senior physician might say. A lone physiologist mounts a defence, but the pharmacologists in the room are dismissive, one of whom remarks that the mathematics ‘look cumbrous and inconvenient’. So the pathbreaking research of the late Raymond Ahlquist, a professor at the Medical College of Georgia who laid the foundations for the discovery of propranolol, could easily end up with low marks, and his theories would never see the light of day.

Science is expensive, and since we can’t fund every scientist, we need some way of deciding whose research deserves a chance. So, how do we pick? At the moment, expert reviewers spend a lot of time allocating grant money by trying to identify the best work. But the truth is that they’re not very good at it, and that the process is a huge waste of time. It would be better to do away with the search for excellence, and to fund science by lottery.

Superficially, the grant-giving process seems rational. Following an application deadline, academics assess and rank the proposals they’ve received. For example, members of a molecular biology review panel might find themselves weighing up a proposal to investigate a new biochemical pathway that’s potentially relevant to Alzheimer’s disease against a request to screen large protein datasets that could give rise to new treatments for diabetes. Each reviewer gives each proposal a score, and the scores are averaged across reviewers. Grants are awarded from the highest average mark downwards, stopping at the point at which the money runs out.
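
To make the mechanics concrete, here is a minimal sketch of that ranking procedure in Python. The function name, the data structures and the costs are all mine for illustration; real panels layer weightings, quotas and tie-breaking rules on top of this.

```python
import statistics

def fund_by_ranking(costs, scores, budget):
    """Average each proposal's reviewer scores, rank proposals from
    highest to lowest, and fund down the list until the money runs out.
    `costs` maps a proposal id to its requested amount; `scores` maps
    the same id to that proposal's list of reviewer scores."""
    ranked = sorted(costs, key=lambda p: statistics.mean(scores[p]), reverse=True)
    funded, remaining = [], budget
    for p in ranked:
        if costs[p] > remaining:  # the cut-off is financial, not qualitative
            break
        funded.append(p)
        remaining -= costs[p]
    return funded
```

Notice that the stopping rule says nothing about quality: a proposal one place below the cut-off may be indistinguishable from one just above it.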

One big problem with this approach is that the monetary cut-off point still tends to be way above the quality cut-off point. Even though money for research has been generally increasing, the number of researchers is growing even faster. As a consequence, success rates for applicants have been falling, and adventurous proposals rarely get funded. A review panel in the 1970s might have been able to fund 40 per cent of applications, which meant it could support all of the excellent, solid proposals and still take a few risky bets. Today, a review panel can often fund 20 per cent or less of proposals submitted, leaving little chance for the likes of Ahlquist to secure funding.

Peer review adds another layer of irrationality. Sir Mark Walport, the UK government’s chief scientific adviser and the former director of the Wellcome Trust, the UK’s largest philanthropic funder, has labelled peer review a folie à deux because it relies on the researcher and the reviewer sharing a delusional belief in their capacity to make accurate predictions.

The applicant, for her part, is forced to commit to a plan of action and a set of objectives or ‘deliverables’, most of which are probably quite hazy at the outset. Research, after all, is about finding out what you don’t know, so it’s a pretty messy and unscriptable process. The systems biologist Uri Alon, in a TED talk, has likened science to improvisational theatre. You might think you’re going from A to B, but halfway there you get lost, stumble around, completely forget what you’re even doing there – yet, if you manage to hold on for a while, you might find C, which is valuable in its own right. But if you promised your funder to go from A to B, then finding C becomes much harder, and you aren’t likely to find B anyway.

Reviewers suffer from their own version of precision-madness. When ranking proposals, panellists are making conjectures: which of these projects, given enough time, will contribute most to society? But the path from initial funding to wider social impact is poorly understood, and can take 30 to 50 years to unfold. It’s ludicrous to think that you can specify, down to multiple places after the decimal point, the ideas that are most likely to succeed. This obsession with ranking means that we also demand excessive amounts of information from applicants, and waste a colossal amount of their time. In Australia, during a recent annual funding round for medical research, scientists spent the equivalent of 400 years writing applications that were eventually rejected.

Finally, ‘expert reviewers’ are not fungible commodities. One reviewer is not the same as another, and their judgements tend to be highly personal. Of the nearly 3,000 medical research proposals submitted for public funding in Australia in 2009, about half would have received the opposite decision if the review panel had been different, according to one notable study. And the process isn’t just noisy – it’s also systematically biased. There’s evidence that women and minorities have lower chances of securing grants than people who are male or white, respectively.

Fortunately, there’s a simple solution to many of these problems. We should tell the experts to stop trying to pick the best research. Instead, they should focus on filtering out the worst ideas, and admit the rest to a lottery. That way, we can make do with shorter proposals, because the decision to accept or reject a ticket to a random draw requires less information – and highly specific proposals are unrealistic anyway. So instead of asking reviewers to make unreasonable predictions, they can turn their minds to weeding out cranks and frauds. Bias will still occur in the filtering stage, of course, but many more proposals will make it through to a lottery, which is inherently unbiased. The New Zealand Health Research Council is experimenting with such a programme, although with funding extended only to about four researchers per year, its sample size is too small to convince larger funders.
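
A sketch shows how little machinery the alternative needs. Everything here is hypothetical, not a description of the New Zealand scheme: the panel’s only output is a yes/no screening judgement, and chance does the rest.

```python
import random

def fund_by_lottery(costs, passes_screening, budget, seed=None):
    """Screen out the clearly unsound proposals, then draw the rest at
    random, funding each winner until the budget is exhausted. No scoring
    or ranking happens after the initial filter."""
    rng = random.Random(seed)
    pool = [p for p in costs if passes_screening[p]]
    rng.shuffle(pool)
    funded, remaining = [], budget
    for p in pool:
        if costs[p] > remaining:
            break
        funded.append(p)
        remaining -= costs[p]
    return funded
```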

A lottery might sound like an extreme, baby-and-bathwater kind of solution. Not all scientific enquiry takes decades to play out, and sometimes there’s genuine agreement that a certain strand of research is important and timely. So perhaps we could keep a small proportion of the grant money for ideas where there’s a consensus among the expert panellists. Then we pluck out the bad ones and throw everything else into a pot. The trick with this triage would be to keep the bulk of the funds for the higher-risk, randomly selected proposals. My own view isn’t settled – I’ve run computer simulations for both scenarios, and while each one comes out looking better than the current system, the comparison between them is inconclusive. Other experts who study science funding, and accept the need for a lottery, still disagree about the best model (appropriately enough). More experiments are needed.
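
For illustration only (this is not a reconstruction of my simulations), a two-tier triage of that kind might look something like the following, with the size of the consensus slice left as a free parameter.

```python
import random

def fund_hybrid(costs, consensus, passes_screening, budget,
                consensus_share=0.2, seed=None):
    """Two-tier triage: a small slice of the budget goes to proposals the
    panel unanimously backs; the bulk is raffled among everything else
    that survives the screening filter. All names and the default 20 per
    cent split are illustrative."""
    rng = random.Random(seed)
    funded, slice_left = set(), consensus_share * budget
    # First tier: fund the consensus picks out of the small slice.
    for p in costs:
        if consensus[p] and costs[p] <= slice_left:
            funded.add(p)
            slice_left -= costs[p]
    # Second tier: everything screened-in but not yet funded goes into
    # the draw, along with whatever is left of the consensus slice.
    pool = [p for p in costs if p not in funded and passes_screening[p]]
    rng.shuffle(pool)
    draw_budget = (1 - consensus_share) * budget + slice_left
    for p in pool:
        if costs[p] > draw_budget:
            break
        funded.add(p)
        draw_budget -= costs[p]
    return funded
```

Keeping `consensus_share` small preserves the point of the exercise: most of the money still rides on the random draw.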

The late Sir James Black, the Nobel prizewinning inventor of propranolol, said that the peer review system was the enemy of scientific creativity, and that his own work would have been impossible without Ahlquist’s theory. Scientific thinking can often lead to progress, but the institutions of science can also create a major regress. Let’s face it: getting a grant is a lottery anyway. We should at least make it official, so the whole process can be cheaper, fairer and more efficient.