Does imposing the death penalty lower rates of violent crime? What economic policies will lead to broad prosperity? Which medical treatments should we allow and encourage to treat novel diseases? These questions have a few things in common. They bear important consequences for us all, and so policymakers and the public would like to know the answers – if good answers even exist. Fortunately, there are entire communities of experts who produce closely regulated scientific literatures dedicated to answering them. Unfortunately, they are also difficult questions, which require causal knowledge that’s not easy to come by.
The rise of social media means that experts willing to share their hard-won knowledge have never been more accessible to the public. So, one might think that communication between experts and decision-makers should be as good as, or better than, ever. But this is not the case. As anyone who has spent time on Twitter or watching cable news can attest, these outlets are also flooded with self-appointed ‘experts’ whose lack of actual expertise doesn’t stop them from sharing their views widely.
There is nothing new about ersatz experts, or even outright charlatans, and they aren’t limited to questions of policy. In every domain where decision-makers need the specialised knowledge of experts, those who don’t have the relevant knowledge – whether they realise it or not – will compete with actual experts for money and attention. Pundits want airtime, scholars want to draw attention to their work, and consultants want future business. Often these experts are rightly confident in their claims; in the private market for expertise, though, unwarranted confidence can be even more common. Daryl Morey, the general manager of the Houston Rockets basketball team, described his time as a consultant as largely about trying to feign complete certainty about uncertain things: a kind of theatre of expertise. In The Undoing Project (2016) by Michael Lewis, Morey elaborates by describing a job interview with the management consultancy McKinsey, where he was chided for admitting uncertainty. ‘I said it was because I wasn’t certain. And they said, “We’re billing clients 500 grand a year, so you have to be sure of what you are saying.”’
With genuine expertise at a premium, the presence of experts who overstate their conclusions adds noise to the information environment, making it harder for decision-makers to know what to do. The challenge is to filter the signal from the noise.
When it comes to important questions in challenging domains such as economic forecasting and public health crises, experts often don’t have the answers. Less often do they admit it.
Must we accept that any expert assessment could be hot air or, at best, a competent expert stretching beyond his or her competence? Or can we do better?
To better understand the problem of communicating scientific knowledge to the policymakers and the public, it helps to divide the difficulty of questions into three levels. Level-one questions are those that anyone with even modest expertise or access to a search engine can answer. Some political economy questions in this category include, for example, ‘Will price controls cause shortages?’ or ‘Are incumbent governments likely to do better in elections when the economy is performing well?’
Level-two questions are those where only the most qualified experts have something to say. Some political and economic questions that we believe fall into this category are ‘Can we design algorithms to assign medical residents to programmes in an effective way?’ (yes) and ‘Do term limits improve governing performance?’ (no). These are questions for which substantial peer-reviewed scientific literature provides answers, and they can be addressed by what the American philosopher Thomas Kuhn in 1962 called ‘normal science’: that is, within existing paradigms of scholarly knowledge.
Level-three questions are those where even the best experts don’t know the answers, such as whether the death penalty lowers violent crime, or what interest rates will be in two years. Such questions are either not answerable given current research paradigms, or more fundamentally unanswerable. Much of the scientific enterprise itself consists of working out whether further research or information will make a question answerable. Importantly, for the purposes of policymaking, it doesn’t necessarily matter why we can’t know the answer. So, for communicating about science with the public, the distinction that matters most is between level two (questions that genuine experts can answer) and level three (questions that, at least at present, no one can).
If you’re unsure whether our classification of these questions into level two vs level three is correct, in a sense that’s the point. Knowing which questions fall into which category requires expertise. (Which, to be clear, we ourselves lack for some of the questions referenced, but we consulted recent reviews of the literature from top experts.) In fact, the experts themselves might sometimes get this distinction wrong. Sovietologists thought that ‘Is the USSR a stable country with minimal risk of collapsing?’ was a well-answered question (incorrectly believing ‘yes’), and many experts thought that there was no way such a divisive political outsider as Donald Trump could win the Republican Party nomination, let alone the presidency (incorrectly believing ‘no’).
Still, experts are certainly more likely than the relevant decision-maker to know which questions are answerable. Politicians and executives might be experts in the domain of decision-making, but they are rarely experts in the domains in which they make decisions.
We need not concern ourselves much with the level-one questions. Of course, some people might be too lazy to Google, and can mouth off about easily settled science. We don’t mean to dismiss the potential danger of experts (or politicians) themselves making obviously false claims, but this danger shouldn’t pose a consistent problem to a decision-maker with an honest interest in the truth.
Things become tricky when distinguishing between the second and third levels, the questions that can be answered now and those that cannot. The key difference between these kinds of questions is ‘Would a competent expert well-versed in the relevant scientific literature be reasonably confident in the answer?’ Note that the question is about both the competence of the expert and the answerability of the question.
This means that, when making decisions that require expert perspective, it might be a mark of a true expert to admit that he or she doesn’t know, at least not yet. And, if we aren’t sure which questions are answerable, we are vulnerable to uninformed experts convincing us they have the answers. Even worse, good experts, when posed an unanswerable question, might do the same. From the expert’s perspective, admitting uncertainty can harm his or her reputation, because bad experts are more likely to be uninformed than good ones. More concretely, saying ‘I don’t know’ makes for bad punditry, and unenviable terrain for ambitious analysts or consultants hoping to justify their hourly rate.
When experts and pundits can’t or won’t say ‘I don’t know’, the consequences can be dire. In the short term, bad advice leads to bad decisions. In the context of admitting uncertainty about challenging questions, there are two ways this can happen. These are particularly clear and salient in the context of the COVID-19 pandemic.
First, when faced with a level-two problem, the advice of qualified experts can get lost in the noise, or decision-makers might just ask the wrong experts. Among a general audience, it took a long time for many to get the message that increased handwashing would save lives, that social distancing was necessary, and that large gatherings should be cancelled.
Second, and probably more common for major policy choices, decision-makers can be persuaded to take risky actions, or find justifications for actions they would take no matter what, based on false confidence projected by experts. This can happen even when other, more qualified experts are giving different advice. For example, a pair of articles in mid- to late-March 2020 by the American legal scholar Richard Epstein that downplayed the threat from COVID-19 were said to have been influential among some in the Trump administration. To put it gently, Epstein’s arguments were not sound.
Bad policy can also come about when the research frontier doesn’t offer definitive answers, or gives the wrong answers. The history of medicine is littered with examples of treatments that we now know did more harm than good, such as bloodletting, tobacco and opium. All of these had ‘evidence’ that they were in fact healthy, from vague theories about ‘humours’ (bloodletting) to real evidence of pain relief that was never weighed against the side effects (opium). Many lives would have been saved if doctors had realised that they didn’t know whether these treatments worked well enough to outweigh their side effects, and had been able to admit it.
While it’s hard to turn away from the short-term costs in a time of crisis, there are important long-term consequences if we fail to properly consider and address uncertainty. Grappling with uncertainty is central to the scientific enterprise, and there ought to be a place for acknowledging that. If we don’t know which important questions are unanswered, we won’t know where to best direct our efforts. False confidence that we already understand important questions will delay the discovery of actual improvements. As the American theoretical physicist Richard Feynman put it in a lecture in 1963:
It is in the admission of ignorance and the admission of uncertainty that there is a hope for the continuous motion of human beings in some direction that doesn’t get confined, permanently blocked, as it has so many times before in various periods in the history of man.
Too much trust in the false or exaggerated precision of experts discourages investing resources in methodical attempts to tackle hard questions. On the other side, there is at least as much risk in discounting expert advice altogether.
Understanding the market for experts, and when experts are more or less willing to admit uncertainty, is a challenge worthy of our time. To get a grasp on these questions, we developed a simple mathematical model (published in the American Political Science Review) to study when experts are willing to admit uncertainty. As models do, it makes some simplifying assumptions and abstracts away from important features of reality. For example, a simulation of disease contagion by the graphics reporter Harry Stevens in The Washington Post on 14 March 2020 treated people as balls floating around, bouncing off each other, and quarantine as a physical wall separating them. On the plus side, this graphic simplicity clearly illustrated how quarantines and other policies such as social distancing can ‘flatten the curve’ of infections. Of course, people’s lives are more complex than bouncing balls, but so many people found this model insightful that it became the most-viewed article in the history of the newspaper’s website.
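To give a flavour of how such a stripped-down simulation works, here is a minimal toy sketch in the same spirit (our own illustration, not Stevens’s actual code, and every parameter below is an arbitrary assumption): agents wander around a square, infection spreads on close contact, and ‘staying home’ is modelled by freezing a fraction of agents in place.

```python
import random

def simulate(n=150, steps=300, stay_home_frac=0.0, infect_radius=0.03,
             recover_after=60, seed=1):
    """Toy contagion model: agents wander in the unit square, an infected
    agent passes the disease to susceptible agents within infect_radius,
    and a fraction of agents 'stays home' (never moves). Returns the peak
    number of simultaneously infected agents."""
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    mobile = [rng.random() >= stay_home_frac for _ in range(n)]
    state = ['S'] * n            # S = susceptible, I = infected, R = recovered
    infected_at = [0] * n
    state[0] = 'I'               # one initial case
    peak = 1

    for t in range(1, steps + 1):
        # Mobile agents take a small random step, clipped to the square.
        for i in range(n):
            if mobile[i]:
                pos[i][0] = min(1.0, max(0.0, pos[i][0] + rng.uniform(-0.02, 0.02)))
                pos[i][1] = min(1.0, max(0.0, pos[i][1] + rng.uniform(-0.02, 0.02)))
        # Transmission: infected agents infect nearby susceptible agents.
        for i in range(n):
            if state[i] != 'I':
                continue
            for j in range(n):
                if state[j] == 'S':
                    dx = pos[i][0] - pos[j][0]
                    dy = pos[i][1] - pos[j][1]
                    if dx * dx + dy * dy < infect_radius ** 2:
                        state[j] = 'I'
                        infected_at[j] = t
        # Recovery after a fixed number of steps.
        for i in range(n):
            if state[i] == 'I' and t - infected_at[i] > recover_after:
                state[i] = 'R'
        peak = max(peak, state.count('I'))
    return peak

if __name__ == '__main__':
    for frac in (0.0, 0.5, 0.8):
        print(f'stay-home fraction {frac:.0%}: peak simultaneous infections {simulate(stay_home_frac=frac)}')
```

Increasing the stay-home fraction should generally lower the peak, which is the ‘flattening’ that the original graphic illustrated.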
Models are meant to simplify (and we simplify further here; see the paper for more detail). In our model of the market for experts, there is a single expert (‘he’) and a single decision-maker (‘she’). To build on our opening example, suppose the policy in question is whether or not to allow the use of a new drug to treat a disease. We consider a ‘one-shot’ interaction where the decision-maker is faced with this choice, and asks a medical expert for advice. In the terminology introduced above, the question is either a level-two or a level-three question, but the decision-maker is not sure which. That is, there might or might not be solid evidence about whether the drug will work.
The expert might be competent, in which case he will know what the evidence collected so far shows: either that it indicates whether the drug is safe and effective (a level-two question), or that there isn’t good-enough evidence to say confidently (a level-three question). If the expert is incompetent, or the question is outside the domain of his expertise, he won’t have useful advice to give, regardless of what the medical literature says. (Of course, ‘incompetent expert’ might seem like an oxymoron; think of this as a generally qualified person asked a question outside the domain of his real expertise.)
If the drug is effective, the policymaker will want to allow use, and if the drug is not effective she will want to ban use. If the evidence is not yet strong in either direction, let’s suppose for simplicity that the optimal choice for the policymaker is to pick an ‘intermediate’ policy, perhaps allowing use for severe cases or allowing limited pilot studies. A key (and realistic) implication is that the policymaker will be able to make the best possible choice given the available evidence only if the expert communicates honestly, including in the cases where he is uncertain. Of course, the policymaker wants to know whether the drug is effective. And when the evidence is weak, she also wants to know this, as taking either of the more decisive actions is worse than the hedging choice. Optimal decision-making requires experts to admit it when they don’t know the answers.
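To make this concrete, here is a minimal sketch with made-up numbers (our own illustration, not the payoffs of the published model): the policymaker gets a high payoff for matching the decisive policy to strong evidence, and a middling but safe payoff from the intermediate policy.

```python
# Illustrative payoffs for the policymaker (our own made-up numbers, not the
# published model's). The 'state' is what the evidence actually supports.
PAYOFF = {
    'allow':        {'works': 1.0, 'fails': 0.0, 'unclear': 0.5},
    'ban':          {'works': 0.0, 'fails': 1.0, 'unclear': 0.5},
    'intermediate': {'works': 0.7, 'fails': 0.7, 'unclear': 0.7},
}

def best_action(belief):
    """Choose the action maximising expected payoff, given a dict of
    probabilities over the states 'works', 'fails' and 'unclear'."""
    def expected(action):
        return sum(p * PAYOFF[action][state] for state, p in belief.items())
    return max(PAYOFF, key=expected)

# With honest reporting, the policymaker learns the state and best-responds:
for state in ('works', 'fails', 'unclear'):
    action = best_action({state: 1.0})
    print(f'evidence {state:>8}: choose {action} (payoff {PAYOFF[action][state]})')

# If the evidence is actually unclear but the expert confidently guesses
# 'works', the policymaker allows the drug and her payoff drops from 0.7
# (the hedging choice) to 0.5.
print('payoff after a confident guess on unclear evidence:', PAYOFF['allow']['unclear'])
```

Under these made-up numbers, the policymaker does best when the expert reports the state honestly, including the ‘unclear’ state; a confident guess on weak evidence costs her the gap between the hedging payoff and a coin flip.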
But will the expert ever say ‘I don’t know’? If he cares only about good policy being made, yes. However, the way in which experts care about their reputation for competence can create problems. Even though some good experts will be uncertain about the truth when faced with a level-three problem, incompetent experts will be uncertain no matter what. Policymakers usually can’t tell how hard the problem is. Since competence and knowledge are correlated, admitting uncertainty will make an expert look less competent, even if he is qualified but facing a question genuinely at or beyond the borders of the field’s knowledge. Experts who care about their reputation more than the truth therefore have an incentive to ‘guess’ whether the drug is safe rather than admit they don’t know.
When the expert is uninformed, the policymaker can end up allowing or banning a drug when the opposite choice would be better. Also, since she knows the expert will sometimes guess, she can never be absolutely certain whether the drug is safe.
What can we do about this problem? A seemingly obvious solution would be to check whether expert claims are correct. If experts who make strong claims that are then refuted are chastised and not hired in the future, this might deter them from overclaiming. Alas, the punishment for guessing wrong never seems to be that high: the architects of the Iraq War and those responsible for the decisions that led to the 2008 financial crisis are generally doing quite well, professionally. And our model gives a theoretical justification for why. Just as some experts who don’t have a good answer to a question really are competent but face an unanswerable question, those who guess and end up being incorrect might be competent too. In fact, if all uninformed experts guess at the truth, then guessing and being wrong is no worse than admitting uncertainty outright. So the uninformed might as well roll the dice and guess: if they are right, they look competent; if not, they look no worse than if they had been honest about their uncertainty.
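A back-of-the-envelope calculation shows why. The numbers below are our own illustrative assumptions (a 50 per cent prior that the expert is competent, a 50 per cent chance the question is answerable, and uninformed experts whose guesses are right half the time), not parameters from the published model; the point is only that, once every uninformed expert guesses, a wrong guess leaves an expert’s reputation exactly where an honest ‘I don’t know’ would.

```python
# Reputation = the policymaker's posterior belief that the expert is competent.
# Illustrative numbers only, not parameters from the published model.
p_competent = 0.5     # prior probability the expert is competent
p_answerable = 0.5    # probability the question is level two (answerable)
p_lucky = 0.5         # chance an uninformed expert's guess turns out right

# The expert is informed only if he is competent AND the question is answerable.
p_informed = p_competent * p_answerable
p_uninformed = 1 - p_informed
p_competent_uninformed = p_competent * (1 - p_answerable)  # competent, hard question

# Admitting uncertainty reveals only that the expert is uninformed.
rep_admit = p_competent_uninformed / p_uninformed

# If every uninformed expert guesses instead:
# a wrong answer can only come from an uninformed expert, so the posterior
# after a wrong guess equals the posterior after an honest 'I don't know'...
rep_wrong = (p_competent_uninformed * (1 - p_lucky)) / (p_uninformed * (1 - p_lucky))
# ...while a right answer pools the lucky guessers with the genuinely informed.
rep_right = (p_informed + p_competent_uninformed * p_lucky) / (p_informed + p_uninformed * p_lucky)

print(f"reputation after 'I don't know':   {rep_admit:.2f}")
print(f"reputation after a wrong guess:    {rep_wrong:.2f}")
print(f"reputation after a correct guess:  {rep_right:.2f}")
print(f"expected reputation from guessing: {p_lucky * rep_right + (1 - p_lucky) * rep_wrong:.2f}")
```

With these numbers, an honest ‘I don’t know’ and a wrong guess both leave the expert at a posterior of one-third, while a lucky guess lifts him to 0.6, so an uninformed expert who cares only about reputation has nothing to lose by guessing.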
This state of affairs is particularly frustrating for the competent experts faced with a tough question: precisely because they are competent, they know that they are faced with a tough question, and are no more likely to guess at the correct policy than a charlatan. Where they always know more than the charlatan is in recognising that they are faced with an unanswerable question. True expertise requires knowing the limits of one’s knowledge.
Recognising where real experts have a definitive advantage also suggests how institutions can be designed to encourage them to admit uncertainty: rather than validating whether their predictions are correct, the key is to verify whether the question was answerable in the first place. This way, good experts will be willing to say: ‘I don’t know because this is an unanswerable question,’ confident that the latter part will be validated. And once the competent experts are saying ‘I don’t know,’ incompetent experts might do so as well, if guessing that the problem is impossible is more likely to work out for them than guessing any particular solution.
The core lesson of the model is that, while the fact-checking of experts is useful for some purposes, it isn’t effective for getting experts to admit uncertainty. On the other hand, ‘difficulty validation’, or finding a way to check whether the question was answerable in the first place, can motivate good experts to say ‘I don’t know’ – and sometimes bad experts, too.
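Continuing the same illustrative numbers (again our own, not the published model’s), suppose the policymaker eventually learns whether the question was answerable at all. Then ‘I don’t know’ on a question verified to be unanswerable carries no reputational penalty, while ‘I don’t know’ on a question that turns out to be answerable is exposed.

```python
# Difficulty validation, with the same illustrative numbers as before. Suppose
# the policymaker eventually learns whether the question was answerable, and
# that uninformed experts (competent or not) now admit uncertainty.
p_competent = 0.5
p_answerable = 0.5

# 'I don't know' on a question verified to be unanswerable: every competent
# expert facing such a question says this, and so does every incompetent
# expert who happened to draw one, so the posterior returns to the prior.
rep_idk_hard = (p_competent * (1 - p_answerable)) / (1 - p_answerable)

# 'I don't know' on a question verified to be answerable: only an incompetent
# expert ends up here, since a competent one would have known the answer.
rep_idk_easy = 0.0

print(f"'I don't know' on a verified-hard question: {rep_idk_hard:.2f}")  # equals the prior, 0.50
print(f"'I don't know' on a verified-easy question: {rep_idk_easy:.2f}")
```

In this illustration, a competent expert facing a genuinely hard question can admit uncertainty at no reputational cost, while an incompetent expert who bluffs that a question is unanswerable risks being caught, which is the sense in which difficulty validation rewards honesty.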
Moreover, we think that there are real-world institutions that are already taking this approach.
For example, scholarly publishing relies on peer review, where other experts read drafts of papers and provide critical input about whether the findings are credible and interesting enough for publication. Importantly, peer reviewers are not typically checking whether the claims in a paper are correct, but whether the authors have come up with a method to render the paper’s question answerable.
Some practical ideas for how to improve expert communication in other settings follow.
First, it is not only useful to ask different experts, but to ask different experts different things (and little differences in the questions asked can make big differences in the answer). Rather than asking experts: ‘Will the drug work?’, ask some of them: ‘Is there good evidence about whether the drug will work?’ Qualified experts won’t always know the answer to the first question, but they will always know the answer to the second.
Second, don’t just broadcast the most extreme and confident views. The most confident out there might be the most informed, or the most susceptible to the Dunning-Kruger effect: not knowledgeable enough to realise they shouldn’t be confident. In our experience, when grappling with really important and challenging questions, the latter might be more common.
Third, listen to conversations among experts. Since experts know that hammed-up claims won’t fly with their peers, they might be more honest about their level of confidence in this context than when speaking on TV. This might be the real benefit of social media: conversations among experts no longer just happen at lab meetings or conferences. They often occur in the open where anyone can hear them.
In one of our favourite bits on the Jimmy Kimmel TV show – ‘Lie Witness News’ – interviewers troll the streets of New York asking impossible questions such as ‘Is it time to bring US troops home from Wakanda?’ Interviewees inevitably rise to the challenge, answering confidently and in (imaginary) detail. Our work suggests that, in the presence of reputational incentives, the market for expert advice might not be much better and that, still worse, even real experts might have an incentive to bluff when posed with unanswerable questions.
So, how do we foster trust and integrity in discourse on science? A small but real part of the problem is that reputational incentives to appear qualified and knowledgeable drive experts to overstate their certainty. One way to counter this tendency is to ask better questions, and that usually means questions about the nature of the evidence and what it allows. We can also change the way that we relate to experts, not just listening to the loudest and most confident voices, but to those with a track record of only claiming as far as the evidence will take them, and a willingness to say ‘I don’t know.’