Aeon

Are doctors replaceable?

Medical error kills hundreds of thousands yearly. If AI is sophisticated enough to help, doctors must not stand in the way

by Charlotte Blease 


If planes fell from the sky with the regularity of deaths due to medical error, there would be outrage, inquiries and sweeping reform. When doctors make mistakes, however, the narrative is gentler: they are only human. To some extent, that response is justified. But it is also the problem. What is striking is not only the scale of this tragedy but our indifference to it.

Patients are the visible victims of medicine’s hidden ailments, but doctors are its second casualties. Behind the white coats, many physicians are exhausted, depressed and burning out. Around half of doctors in the United States report burnout. In the United Kingdom, 40 per cent say they struggle to provide adequate care at least once a week, and a third feel unable to cope with their workload.

Meanwhile, patient demand is surging. Populations are growing, ageing, and living longer with chronic illnesses like cancer, diabetes and dementia. By 2030, the world will face an estimated shortage of around 10 million health workers. In parts of Europe, millions already lack a general practitioner (primary care physician). Shortages and stress form the perfect conditions for error. Burnout and fatigue are linked to mistakes in diagnosis, treatment and prescribing.

However, even in the most resourced health systems, staffed by the most dedicated clinicians, these problems will not entirely go away. Exhaustion and overwork exacerbate mistakes, but the deeper truth is that human beings are limited creatures. We forget, misjudge, and grow overconfident; our moods, biases and blind spots shape what we see and what we judge to be the case. Burnout makes these weaknesses worse, but it does not create them. They are baked into the very psychology that once served us well in small ancestral groups, yet falters in the high-stakes, information-saturated, multitasking environment that is modern medicine. In other words, even at their best, doctors are human – and that means errors are inevitable.

My family has always been on intimate terms with medical error. My brother lived with myotonic dystrophy for two decades before anyone gave it a name. My twin sister, by luck, was diagnosed sooner by a visiting locum. Before that, her doctors handed her a grab bag of incorrect labels: depressed, tired like everyone else, or simply suffering from ‘wear and tear’. She was offered anything but the truth – not even candour from the physicians who didn’t know. Luck, in medicine, can also be oddly cruel. My late partner’s stomach cancer was discovered only after years of missed signals about his congenital heart condition. By the time doctors recognised the heart problem, the cancer had already taken root.

For me, these are not abstract stories about system failure – they are family history. But they are also part of a wider, more astonishing reality: medical error is among the leading causes of death worldwide. In the US, it is estimated that around 800,000 people die or become permanently disabled each year from diagnostic error alone.

At this point, many argue that the solution lies with technology. If errors are inevitable in human hands, perhaps machines can steady them, or even replace them altogether. Enter Dr Bot. Depending on who you ask, the machine is either a saviour or a saboteur. Most commonly, the vision is one of man and machine working side by side: the algorithm whispering in the doctor’s ear, the human hand guiding the treatment. A doctorly duet, not a duel.

Doctors are enmeshed in the very system under scrutiny. Of course they want to believe they’re irreplaceable

If the purpose of medicine is patient care, then the real question is not who holds the stethoscope, but who – or what – can best deliver safe, reliable and equitable outcomes.

But you will not find in this essay a roll call of AI’s latest feats or a tally of its diagnostic wins and losses. Instead, I want to examine a prior assumption: that doctors themselves must be the arbiters of whether technology can replace them, or even that doctors should be central to the conversation at all. In the spirit of philosophical enquiry, a question as big as who or what should deliver patient care demands parity and fairness. We rightly scrutinise Big Tech and suspect its motives and methods, but medicine is no less conflicted. To presume that doctors should arbitrate their own indispensability is to let the most interested party preside as judge and jury.

In this essay, then, I turn the lens not on AI itself, but on the presupposition that physicians ought to be the ones deciding whether Dr Bot can – or should – take their place. The assumption is so common that it moves camouflaged through our conversations about the future of clinical care.

Doctors are enmeshed in the very system under scrutiny. Their status, salaries and sense of self are bound up in the debate. Of course they want to believe they’re irreplaceable. But history shows that those most invested in their own survival are rarely the best judges of their own irreplaceability. If we are to think clearly about whether Dr Bot could replace, or even work alongside, human doctors, we must step outside the consulting room and confront the question on its own terms – with as few allegiances as possible.

An outsider’s vantage point, in other words, is essential. Independent observers can notice what insiders either miss or quietly refuse to acknowledge. That means drawing on a range of perspectives: philosophy, sociology, psychology, and patients themselves – all of whom may be better placed to ask what medicine is for, how well it works, and who or what can serve its purpose best.

In that spirit, I want to open a conversation about the past and future of medicine.

Take prognosis and treatment, where a psychological spotlight on doctors’ performance is revealing. Doctors must act with conviction; hesitation can cost lives. Here, unlike philosophers who can hedge with ‘on the one hand’ and ‘on the other’, clinicians are forced into swift, high-stakes decisions. Confidence, even overconfidence, is baked into the role. The problem is that confidence is no guarantee of accuracy. One study of intensive-care patients found that doctors who were ‘completely certain’ of their diagnosis were wrong as much as 40 per cent of the time. Worse still, as experience and familiarity with making clinical judgments increase, doctors tend to consult colleagues less and seek fewer second opinions. Authority can curdle into overconfidence – which can seem like arrogance.

Patients, paradoxically, collude in this. We prefer doctors who project confidence, even if misplaced. The white coat is still treated as a symbol of authority; we are reassured by decisiveness, even when it is wrong. But conviction, as decades of studies show, is an unreliable proxy for accuracy. As one pathologist put it, physicians are ‘walking around in a fog of misplaced optimism’.

For centuries, we have lived under the myth of the irreplaceable doctor. Physicians are not only healers but cultural icons: secular priests of the body, guardians of mortality, and interpreters of suffering. We look to them not simply for treatment but for reassurance, ritual, even a touch of transcendence. Yet this mythology clouds judgment. When we insist that ‘only a human’ can offer care, what we often mean is that we cannot imagine a different arrangement. But history is full of occupations and roles once thought untouchable – clerics, navigators, even bank tellers – that were eventually dethroned in different domains and guises.

Surgeons once opposed anaesthesia because they feared it would erode their hard-won skill in operating quickly

Then there is symptom denial – not among patients, but in the profession itself. The French Enlightenment philosopher Voltaire satirised blind optimism through his character Dr Pangloss, tutor to the young Candide, who insisted that ‘all is for the best’ in the ‘best of all possible worlds’, no matter how dire reality appeared. Medicine has often adopted its own Panglossian stance, downplaying or deflecting recognition of its failures. Diagnostic error, for example, was largely ignored for most of medical history. When the Institute of Medicine published its landmark report To Err Is Human (1999), the index of the 270-page document included only two references to diagnostic error. And when patient-safety researchers such as David Newman-Toker and Peter Pronovost began highlighting misdiagnosis as a crisis in the 2000s, they encountered institutional indifference and professional resistance.

Doctors’ instinct to minimise their errors is understandable but revealing. Studies show that, when confronted with data on mistakes, physicians are more likely than patients to dismiss the numbers as exaggerated, or to suggest that errors happen to ‘other doctors’. Surgeons, for example, consistently underestimate their own complication rates. What looks like denial is, in fact, a protective shield for professional identity – and perhaps for the capacity to keep practising at all; too much humility might be crushing. Doctors often tell me their mistakes haunt them. The harder truth is that many errors pass unnoticed, and unacknowledged.

History shows that this defensiveness also extends to innovation. Medicine has repeatedly resisted insights that challenge existing theories and practices. Anaesthesia, antiseptics, vaccines – even handwashing was initially met with disdain. Surgeons once opposed anaesthesia because they feared it would erode their hard-won skill in operating quickly – even though patients writhed in agony. As the historian David Wootton wrote in the book Bad Medicine: Doctors Doing Harm Since Hippocrates (2007), medicine’s reluctance to engage with novel advances has often slowed progress. More recently, doctors resisted basic digital tools such as online portals for patients to access their own records. As late as 2021, most US providers were still using fax machines to share clinical information. In the UK, the National Health Service still spends millions each year on stamps and paper.

Conservatism in medicine is not always a vice. The philosopher Thomas Kuhn, in his book The Structure of Scientific Revolutions (1962), argued that scientific communities need to defend their paradigms until the evidence for change is overwhelming. Otherwise, every fad would destabilise the field. But medicine’s caution often goes beyond prudence. Change is resisted not only because of the burden of work – a challenge that it would be unfair not to fully acknowledge – but also because it is easier and sometimes protects professional interests.

Consider how fiercely medicine guards its monopoly. In the US, nurse practitioners and physician associates are capable of handling up to 90 per cent of what primary care doctors currently do. Studies show that patients cared for by nurse practitioners often report equal or greater satisfaction. Yet physician groups consistently lobby to limit their autonomy. The American Medical Association spends tens of millions each year to preserve physicians’ dominance, outspending many Silicon Valley giants. In the UK, the British Medical Association continues to campaign against expanding the role of physician associates, warning that they threaten the ‘unique role’ of the doctor. Meanwhile, millions of patients go without timely care of any kind.

This guild mentality also shows up in attitudes toward transparency. Patients now have legal rights to access their medical records, but when authorities in the US and the UK tried to make online access routine, professional bodies resisted. Doctors warned of patient anxiety, confusion or wasted appointments – objections that largely failed to materialise. What online access did reveal, however, was something more awkward: one in five patients reported finding errors in their records, some of them serious. No wonder access was so fiercely opposed.

Of course, none of this means doctors are villains. Even the pushback over records is understandable, coming from doctors worried about being inundated with enquiries and extra work. It is worth remembering that the vast majority are dedicated, brilliant and deeply humane. But it is also crucial to challenge the profession’s self-interested practices.

Part of medicine’s influence rests on what the scholars Richard and Daniel Susskind have called the ‘grand bargain’ of the professions. In their book The Future of the Professions: How Technology Will Transform the Work of Human Experts (2022), they argue that society grants white-collar workers prestige, status and generous remuneration in exchange for the promise of expertise and ethical conduct. Doctors enjoy a monopoly on diagnosis and treatment, controlling entry into the profession through licensing and regulation. In return, the public trusts them to act in patients’ best interests. Indeed, doctors are among the world’s most trusted professionals – well above journalists, politicians and religious clerics. But the bargain is not always honoured. The privileges are obvious – high salaries, cultural prestige and political clout. The obligations, less so. When professional bodies resist patient access to records, block the autonomy of nurse practitioners, or deny the scale of diagnostic error, they are protecting the guild rather than the public.

None of this is to deny that medicine is often meaningful work, or that many doctors see their role as a calling. But career satisfaction, prestige and pay are not arguments for preserving a profession. Most of us do not enjoy our work. The privileges of physicians, and any special pleading based on meaningfulness, must therefore be independently scrutinised. Again, we must underline in bold, red ink: the central question is whether the current arrangement delivers reliable, accessible care – and, if it does not, whether new models, including AI, might do better.

If Dr Bot is to have a role, it will not be as an imitation priest in a white coat

Before we even get to the evidence on whether Dr Bot or human doctors perform better – or whether some hybrid arrangement might work best – many doctors bristle. As a health informaticist (an empirical researcher specialising in healthcare data), I have surveyed doctors on this question for years, across multiple countries, and can attest that their defence is almost always the same. AI lacks what they call judgment. It has no intuition, no hunches, no instinct or presentiment, no feel for the patient. This perspective was encapsulated by the anaesthetist Ronald Dworkin, who wrote:

Because AI lacks intuition, suspicion, instinct, presentiment and feeling, it lacks judgment in the human sense. It can only work with abstractions – that is, with words. It can never get behind the words. It can never get deep inside matters.

Richard Susskind has steadfastly argued that we tend to elevate human judgment as if it were intrinsically valuable, rather than asking a more basic question: what are the problems to which human judgment is the solution? If accurate diagnosis is the problem, it is unclear whether human judgment – namely that of doctors – must be the only solution. Here, Susskind argues, we must carefully distinguish between processes and outcomes. In medicine, the profession often preserves its processes – the rituals of consultation, the authority of the bedside manner, the art of medicine – rather than focusing on outcomes.

But this is a classic case of process-thinking. The argument preserves the mystique of how decisions are made, rather than asking the only question that matters: what outcomes do those decisions produce for patients? A person arriving in the emergency department with crushing chest pain does not care whether the diagnosis comes from human intuition or from an algorithm; they care only that it is correct, delivered quickly, and followed by the right treatment. As Susskind puts it in his most recent book How to Think About AI: A Guide for the Perplexed (2025): ‘People who seek expert help do not generally approach their professional advisers saying, “Good morning, I would like some judgment please.” Judgment isn’t the end in itself.’ As Susskind urges, people want peace of mind, not necessarily therapists; they want health and access to accurate information, not necessarily medical appointments.

Of course, the picture is a little messier. Many of us have grown attached to familiar processes – the ritual of the waiting room, the authority of the white coat, the reassuring cadence of a consultation. But some of this is mere habit, not preference: we accept these rituals because they are what we have always known. Patients may enjoy the familiarity, but in the end – when it is a matter of diagnoses delayed or missed – they care less about whether wisdom is dispensed through a kindly doctor or a computer interface, than about whether they receive an accurate diagnosis, effective treatment and humane care.

What matters is not sentiment but whether care can be made more accurate, more timely and more humane

These questions, then, are not only about abstract principles or clever thought experiments. They cut through the texture of real lives – my family’s included – and they expose the fault lines of medicine itself. Doctors are fallible not simply because they are tired, overworked or badly resourced, but because they are human: bound by psychology, shaped by habit, and protected by institutions that defend their own interests. Technology will not save us if it simply reproduces medicine’s old flaws in digital form. Every innovation arrives with its own complications, even as it solves others. That is no reason to shut down progress, or to rig the terms of debate in advance. AI carries serious ethical and political risks – from deepening inequalities to new forms of harm, from lost jobs to environmental costs. These concerns deserve scrutiny. But what will not help is straw-manning the technology, or deferring endlessly to the very profession whose survival is at stake. Doctors cannot be the only ones asked to judge their own replaceability.

And so, to the present question: the very possibility of a Dr Bot. What matters is not sentiment but whether care can be made more accurate, more timely and more humane. I know what turns on that question: years lost to misdiagnosis, treatments delayed, conditions overlooked until it was too late. It is natural that doctors defend their intuition, hard-won through gruelling training. But that defence is made from rarefied air, far above the lived reality of the patients medicine fails. My siblings endured years of self-doubt while wrong diagnoses passed as routine. Such omissions, biases and errors are not abstractions, as physicians may wish them to be. They are real harms, far too often unseen.

If Dr Bot is to have a role, it will not be as an imitation priest in a white coat, but as part of a wider reckoning with what medicine is for and who it should serve. The point is not to preserve a profession but to reimagine a practice.

In that light, the really pressing question is not whether AI can replace doctors. The evidence for or against that will emerge soon, as it should. What matters first is whether we are willing to release ourselves from the myths that keep medicine tethered to its own limitations. Only then can we begin to imagine and properly question whether new systems and processes could serve patients better than the ones we have now.