You notice an ant struggling in a puddle of water. Their legs thrash as they fight to stay afloat. You could walk past, or you could take a moment to tip a leaf or a twig into the puddle, giving them a chance to climb out. The choice may feel trivial. And yet this small encounter, which resembles the ‘drowning child’ case from Peter Singer’s essay ‘Famine, Affluence, and Morality’ (1972), raises big questions. Are ants sentient – able to experience pleasure and pain? Do they deserve moral concern? Should you take a moment out of your day to help one out?
Historically, people have had very different views about such questions. Exclusionary views – dominant in much of 20th-century Western science – err on the side of denying animals sentience and moral status. On this view, only mammals, birds and other animals with strong similarities to humans merit moral concern. Attributions of sentience and moral status require strong evidence. Human exceptionalist perspectives reinforced this view as well, holding that other animals were created for human use.
By contrast, inclusive views – particularly present in various Eastern and Indigenous cultures throughout history – err on the side of affirming sentience and moral status. Traditions like Jain philosophy teach reverence for all life, extending moral concern even to ants and bees. Poets like William Blake have drawn attention to the fragility of insect lives, suggesting kinship with humanity. On this view, when in doubt, we should protect rather than neglect, since ignoring the possibility of sentience risks leading to terrible mistakes.
Both views capture important insights. But each also carries a risk: inclusion risks misallocating scarce resources, whereas exclusion risks neglecting vulnerable beings. Each view is thus one-sided, addressing one of these risks but not the other. Is there a way to address both risks at the same time, especially when making decisions affecting large populations? After all, these questions are not limited to the occasional ant in a puddle. They extend to the quadrillions of invertebrates killed by humans each year. Soon, they may extend to AI systems too.
This is why we support a middle-ground approach that takes the best from both sides. It goes by different names, but we can call it a probabilistic approach here, since it combines higher or lower probabilities of sentience and moral status with proportionally stronger or weaker forms of protection. This is how we approach high-stakes decisions in other policy domains: by assessing the evidence, estimating the probability and severity of harm, and selecting a proportionate response. We can, and should, do the same here.
Clarity begins with recognising that at least three questions are in play. The first is scientific: are only mammals, birds and other vertebrates sentient, or can invertebrates, AI systems and other beings be sentient too?
The second is ethical: do only sentient beings deserve moral concern, or can (non-sentient) agents, living beings and other entities deserve moral concern too?
The third is practical: what kinds of policies are we able to achieve and sustain, taking into account our responsibilities, limitations, and other relevant factors?
These questions often get blurred. Queries like ‘Do individual ants deserve moral concern?’ risk conflating the scientific question of whether ants are sentient, the ethical question of whether only sentient beings deserve moral concern, and the practical question of whether a policy of caring for ants in a particular way is achievable or sustainable. Making sound decisions requires teasing apart these questions while seeing how they interact.
Fortunately, we have tools for achieving this goal. Scientifically, we can assess how likely particular beings are to possess capacities like sentience, by evaluating the available evidence. Ethically, we can assess how likely these capacities are to matter morally, by evaluating the available arguments. Practically, we can then put it all together to assess how likely these beings are to matter – and how to factor this into the way we live our lives.
We can see how the process works by approaching it step by step.
The first step is to estimate probabilities of sentience. Scientists now agree that many animals once dismissed as ‘mere machines’ display surprising complexity. Elephants mourn their dead, octopuses solve puzzles, and bees can learn to count. Moving forward, AI systems – actual machines – will exhibit increasingly sophisticated behaviours. The question is: how confident can we be that these behaviours are best explained by the capacity for subjective experience?
This question is challenging, since the only mind any of us can directly access is our own (and even then, only imperfectly), making it hard to know what, if anything, it feels like to be anyone else. This ‘problem of other minds’ becomes particularly stark when we consider octopuses with decentralised neural systems, AI systems with silicon-based architectures, and other beings that deviate substantially from the human paradigm.
However, questions about sentience are not beyond the reach of science. Even if we can never know for sure what, if anything, it feels like to be an octopus or a robot, we can still improve our understanding of the distribution of sentience by examining nonhumans for behavioural, computational or anatomical ‘markers’ of sentience – features that correspond to subjective feelings and emotions for members of our own species.
For example, insects possess nociceptors and display complex behaviours like wound-tending, stimulus avoidance, and stress responses that suggest conscious experience. Similarly, while we might not be able to give much evidential weight to the language outputs and other behaviours of current AI systems, we can still examine their underlying computational architectures for features analogous to conscious processing in biological systems.
We may not be able to treat these markers as proof of sentience, but we can still treat them as evidence. And the stronger our evidence becomes, the higher our degree of confidence should be. Expressing this confidence in probabilistic terms, we can say that stronger evidence warrants assigning a higher probability of sentience, while weaker evidence warrants assigning a lower one. New evidence can also warrant updating these probabilities over time.
For both animals and AI systems, this is difficult work – which is part of why degrees of confidence, not all-or-nothing decisions, are the best way to capture the current state of knowledge. While scientists and philosophers might estimate probabilities about animal and AI sentience differently than, say, meteorologists estimate probabilities about the rain, the same basic commitment to modelling uncertainty is at play in both cases.
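For readers who want to see the mechanics, here is a minimal sketch of how a single probability of sentience might be revised when a new marker is observed. The prior, the marker and the likelihoods are purely illustrative assumptions, not estimates drawn from the scientific literature.

```python
def update_on_marker(prior, p_marker_if_sentient, p_marker_if_not):
    """Bayes' rule: revise a probability of sentience after observing one marker."""
    p_marker = prior * p_marker_if_sentient + (1 - prior) * p_marker_if_not
    return prior * p_marker_if_sentient / p_marker

# Illustrative numbers only: start from a 5 per cent prior that a species is sentient,
# and suppose a marker (say, wound-tending) is judged four times likelier if it is sentient.
p = update_on_marker(0.05, p_marker_if_sentient=0.8, p_marker_if_not=0.2)
print(round(p, 2))  # about 0.17 -- stronger evidence, higher confidence
```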
The next step is to estimate probabilities of moral status. Ethicists increasingly agree that sentient beings merit moral concern. The question is: how confident should we be that the best justification for this moral concern is sentience (the ability to experience pleasure and pain), as opposed to agency (the ability to set and pursue goals), relationality (the ability to participate in bonds of care or interdependence), or other such features?
This question is challenging as well. When we look back at past generations, we often conclude that they made tragic mistakes in both science and ethics. When future generations look back at us, will they reach a similar conclusion? It can be hard to know without the benefit of hindsight. Yet we must make decisions about moral boundaries using our best available evidence and reasoning, even while acknowledging our potential for error.
We can make progress in ethics in part by examining how general principles apply to specific cases, and then updating our views about both via the method of reflective equilibrium. Consider the principle that moral status requires rationality. Many humans lack this capacity, yet they still matter morally. Cats, dogs and other animals do too. This reveals a limitation of the rationality requirement, and many ethicists now reject this requirement partly for this reason.
Over time, this kind of ethical reflection and discussion can stress-test values and steer us toward more defensible positions. Yet, as in science, there is no guarantee that this process will be fast, easy or reliable. Bad values can persist for centuries until social, political and economic conditions allow them to be effectively challenged. Moral progress thus depends not only on rational enquiry but also on conditions that make open, good-faith enquiry possible.
While this process is underway, degrees of confidence can be useful. Suppose that you take sentience to be the most promising basis of moral status, followed by agency and relationality. You could capture that by assigning high probability to the sentience view, medium probability to the agency view, and a low but non-zero probability to the relationality view.
Importantly, probabilistic ethics does not depend on the idea that morality is ‘objective’. If morality is objective, probabilities can track our confidence in which values reflect the objective truth. If morality is subjective, probabilities can track our confidence in which values will survive our own reflection. In both cases, the basic logic is the same: we give more weight to values we find more plausible, yet we still leave room for doubt.
The final step is to combine these estimates together to inform decisions. Suppose that the best evidence and arguments support a 10 per cent chance that ants are sentient, and a 90 per cent chance that sentience suffices for moral status. This may be taken to yield a 9 per cent chance that ants have moral status. When we combine this estimate with similar estimates regarding agency, relationality and other such features, the probability could increase.
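The arithmetic behind figures like these can be made explicit. The sketch below multiplies the probability that a being has a capacity by the probability that the capacity grounds moral status, and then combines several such routes; the specific numbers, and the simplifying assumption that the routes are independent, are illustrative only.

```python
# Illustrative estimates only: (probability the ant has the capacity,
#                               probability the capacity suffices for moral status)
routes = {
    "sentience":     (0.10, 0.90),  # 0.10 * 0.90 = 0.09, the 9 per cent in the text
    "agency":        (0.30, 0.30),
    "relationality": (0.20, 0.10),
}

# Chance that at least one route to moral status goes through,
# treating the routes as independent purely for simplicity.
p_no_status = 1.0
for p_capacity, p_suffices in routes.values():
    p_no_status *= 1 - p_capacity * p_suffices
p_status = 1 - p_no_status
print(round(p_status, 2))  # about 0.19 here: higher than sentience alone would suggest
```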
What should you do with this estimate? Clearly, it does not justify sacrificing your life for an ant. But if an ant is drowning in a puddle and saving them requires only a moment out of your day, then perhaps a 9 per cent chance that the ant is capable of suffering, and that suffering matters morally, is reason enough for you to make this modest sacrifice. After all, if the ant matters morally, then helping them out is good. If not, no big deal.
This case is a reminder that we can scale our interventions to reflect the probabilities and magnitudes of the benefits and harms at stake. For instance, in public health policy, we impose strict quarantine measures when disease spread is likely, and lighter social distancing guidelines when the risk is lower. Similarly, in environmental policy, we ban chemicals when carcinogenic effects are likely, while requiring only warning labels when the risk is lower.
We can take a similar approach to animal welfare policies, AI welfare policies, and other decisions concerning nonhumans. For instance, we can implement strong anti-cruelty measures for mammals, given that the probability of moral status is high. By contrast, we can implement weaker, but still forceful, anti-cruelty measures for insects, given that the probability of moral status is lower, yet still high enough to be worthy of consideration.
Of course, proportionate concern for welfare is never the only factor that matters in practical decision-making. For instance, public health and environmental policy must balance the risk of disease outbreaks and extreme weather events with the cost of social and economic disruption. Questions of practical feasibility, political legitimacy and indirect consequences also shape which policies we can realistically achieve and sustain in practice.
Moral and legal decisions about animals, AI systems and other beings will likewise involve a range of factors, including questions about welfare, rights, justice, feasibility, legitimacy and indirect effects. Instead of replacing such factors, a probabilistic framework for considering welfare risks can complement these other factors by ensuring that nonhuman welfare receives appropriate consideration, even when uncertainty remains.
Any proposal to assign probabilities to questions of moral status will attract scepticism. But while some sceptical concerns are reasonable, what they show is that this kind of probabilistic framework should proceed with care, not that it should be abandoned altogether. We can here consider two sceptical concerns that we take to be reasonable in this way: the problem of subjectivity, and the problem of imprecision.
Objection 1: The problem of subjectivity
A first concern is that assigning probabilities to sentience and moral status is simply too subjective to be useful. When we predict the weather, we can base probabilities on repeat observations: rain falls on a certain percentage of days with particular conditions. But when we estimate a 9 per cent chance that ants matter morally, nothing like that is happening. Thus, many people worry that these estimates are highly subjective and vulnerable to bias.
This worry is reasonable. Humans are more likely to attribute moral status to beings when they look and act like us, and when we use them as companions. That bodes well for cats, dogs and digital pets, but poorly for farmed fish, farmed insects and other digital systems – and the effects can be mixed for, say, pet insects or digital assistants. If probability estimates simply reinforce these biases, they could easily do more harm than good.
However, subjectivity is inevitable in any framework, including both probabilistic and all-or-nothing approaches. The difference is that probabilistic frameworks respond to this predicament by creating space for degrees of confidence and proportional interventions. All-or-nothing frameworks polarise discussion by forcing us to either express greater certainty than the evidence warrants or endorse more extreme policies than our uncertainty warrants. Probabilistic frameworks, by contrast, allow us to acknowledge our current uncertainty transparently and endorse balanced policies that reflect this level of confidence.
We also have tools for navigating different, conflicting risk assessments when making shared decisions about what to do. Democratic institutions can aggregate diverse probability estimates through majority rule, weighted voting or deliberative processes that seek consensus around compromise positions. Additionally, cost-benefit analysis can translate different confidence levels into quantifiable trade-offs, and regulatory frameworks can establish threshold probabilities that trigger different levels of protection, allowing society to make collective decisions even when individuals disagree about the underlying facts and values.
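As a toy illustration of that last mechanism, a regulatory framework might map aggregated probability estimates onto tiers of protection, as in the sketch below. The thresholds, the tiers and the use of the median to aggregate divergent estimates are hypothetical choices made for illustration, not proposals defended here.

```python
from statistics import median

def protection_tier(estimates):
    """Map a group's probability estimates of moral status onto a protection tier."""
    p = median(estimates)  # one simple way to aggregate divergent individual estimates
    if p >= 0.75:
        return "strong protections (e.g. strict anti-cruelty rules)"
    if p >= 0.25:
        return "moderate protections (e.g. welfare standards in farming and research)"
    if p >= 0.05:
        return "weak protections (e.g. monitoring and reporting requirements)"
    return "no special protections for now; revisit as evidence changes"

print(protection_tier([0.90, 0.85, 0.95]))  # e.g. mammals: strong protections
print(protection_tier([0.05, 0.20, 0.15]))  # e.g. insects: weak protections
```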
Objection 2: The problem of imprecision
A second concern is that probabilities suggest a level of precision we cannot achieve. Can we really say with a straight face that an elephant is 98 per cent likely to matter given the evidence, rather than, say, 97 per cent or 99 per cent? Although staunch Bayesians might insist on attempting to capture rational degrees of confidence with precise probability values, others might worry that precise numerical probabilities create an illusion of rigour where none exists.
Here again, we think this concern is legitimate. Questions about sentience and moral status are not like a coin toss, where randomness can be captured with exact numbers. These kinds of uncertainty are messier, and degrees of confidence are often rough guesses rather than calibrated measurements. When we use numbers to represent such messy phenomena, we risk creating an impression of exactness, when in reality our estimates are merely approximate.
However, rejecting probabilistic language altogether throws out a valuable tool. The solution is not to abandon probabilities but to aim for different levels of precision in different situations. In some cases, full precision might be neither possible nor desirable, and so imprecise estimates can be used instead. One option is to use qualitative language (‘high confidence’, ‘medium confidence’, ‘low confidence’), as we often do in public health and environmental reports. Another option is to use probability ranges (‘40-60 per cent’), which can make our estimates more concrete while still acknowledging that they lack full precision, especially at the margins.
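To picture these two options, the same imprecise estimate can be carried around as a range and reported either numerically or as a qualitative band; the cut-offs in the sketch below are arbitrary illustrations, not calibrated standards.

```python
def describe(low, high):
    """Report an imprecise estimate both as a range and as a qualitative band."""
    midpoint = (low + high) / 2
    if midpoint >= 0.7:
        band = "high confidence"
    elif midpoint >= 0.4:
        band = "medium confidence"
    else:
        band = "low confidence"
    return f"{round(low * 100)}-{round(high * 100)} per cent ({band})"

print(describe(0.4, 0.6))  # "40-60 per cent (medium confidence)"
```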
In other cases, the aspiration toward precision can still be useful for communicating clearly, even if our estimates remain rough. In 2011, the then US president Barack Obama asked his advisers to assign probabilities to the claim that Osama bin Laden was in a compound in Pakistan, partly because qualitative terms like ‘likely’ or ‘unlikely’ could conceal real differences of judgment. Translating our estimates about sentience and moral status into precise probabilities can similarly be a useful exercise. In all cases, the key is to keep perspective – acknowledging that the risk of false precision can still be present even when precision is a useful tool.
If we take seriously the idea that beings with a non-negligible chance of moral status deserve proportionate care, then our practices and institutions must reflect this fact. This does not mean restructuring society around every ant and bee – there are real limitations to our knowledge, power and political will that should not be wished away.
Rather, it means letting our actions scale with probabilities: weak protections for beings with low chances of mattering, strong protections for those with high chances. The challenge is translating this principle into practical rules and policies that give appropriate consideration to beings with different chances of mattering, while still working within our limitations.
Invertebrate farming
Invertebrate farming has emerged as a frontier of industrial agriculture. Octopus farming is under development in countries such as Spain and Japan, and the possibility of high-density tanks, fish-based diets, and slaughter after a year raises concerns about both welfare and sustainability. Crustaceans such as shrimps, lobsters and crabs are farmed in the billions as well, typically under intensive conditions, for international seafood markets and restaurants. And insects such as crickets, mealworms and black soldier flies are now farmed in the trillions, typically in stacked bins under automated conditions to produce animal feed and human food.
In 2021, the philosopher Jonathan Birch and his team released a report assessing behavioural, neurobiological and physiological markers of sentience in cephalopod molluscs like octopuses and decapod crustaceans like lobsters. Following the report, the UK amended the Animal Welfare (Sentience) Act to recognise these animals as sentient beings, extending them legal protections in farming, research, and transport. In 2024, the New York Declaration on Animal Consciousness reinforced these findings, affirming the realistic possibility of sentience for cephalopod molluscs and decapod crustaceans, extending this to insects, and recommending that governments consider and mitigate welfare risks for these animals in policy decisions.
AI welfare
AI development has accelerated rapidly too, with AI already integrated into core areas of society. Language models such as chatbots and digital assistants are used by hundreds of millions of people for research, education, communication, and decision support. These models are trained on vast datasets, fine-tuned with human feedback, and deployed in a range of sectors, sometimes embedded in physical systems like robots and vehicles. Companies are now racing to produce more capable models and to expand integration into healthcare, finance, governance, and military applications. The near future is likely to involve billions of interconnected AI systems operating across diverse domains with increasing agency.
Is it possible that AI systems will one day become sentient or otherwise morally significant? In 2024, one of us co-authored a report arguing that there is a realistic possibility of consciousness and robust agency in near-future AI systems, and recommending that companies and governments take AI welfare seriously now. Anthropic, the AI company behind the AI assistant Claude, then hired one of the report authors as its first full-time AI welfare researcher, announced a model welfare programme, and included a welfare evaluation in the Claude 4 system card – not because Claude and other models are assumed to be sentient, but rather because current expert disagreement and uncertainty about this issue warrants at least modest resources to assess the issue and prepare a response.
So, should you take a moment out of your day to help the ant drowning in the puddle? We believe that you should. Since there is at least a non-negligible chance that the ant matters morally, you should devote at least minimal effort to helping them. This is the strength of a probabilistic approach: it can guide us not only in law and policy but in everyday life. Even if we might not be able to address the risks of insect farming or AI development by ourselves, we can still take small steps, like sparing the ant in the puddle. When multiplied across individuals, these steps can shape culture, institutions, and laws and policies. In this way, a probabilistic framework allows each of us to play our part in striking the right balance as a society.