
Sentience is hot these days. Thanks in part to the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient?
While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the capacity to have conscious experiences that are valenced, meaning they feel bad (pain) or good (pleasure). It matters for ethics, because a lot of people think that if an entity is sentient, it deserves to be in our moral circle: the imaginary boundary we draw around those we consider worthy of moral consideration.
While our moral circle has expanded over the centuries to include more people and more nonhuman animals, there are some edge cases we’re collectively unsure about. Should insects have moral rights? What about future AI systems that could potentially become sentient?
The philosopher Jeff Sebo is an expert on this; he literally wrote a book called The Moral Circle. And he argues that it’s helpful to investigate all potentially sentient beings — from bugs to future AIs — in broadly similar ways. So, after receiving a lot of reader questions about both bugs and AIs, and responding to both in recent installments of my Your Mileage May Vary advice column, I reached out to him to talk about how we assess sentience, whether it’s hypocritical to worry about AI welfare while killing insects without a second thought, and why he developed a thought experiment called “the rebugnant conclusion.” Our conversation, edited for length and clarity, follows.
How can we go about assessing whether some creature — say, an insect — is sentient?
Our understanding of insect sentience is still limited, in part because we lack a settled theory of sentience. But we can make progress through “the marker method.”
The basic idea [for this method] is that we can look for features in animals that correlate with feelings in humans. For example, behaviorally, we can ask: Do other animals nurse their wounds? Do they respond to analgesics like we do? And anatomically, we can ask: Do they have systems for detecting harmful stimuli and carrying that information to the brain?
This method is imperfect — the presence of these features is not proof of sentience, and the absence is not proof of non-sentience. But when we find many of these features together, it can count as evidence.
What do we find when we look for these features in insects? In at least some insects, there are systems for detecting harmful stimuli, pathways for carrying that information to the brain, and brain regions for integrating information and supporting flexible decision-making. For example, some insects become more sensitive after an injury, and they also weigh the avoidance of harm against the pursuit of other goals. Some insects also engage in play behaviors — you can find cute videos of bumblebees playing with wooden balls — suggesting that they may be able to experience positive states like joy. Again, none of this is proof of sentience or establishes certainty. But it does count as evidence.
You’ve said that you think insects are about 20-40 percent likely to be sentient. How do you personally deal with bugs that come into your home?
For me, taking insect welfare seriously means reducing harm to insects where possible. If I find a lone insect in my apartment, I try to safely relocate them. In cases where killing them is genuinely necessary, I at least try to reduce their possible suffering, for example by crushing rather than poisoning them. And in cases where harmful methods like poisoning seem genuinely necessary, I take this as a sign that structural changes are needed, such as infrastructure changes that reduce human-insect conflict or humane insecticides that kill insects with less suffering.
Caring for individual insects is valuable not only because of how it affects the insects, but also because of how it affects us.
When I take a moment out of my day to help insects, it conditions me to see them as potential subjects, not mere objects. And if enough people take a moment out of their day to do this, it can contribute to a broader norm of seeing insects this way. This might lead not only to more care for individual insects but also more attention for insect welfare research and policy.
You’ve written that, hypothetically, we could end up determining that large animals like humans have greater capacity to suffer but that small animals like insects have more suffering in total, because there are just so many of them (1.4 billion insects for every person on Earth!).
Utilitarianism says we have a moral obligation to maximize aggregate welfare, which would imply that we should prioritize insect welfare over human welfare. But most of us would balk at that conclusion. Would you?
Here we need to distinguish what utilitarianism says in theory from what it says in practice. In theory, utilitarianism says that if a large number of insects experience more happiness in total than a small number of humans, then the welfare of the insects carries more weight, all else being equal.
This is related to what philosophers like Derek Parfit call “the repugnant conclusion.” They observe that if what matters is total welfare, then it may be better to create a large number of individuals whose lives are barely worth living than a small number of individuals whose lives are very much worth living, as long as it adds up to more happiness overall. I use the term “the rebugnant conclusion” to refer to this idea as it applies in the multi-species context.
In practice, though, utilitarian reasoning is more complex. Yes, we should promote welfare, but we should also respect rights, cultivate virtuous characters and caring relationships, uphold just political structures, and so on — since this kind of pluralistic thinking tends to do more good than single-mindedly trying to promote welfare would.
Utilitarianism also says that we should work within our limitations. We currently have greater knowledge, capacity, and political will for helping humans than for helping insects, and this shapes how much care we can sustain. I think this makes sense, and for me, the upshot is we should gradually increase care for insects while building the knowledge, capacity, and political will we need to do more.
To me, the “rebugnant conclusion” is a reductio ad absurdum that shows how utilitarianism falls short as a moral theory. I just don’t think we can expect humans to care more for insects than they do for themselves and other humans; that expectation ignores the fact that we are biologically hardwired to ensure our own survival and thriving, and that’s an inextricable part of our nature as human moral agents. I’d argue it makes more sense to reject utilitarianism than to ignore that. But it seems like you’d rather keep utilitarianism and just accept the rebugnant conclusion that comes from it — why?
I disagree that this is a reductio for utilitarianism, for at least a couple of reasons. First, I think that this conclusion is more plausible than it might initially appear.
Think about our duties to other nations and future generations as an analogy. Their interests carry more weight than ours do, all else being equal. But we can still be warranted in prioritizing ourselves to an extent for a variety of relational and practical reasons, all things considered. The question is how to strike a balance between impartial and partial reasoning in everyday life. Here, I think that considering the welfare stakes for distant strangers can be a helpful corrective, since it can lead us to care for them more than we otherwise might, while still tending to relational and practical realities. My view is that we should approach our duties to other species in the same kind of way, and this seems like a plausible enough takeaway to me.
Second, every major ethical theory can seem implausible in at least some cases. Suppose that we share the world with a large number of insects and a small number of advanced AIs. Now, suppose that the insects have more welfare in total, the AIs have more on average, and humans fall somewhere in between. To the extent that welfare matters for decision-making, whose interests should take priority, all else equal?
If total welfare is what matters, we should say the insects. If average welfare is what matters, we should say the AIs. Either way, this implication will conflict with our default stance of human exceptionalism.
But part of the point of ethics is to correct for our biases, and this may be what we should do here. In retrospect, we should not have expected the interests of 8 billion members of one species to carry more weight than the interests of quintillions of members of millions of species combined.
When writing about the possibility of bug sentience, you’ve also written about the possibility of AI sentience. And you’ve said that future AI minds might have a lower chance of being sentient than biological minds, but “even if they do, the astronomically large size of a future artificial population could be more than enough to make up for that.” If we end up in a scenario with a gigantic population of AI minds, do you think we should prioritize their welfare over human welfare? Or is it unreasonable to demand that kind of impartiality from humans?
This is a great question. In my answer to the previous question, I considered a scenario where AIs have the most welfare on average but the least in total. But we can also imagine scenarios where AIs are so complex and so widespread that if they have a realistic possibility of being sentient at all, then they have the most welfare both on average and in total.
In that situation, insofar as welfare impacts are a factor in moral decision-making at all, as I think they clearly ought to be, a range of reasonable views might converge on the conclusion that the AIs merit priority, all else being equal.
Of course, as I emphasized in my previous answers, whether we should prioritize them, all things considered, in that scenario is a further question, and it depends on a lot of further relational and practical details. But we should at the very least extend them a great deal of care in that scenario, as we should for other animals.
With that said, a complication is that if we do eventually share the world with a large number of advanced AIs, which currently seems quite likely, then we may not be the only agents who determine what happens. After all, as AIs become more advanced and widespread, they may start to make decisions with us or even for us. In my view, it can help to consider how AIs should treat humans and other animals in these hypothetical future scenarios. And if we think that they should treat us with respect and compassion during their time in power, perhaps this is a sign that we should treat them with respect and compassion during our time in power — not only because how we treat AIs now might affect how they treat us later, but also because thinking about how we would feel in a position of vulnerability can help us better understand how we should behave in our current position of power.
What do you think is more likely to be sentient today: an ant or ChatGPT? I think it’s definitely the former, so it seems bizarre to me that some people spend a lot of time worrying about whether current AI systems may be sentient, while at the same time killing insects without a second thought or eating animals from factory farms. Why do you think this is happening — and is it hypocritical?
I agree that an ant is more likely to be sentient than ChatGPT today. But I also think that near-future AIs will be more likely to be sentient than current ones. Companies are racing to build AIs with advanced perception, attention, memory, self-awareness, and decision-making. We have no way of knowing for sure whether the companies will succeed, or whether these capacities suffice for sentience. But we also have no way of ruling it out at this stage, and even a realistic possibility warrants taking the issue seriously now.
At minimum, I think that means acknowledging AI welfare as a serious issue, assessing models for welfare-relevant features, and preparing policies for treating them with appropriate moral concern. Otherwise, we risk repeating the mistake we made with animals: scaling up industrial uses of them in ways that will make it harder for us to treat them well once the evidence of sentience is stronger.
With that said, I agree that caring a lot about AI welfare while not caring at all about animal welfare can involve a kind of hypocrisy. There are real differences between animals and AI systems, but there are also real similarities. In both cases, we have to make decisions that affect nonhumans without knowing for sure what, if anything, it feels like to be them. I think it helps to assess these issues in broadly similar ways while acknowledging the differences.