
Shannon Vallor says AI does present an existential risk — but not the one you think

Nov 21, 2024


You may have heard the idea that AI is a “stochastic parrot,” mechanistically repeating our words back to us without actually understanding them. But Shannon Vallor, a philosopher of technology at the University of Edinburgh, thinks there’s a better metaphor: AI, she says, is a mirror.

After all, a parrot is another mind — not exactly like ours, but a sentient mind nevertheless. A large language model like ChatGPT is not. It reflects back to us our own images, words, and whatever else we’ve put into its training data. When we engage with it, we’re a lot like Narcissus, the mythical boy who sees his beautiful reflection in the water and becomes transfixed by it, thinking it’s another person. 

In her new book The AI Mirror, Vallor argues that it’s our tendency to misperceive AI as a mind — and to think it may even have the capacity to be more moral than us because it’s more “objective” and “rational” — that poses a real existential risk to humanity, not anything the AI itself might do.  

I talked to Vallor about what exactly she thinks that risk is, what she believes it’s doing to human agency, and how it feeds into transhumanism, the movement that says humans should proactively use technology to augment and evolve our species. Here’s a transcript of our conversation, edited for length and clarity. 

You’re not the sort of person who’s kept awake at night by fears that AI will become a conscious mind that maliciously decides to enslave us all. But you do argue that there is a real existential risk posed by AI. What is it? 

The risk I talk about is existential in the philosophical sense of really striking at the core of human beings and our ability to give meaning to our existence. One of the fundamental challenges of being a human being, but also the thing that we often treasure, is that we are not locked into a set of reflexes or mindless responses — that we can, in fact, use cognition to break away from our habits and the social scripts we’re accustomed to, and choose to move in new directions. We can choose new moral patterns for society, new political structures or norms. 

At every point in human history we see moments where individuals or communities chose to alter the pattern. But that requires confidence in ourselves, in one another, in the power of human agency to do this. And also a sort of moral claim on our right to have that power. 

One thing I hear in every country I travel to when speaking about AI is: Are humans really any different from AI? Aren’t we at the end of the day just predictive text machines? Are we ever doing anything other than pattern matching and pattern generation? 

That rhetorical strategy is actually what scares me. It’s not the machines themselves. It’s the rhetoric of AI today that is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom. That’s the existential threat, because that’s what will enable humans to feel like we can just take our hands off the wheel and let AI drive.

And in some quarters the rhetoric is not only that we can, but that we should let AI do the hard thinking and make the big decisions, because AI is supposedly more rational, more objective.

Exactly — and that you’re somehow failing to be efficient, that you’re failing to cooperate with progress, that you’re failing to enable innovation, if you don’t go along with this. 

When you write about this loss of confidence in human agency, you draw on the existentialists, who argue that there’s no intrinsic meaning in life — it’s something humans need to choose how to create. You especially draw on José Ortega y Gasset, a Spanish philosopher of the early 20th century, and his notion of “autofabrication.” Why is that a key idea for you in the context of AI?

Ortega thought about the core problem of human meaning, which is that we have to make it ourselves. And that’s what he meant by autofabrication, which literally just means self-making. He said that this is the fundamental human condition: to make ourselves over and over again. The job never stops, because our cognitive equipment has the ability to take us into a realm of self-awareness such that we can see what we’re doing and actually decide to change it. 

That freedom is also, from an existential standpoint, kind of a burden, right? Autofabrication is something that takes a fair amount of courage and strength, because the easier thing is to let someone else tell you that the script is final and you can’t change it, so you might as well just follow it, and then you don’t have to burden yourself with the responsibility of deciding what the future looks like for yourself or anyone else.

So what that rhetoric around AI is telling us is to surrender our human freedom, and to me that’s such a profound violation of what is good about being human. The idea that we should give that up would mean giving up the possibility of artistic growth, of political growth, of moral growth — and I don’t think we should do that. 

One of the ways this rhetoric shows up is in the field of “machine ethics” — the effort to build moral machines that can serve as our ethical advisors. Transhumanists are especially bullish about this project. The philosopher Eric Dietrich even argues that we should build “the better robots of our nature” — machines that can outperform us morally — and then hand over the world to “homo sapiens 2.0.” What’s your read on that?

I’ve been skeptical about the moral machines project, because it usually ends up just trying to crowdsource moral judgments [and train AI on those human intuitions] — but the whole point is that the crowd isn’t always right! And so it’s a very dangerous thing to crowdsource moral judgments. If you were using a crowdsourced moral machine that was aggregating moral judgments in Nazi Germany, and then tried to automate decisions elsewhere with that, you would be contributing to the expansion of a morally criminal enterprise. 

Crowdsourcing does seem like a problematic approach, but if we’re not going off what the general population thinks, what are we doing instead? Are we proposing to follow a few philosopher-kings, in which case there may be concerns about that being undemocratic? 

I think there always has been a better route, which is to have morality remain a contested territory. It has to be open to challenge. Understanding what it is to live well with others and what we owe to one another — that conversation can’t ever stop. And so I’m very reluctant to pursue the development of machines that are designed to find an optimal answer and stop there. 

Right — just operating within what people say about moral norms today seems very different from what you call “standing in the space of moral reasons.” Spell out what you mean by that. 

The “space of reasons” was a concept developed by the philosopher Wilfrid Sellars. It’s the realm in which we can explore each other’s reasons for believing something, where we can justify and seek justification from one another. Other philosophers later adapted his idea of the logical space of reasons to be able to think about the moral space of reasons, because we do this in morality too: when we make moral claims upon one another, especially if they’re new and unfamiliar, we have to justify them. Our reasons have to be accessible to one another, so we can figure out what we jointly recognize and accept. 

I think if we had a truly moral machine, it would be able to stand in that space with us. It would be able to articulate reasons and appreciate our reasons, and negotiate those reasons with us in a way that wasn’t just mirroring the consensus that we’d already reached. Because any machine that’s just going to mirror the familiar moral pattern can get into trouble if we end up in a situation where the environment has changed or is new in some way. 

This reminds me of a particular virtue you write about a lot: practical wisdom, or phronesis, to use the ancient Greek term. What is that and why is it so crucial?

Aristotle wrote that we build up the virtues, like honesty, by practice and habituation. Lying is much easier and gets you what you want, but once you get in the habit of telling the truth, you can actually build up a character where being truthful is what comes easily, and you might even have to struggle in the rare case when you need to tell a lie. 

But there are moments where relying on those habits that you’ve built up can actually lead to harm, because something in the situation is new, and your old habit might not be well adapted to the current situation. Wisdom is the intellectual virtue that allows you to recognize that and change your cultivated response to something better. For example, in the civil rights movement, people were able to say: normally, following the law is the moral thing to do, but right now we realize that it isn’t, and in fact civil disobedience in this context is morally necessary. 

Practical wisdom is built up by practice just like all the other virtues, so if you don’t have the opportunity to reason and don’t have practice in deliberating about certain things, you won’t be able to deliberate well later. We need a lot of cognitive exercise in order to develop practical wisdom and retain it. And there is reason to worry about cognitive automation depriving us of the opportunity to build and retain those cognitive muscles. That’s the risk of intellectual and moral deskilling. It’s already happening and I think we have to resist it. 

When I try to give a charitable read on the transhumanist trend we’re seeing, I think the core emotion underlying it is shame about the human condition. And after two world wars, the use of nuclear weapons, a climate crisis, and so on, it kind of makes sense that humanity would be feeling this shame. So I can psychologically understand how there might be an impulse to run away from all that humanness and move toward machines we think will be more objective, even though I don’t agree with it. And I think the place where I struggle is, how do we know to what extent it does make sense to transform ourselves using technology, without sinking into a profound anti-humanism?

There’s a kind of emptiness in transhumanism in that it doesn’t know what we’ve got to wish for; it just wishes for the power to create something else — to create freedom from our bodies, from death, from our limitations. But it’s always freedom from, never freedom for. Freedom for what? What is the positive vision that we want to move toward?

There’s a deep optimism I have about the human condition. I think morality is not just driven by fear — it’s driven even more by love, by the experience of mutual care and solidarity. Our very first experience of the good is being cared for by another, whether that’s a mother or father or nurse. To me everything else is about just trying to pursue that in new and more expansive forms. So for me there is a freedom for, and it’s rooted in what it is to be a human animal.

Could there be other creatures who are better than we are? I actually think that’s a question that doesn’t make sense. Better than what? They might be better at being what they are, but I think morality is rooted in a particular form of existence that you have. We exist as a particular kind of social, vulnerable, interdependent animal with a lot of excess cognitive energy. All those things factor into what it is to be moral as a human. For me, this abstraction — the idea of some pure universal morality that creatures who are completely unlike us could somehow do better than we can — I think that just fundamentally misunderstands what morality is.
