The rise of chatbot “friends”

Mar 26, 2025

A Wehead, an AI companion that can use ChatGPT, is seen during the 2024 Consumer Electronics Show in Las Vegas. | Brendan Smialowski/AFP via Getty Images

Can you truly be friends with a chatbot? 

If you find yourself asking that question, it’s probably too late. In a Reddit thread a year ago, one user wrote that AI friends are “wonderful and significantly better than real friends […] your AI friend would never break or betray you.” But there’s also the 14-year-old who died by suicide after becoming attached to a chatbot.

The fact that this is already happening makes it all the more important to have a sharper idea of what exactly is going on when humans become entangled with these “social AI” or “conversational AI” tools.

Are these chatbot pals real relationships that sometimes go wrong (which, of course, happens with human-to-human relationships, too)? Or is anyone who feels connected to Claude inherently deluded?

To answer this, let’s turn to the philosophers. Much of the research is on robots, but I’m reapplying it here to chatbots.

The case against chatbot friends

The case against is the more obvious, intuitive, and, frankly, stronger one.

Delusion

It’s common for philosophers to define friendship by building on Aristotle’s theory of true (or “virtue”) friendship, which typically requires mutuality, shared life, and equality, among other conditions.

“There has to be some sort of mutuality — something going on [between] both sides of the equation,” according to Sven Nyholm, a professor of AI ethics at Ludwig Maximilian University of Munich. “A computer program that is operating on statistical relations among inputs in its training data is something rather different than a friend that responds to us in certain ways because they care about us.”

The chatbot, at least until it becomes sapient, can only simulate caring, and so true friendship isn’t possible. (For what it’s worth, my editor queried ChatGPT on this and it agrees that humans cannot be friends with it.)

This is key for Ruby Hornsby, a PhD candidate at the University of Leeds studying AI friendships. It’s not that AI friends aren’t useful — Hornsby says they can certainly help with loneliness, and there’s nothing inherently wrong if people prefer AI systems over humans — but “we want to uphold the integrity of our relationships.” Fundamentally, a one-way exchange amounts to a highly interactive game. 

What about the very real emotions people feel toward chatbots? Still not enough, according to Hannah Kim, a University of Arizona philosopher. She compares the situation to the “paradox of fiction,” which asks how it’s possible to have real emotions toward fictional characters. 

Relationships “are a very mentally involved, imaginative activity,” so it’s not particularly surprising to find people who become attached to fictional characters, Kim says. 

But if someone said that they were in a relationship with a fictional character or chatbot? Then Kim’s inclination would be to say, “No, I think you’re confused about what a relationship is — what you have is a one-way imaginative engagement with an entity that might give the illusion that it is real.”

Bias, data privacy, and manipulation issues, especially at scale

Chatbots, unlike humans, are built by companies, so the fears about bias and data privacy that haunt other technology apply here, too. Of course, humans can be biased and manipulative, but it is easier to understand a human’s thinking compared to the “black box” of AI. And humans are not deployed at scale, as AI is, meaning we’re more limited in our influence and potential for harm. Even the most sociopathic ex can only wreck one relationship at a time.

Humans are “trained” by parents, teachers, and others with varying levels of skill. Chatbots can be engineered by teams of experts intent on programming them to be as responsive and empathetic as possible — the psychological version of scientists designing the perfect Dorito that destroys any attempt at self-control. 

And these chatbots are more likely to be used by those who are already lonely — in other words, easier prey. A recent study from OpenAI found that using ChatGPT a lot “correlates with increased self-reported indicators of dependence.” Imagine you’re depressed, so you build rapport with a chatbot, and then it starts hitting you up for Nancy Pelosi campaign donations. 

“Deskilling”

You know how some fear that porn-addled men are no longer able to engage with real women? “Deskilling” is basically that worry, but with all people, for other real people.

“We might prefer AI instead of human partners and neglect other humans just because AI is much more convenient,” says Anastasiia Babash of the University of Tartu. “We [might] demand other people behave like AI is behaving — we might expect them to be always here or never disagree with us. […] The more we interact with AI, the more we get used to a partner who doesn’t feel emotions so we can talk or do whatever we want.”

In a 2019 paper, Nyholm and philosopher Lily Eva Frank offer suggestions to mitigate these worries. (Their paper was about sex robots, so I’m adjusting for the chatbot context.) For one, try to make chatbots a helpful “transition” or training tool for people seeking real-life friendships, not a substitute for the outside world. And make it obvious that the chatbot is not a person, perhaps by making it remind users that it’s a large language model.

The case for AI friends 

Though most philosophers currently think friendship with AI is impossible, one of the most interesting counterarguments comes from the philosopher John Danaher. He starts from the same premise as many others: Aristotle. But he adds a twist.

Sure, chatbot friends don’t perfectly fit conditions like equality and shared life, he writes — but then again, neither do many human friends. 

“I have very different capacities and abilities when compared to some of my closest friends: some of them have far more physical dexterity than I do, and most are more sociable and extroverted,” he writes. “I also rarely engage with, meet, or interact with them across the full range of their lives. […] I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity.”

These are requirements of ideal friendship, but if even human friendships can’t live up, why should chatbots be held to that standard? (Provocatively, when it comes to “mutuality,” or shared interests and goodwill, Danaher argues that this is fulfilled as long as there are “consistent performances” of these things, which chatbots can do.)

Helen Ryland, a philosopher at the Open University, says we can be friends with chatbots now, so long as we apply a “degrees of friendship” framework. Instead of a long list of conditions that must all be fulfilled, the crucial component is “mutual goodwill,” according to Ryland, and the other parts are optional. Take the example of online friendships: These are missing some elements but, as many people can attest, that doesn’t mean they’re not real or valuable. 

Such a framework applies to human friendships — there are degrees of friendship with the “work friend” versus the “old friend” — and also to chatbot friends. As for the claim that chatbots don’t show goodwill, she contends that a) that’s the anti-robot bias in dystopian fiction talking, and b) most social robots are programmed to avoid harming humans. 

Beyond “for” and “against”

“We should resist technological determinism or assuming that, inevitably, social AI is going to lead to the deterioration of human relationships,” says philosopher Henry Shevlin. He’s keenly aware of the risks, but there’s also so much left to consider: questions about the developmental effects of chatbots, how chatbots affect certain personality types, and what they even replace.

Even further underneath are questions about the very nature of relationships: how to define them, and what they’re for. 

In a New York Times article about a woman “in love with ChatGPT,” sex therapist Marianne Brandon claims that relationships are “just neurotransmitters” inside our brains.

“I have those neurotransmitters with my cat,” she told the Times. “Some people have them with God. It’s going to be happening with a chatbot. We can say it’s not a real human relationship. It’s not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind.”

This is certainly not how most philosophers see it, and they disagreed when I brought up this quote. But maybe it’s time to revise old theories. 

People should be “thinking about these ‘relationships,’ if you want to call them that, in their own terms and really getting to grips with what kind of value they provide people,” says Luke Brunning, a philosopher of relationships at the University of Leeds.

To him, questions that are more interesting than “what would Aristotle think?” include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it’s time to reconsider these categories and shift away from terms like “friend, lover, colleague”? Is each AI a unique entity?

“If anything can turn our theories of friendship on their head, that means our theories should be challenged, or at least we can look at it in more detail,” Brunning says. “The more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp on?”
