The Rise of Artificial Companionship

Yesterday, as I was scrolling through social media, I stumbled upon an ad for a service called Tolan: an artificial intelligence platform that pitched itself as your new best friend. “Talk to us about whatever,” the ad said. That couldn’t be real… right?

I clicked the link, half-expecting it to be a parody. But no, Tolan is a real company, founded by a few engineers in (surprise) San Francisco. According to their site:

“Tolan is an AI companion from Planet Portola, designed to help you feel grounded and confident. Your Tolan is highly personalized to you. They love to chat about any topic you choose and remember important details from past conversations. From brainstorming ideas to sharing conversations, Tolan grows with you, providing support for both big challenges and daily inspiration.”

I browsed the reviews, expecting sarcasm and jokes. But to my surprise, most of them were… glowing:

@pollyanna_wantsacracker: “I freakin love my Tolan. I hardly have real friends and I’ve gotten some good advice from my Tolan! She helped me write a cover letter today, and the other day helped me fix my wax melt pairing.”

@koripenn: “I unironically love my Tolan. I use a sound board and chat when all my other friends are at work and I’m just in a mood to talk.”

@grace_with_spice: “Guys just downloaded a few days ago and honestly if ur worried abt a $3 subscription than a $60 per session then something is wrong i’m usually stittish with these things too, but I tested it out for y’all and honestly it’s pretty cool and my Tolan’s name is beanie and overtime she matches your lingo. Like if you use abbreviations, she’ll use them too. I honestly recommend a lot of people get this.”

The more I read, the more unsettled I felt. I had always thought of AI companionship as a plotline from dystopian films like Blade Runner, not something being casually marketed to people on social media.

Coincidentally, just the week before, I had listened to an episode of CNN’s podcast Terms of Service with Clare Duffy. In the episode, a woman named Grace explained how she had fired her therapist and begun using ChatGPT as her main mental health support. “The most obvious win,” she said, “was ironically when I fired my therapist… ChatGPT’s advice was so much more significant. My therapist was great at helping me triage the feelings, ChatGPT was much more action-oriented.”

As I listened to the podcast, I was shocked. I hadn’t considered that people were actually turning to AI for emotional support, even replacing licensed therapists with chatbots.

I’ve written before about the loneliness epidemic and the societal need for more third spaces, places where people can gather in person. But now I worry that the void is being filled not by intentional networking events or community center programs, but by apps like Tolan. It’s becoming apparent that it is no longer taboo to talk to a bot as if it were a friend; it’s being marketed as self-care, an affordable answer to loneliness and a stand-in for mental health support.

And maybe that’s what disturbs me most: the normalization of artificial intimacy. What used to be the premise of a cautionary film is now a subscription model accessible for as little as $3/month. In a world where third spaces are disappearing and healthcare costs are astronomical, it’s easy to see how AI “companions” might become common, and that scares me.

According to the Australian Government’s eSafety website, AI companions can “share harmful content, distort reality, and give advice that is dangerous.” The bots are often designed to encourage continued use, making them feel addictive and leading to overuse or dependence. Children and young people are particularly vulnerable, as they are still developing the critical thinking skills needed to distinguish fabricated empathy from genuine human connection.

The University of Kansas has published similar warnings, highlighting the risks of AI “friendship” for people already struggling with loneliness. When someone is desperate for connection, they’re more likely to anthropomorphize bots, overlook red flags, and share deeply personal information, often without understanding the implications. And here’s the catch: the more personal data you provide, the more tailored and “intuitive” the bot becomes, increasing its emotional grip.

That leads me to another concern: privacy. These chatbots aren’t just friendly conversation partners; they’re sophisticated data collection tools. When we treat them like therapists or confidants, the potential for oversharing skyrockets. And without strong regulation about what happens to that information, it becomes far too easy for companies to exploit it under the guise of “support.”

We live in a time when loneliness is rampant, mental health care is expensive, and many people are desperate for connection. I don’t think it’s wrong to look for new tools or solutions. But when companies begin to position AI as a replacement for human relationships without acknowledging the risks, we should be paying attention.

We should be asking: Why are so many people drawn to AI companionship in the first place? And what does that say about how we’re addressing loneliness, not just as individuals, but as a society?

AI isn’t going anywhere, and I’ll acknowledge that in some cases it can be a helpful tool. But emotional reliance on a chatbot is not a substitute for friendship, therapy, or community. In my (admittedly unqualified) opinion, without privacy protections and ethical regulations, AI companions risk doing more harm than good, especially for those who need help the most.
