
Can a chatbot make you feel less lonely?

As AI chatbots like ChatGPT, Google Gemini and Microsoft Copilot get better at engaging in conversation and picking up on emotional cues, millions of Americans are interacting with them in everyday life. A June 2025 Pew Research Center survey found that 34% of U.S. adults—and 58% of adults under 30—have tried ChatGPT.

Jason Thatcher, a professor of information systems at the Leeds School of Business, is in the early stages of a research project that explores how emotionally adaptive chatbots could be designed to better support users, including whether they might address loneliness.

“If we’re going to design AI that’s emotionally sensitive and able to adapt to people’s identities and ways of thinking, then loneliness is an obvious place to focus because it’s a real problem,” he said.

CU Boulder Today recently talked with Thatcher about his research into emotionally adaptive chatbots and how design choices could shape human-AI interaction.

How should chatbots adapt to what users actually want from them?

People don’t always want the same kind of interaction from a chatbot. Sometimes they need a friend or companion; other times, a teammate, mentor or straightforward expert. 

A well-designed chatbot should adjust both what it says and how it says it—including tone, clarity, cognitive demand and alignment with the user’s values and context. Emotional responsiveness doesn’t always mean being warm or encouraging. Sometimes users want clear, direct guidance. The goal is to meet the user’s needs in that moment rather than assuming one style fits all.

Why focus on loneliness?

Loneliness is widespread and often described as a crisis. If emotionally adaptive AI is meant to improve lives, helping people feel less lonely is a meaningful, socially relevant starting point.

How will you study whether adaptive chatbots reduce loneliness?

We plan to track people over time and check in regularly through experience sampling. The goal is to see whether these adaptive designs actually make users feel less lonely—something we don’t yet know.

Are there risks to designing emotionally adaptive chatbots?

Yes. People can become overly reliant on bots that feel supportive or persuasive. There’s also a risk of manipulation, and a risk that AI could replace or weaken human connections rather than complement them. That’s why it’s important for designers to set clear boundaries.

What should designers keep in mind when building chatbots?

Not every chatbot should be a friend. Designers should build bots for the role users need—whether that’s companion, teammate, expert or coach—rather than assuming emotional closeness is always the goal.

Can you give examples of matching a bot’s style to its task?

A companion bot might focus on support and reducing loneliness. A productivity bot should minimize cognitive load. A learning bot might nudge someone firmly to stay on track, while a coach or mentor bot should offer more authoritative guidance.

How do you prevent chatbots from becoming too human-like or crossing boundaries?

One important step is asking users directly what they’re comfortable with. Systems that feel too human can also make people uncomfortable or even repel them. Designers need to be careful about how realistic or emotionally expressive bots become and give users control over those settings.

We have to be really careful about the boundary conditions. When is the bot being a good helper, and when does it start to become manipulative? When is it actually leading people to believe that it cares? We don’t want people to be fooled into thinking the bot is sentient.

Any advice for people just starting with AI chatbots?

Learn the basics of what chatbots can and can’t do. Be mindful of what you upload, because content shared with a chatbot could be stored or subpoenaed. And remember that just because a bot feels conversational doesn’t mean it’s a person.

Experiment to understand its capabilities. For example, ask the bot to take on a specific perspective to challenge your assumptions and provide analytical feedback instead of always agreeing.

Ask what a skeptical expert or an adversarial attorney would think of your argument. Ask it to give you a smart, sensitive person’s reaction to an email you wrote, or to take on the role of a fussy copyeditor. Just remember that the bot is approximating based on what you told it to do.

How do you see AI fitting into human work and creativity overall?

AI isn’t replacing human skill or judgment. Just as spellcheck changed writing without ending it, AI can support decision-making and creativity, helping people think more clearly rather than doing the thinking for them.

What’s the goal of this research?

What we’re really interested in is how people feel when they interact with these systems. If the bot listens well and people feel like they can communicate effectively, that’s meaningful. But we have to ask: Are they actually less lonely, or just interacting more with technology? That’s what we want to find out.


CU Boulder Today regularly publishes Q&As on news topics through the lens of scholarly expertise and research/creative work. The responses here reflect the knowledge and interpretations of the expert and should not be considered the university position on the issue. All publication content is subject to edits for clarity, brevity and university style guidelines.