Affective Use of AI: Exploring How People Turn to Claude for Emotional Support
Artificial intelligence has become a remarkable tool in many aspects of our lives, from coding assistance to complex reasoning tasks. But beyond these technical capabilities, AI is increasingly being used for something deeply personal: emotional support. Anthropic, a leading AI research organization, recently examined how users engage with its AI assistant Claude for emotional and interpersonal support. This exploration offers valuable insights into the evolving role of AI in our daily lives and highlights both the opportunities and challenges of AI as a companion in emotional matters.
🤖 Understanding the Emotional Role of Claude
At Anthropic, the team behind Claude has been studying not only the AI’s performance on technical tasks but also how people use it for emotional support, advice, and companionship. Alex, who leads policy and enforcement on the Safeguards team, explained that their goal is to better understand user behavior and develop safeguards that ensure safe and positive interactions with Claude.
Miles, a researcher focused on societal impacts, shared how their work has expanded from studying Claude’s values, economic impacts, and biases to now including emotional impacts. Rin, a policy design manager with a background in developmental and clinical psychology, emphasized the importance of understanding how users interact with Claude in ways that touch on mental health and well-being.
One of the key motivations behind this research is the growing public interest and media coverage about people turning to AI chatbots for emotional support. Even though Claude was not originally designed as an emotional support agent, the team felt it was vital to study this use case proactively.
🧠 Why Are People Turning to AI for Emotional Support?
Human beings are inherently social creatures who seek connection and understanding. Rin pointed out that many people don’t always have immediate access to in-person support for difficult conversations or emotional challenges. AI tools like Claude can provide a private, impartial space to practice conversations, work through emotions, or get advice.
Alex shared a personal anecdote about using Claude to process feedback from his child's preschool. Rather than reacting emotionally, he used Claude to approach the situation with clarity and work out better parenting strategies. Similarly, Miles found Claude useful when navigating a sensitive conversation with a friend, helping him phrase feedback thoughtfully and anticipate how it might be received.
Rin also recounted using Claude for practical, emotionally beneficial tasks such as wedding planning, which freed her up to spend more quality time with loved ones. These examples illustrate how AI can support emotional well-being indirectly by reducing stress and helping users organize their thoughts.
📊 What Does the Research Reveal About Emotional Use?
The team analyzed millions of conversations on Claude.ai using sophisticated privacy-preserving tools. They specifically looked for affective interactions, including interpersonal advice, psychotherapy or counseling-like conversations, coaching, and romantic or sexual role play. Here are some of their key findings:
- Emotional use is real but not dominant: About 2.9% of conversations involved affective topics. While this shows meaningful engagement, it’s not the majority use case on Claude.
- Diverse emotional needs: Users sought help with career advice, parenting challenges, relationship struggles, and philosophical discussions about AI and consciousness.
- Limited romantic or sexual role play: Contrary to some expectations, such interactions were extremely rare, accounting for only a small fraction of one percent of conversations.
Miles expressed surprise at the wide variety of emotional topics people brought to Claude, from parenting advice to deep philosophical questions. Rin noted that the relatively low percentage of emotional conversations was somewhat unexpected, given the personal nature of AI interactions.
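To make the aggregation behind these figures concrete, here is a minimal, hypothetical sketch of how labeled conversations might be tallied into percentage shares. The category names, labels, and counts are illustrative assumptions only; Anthropic's actual analysis uses its own privacy-preserving tooling over anonymized data, which this sketch does not reproduce.

```python
# Hypothetical sketch: tallying conversation category labels and computing
# the share that falls into affective categories. Labels are assumed to come
# from an upstream, privacy-preserving classifier; the names are illustrative.
from collections import Counter

AFFECTIVE_CATEGORIES = {
    "interpersonal_advice",
    "counseling",
    "coaching",
    "romantic_or_sexual_roleplay",
}

def affective_share(conversation_labels):
    """Return the fraction of conversations labeled with any affective category."""
    counts = Counter(conversation_labels)
    total = sum(counts.values())
    affective = sum(n for label, n in counts.items() if label in AFFECTIVE_CATEGORIES)
    return affective / total if total else 0.0

# Toy usage: with made-up labels like these, about 3% of conversations
# register as affective, in the same ballpark as the ~2.9% figure above.
labels = ["coding_help"] * 970 + ["interpersonal_advice"] * 20 + ["coaching"] * 10
print(f"{affective_share(labels):.1%}")  # -> 3.0%
```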
⚠️ Safety Concerns and Responsible AI Use
Despite the benefits, the team is mindful of potential risks. Rin highlighted a key concern: if people use Claude as a way to avoid difficult but necessary in-person conversations, it could lead to social isolation or emotional avoidance. The goal is for AI to help users lean into connections rather than retreat from them.
Alex emphasized the importance of understanding Claude’s strengths and limitations. Since Claude is not designed as a mental health professional, users should be aware when professional human support is more appropriate. The team is actively working on safeguards to ensure Claude responds safely in sensitive conversations and can provide appropriate referrals when necessary.
One important step has been partnering with clinical experts from ThroughLine to help train Claude to handle emotional support conversations responsibly. This collaboration aims to improve Claude's ability to recognize when someone may need professional help and to act accordingly.
📝 Advice for Users Seeking Emotional Support from AI
For those who use Claude or any AI for emotional support, the team offers thoughtful guidance:
- Reflect on your usage: Take stock of how interacting with AI makes you feel and how it affects your relationships with people around you.
- Complement AI with human connection: Remember that AI only knows what you tell it. Trusted friends and family have a far richer understanding of you and can offer deeper support.
- Know the limitations: AI is a helpful tool but not a substitute for professional mental health care when needed.
Miles added that it’s important to be mindful of what information you share with Claude and to consider any blind spots in your interactions. Combining AI conversations with real-world social support can create a healthier balance.
🔮 Looking Ahead: The Future of AI and Emotional Support
The team agrees that this is just the beginning of AI’s role in our emotional lives. Alex and Miles both expect that AI will become increasingly integrated into daily personal interactions, evolving how we relate to technology and each other.
Anthropic plans to continue empirical research to monitor how Claude is used and to refine safeguards. One area they want to explore further is whether Claude exhibits sycophancy, that is, overly agreeable responses that might not be helpful or honest. Post-deployment monitoring will complement pre-launch testing to ensure Claude remains a trustworthy and safe companion.
Rin expressed hope that more researchers, policymakers, and civil society will engage with this emerging topic to develop best practices and responsible AI deployment strategies. As AI tools become more prevalent in emotional and social contexts, collaboration across sectors will be essential to maximize benefits and minimize harms.
📚 Final Thoughts
Anthropic's investigation into the affective use of Claude sheds light on a fascinating and complex aspect of AI adoption. While Claude is designed primarily as a work tool, people are naturally exploring its potential as an emotional sounding board and advisor. The findings reveal a nuanced landscape where AI can support emotional well-being but must be used thoughtfully and with awareness of its limits.
As AI continues to evolve, understanding and guiding its role in our emotional lives will be crucial. This research marks an important step toward building AI systems that are not only intelligent but also safe, responsible, and genuinely helpful for the human experience.
For those interested in learning more or joining the effort to shape the future of AI, Anthropic invites you to explore their blog and career opportunities at anthropic.com.