Why People Prefer Chatbots Over Colleagues for Discussions: A Deep Dive into the Rise of AI Conversational Agents

Web Editor

August 22, 2025


The Allure of Chatbots: Empathy, Anonymity, and Immediate Availability

Why do people find it easier to converse with chatbots than with coworkers? The rise of generative artificial intelligence (AI) is not only transforming how we work but also reshaping how we interact. Tools such as ChatGPT, Gemini, Copilot, and DeepSeek have become outlets for exploring personal sentiments and thoughts.

Chatbots, as Amazon Web Services explains, are programs that mimic human conversation and can provide in-depth information. They can respond with a level of empathy that tends to please users.

This sense of trust explains why people turn to conversational agents for emotional support or advice on sensitive topics. A study published in the Journal of Medical Internet Research (JMIR) highlights that people in Western countries are more willing to share mental health issues with AI.

  • Ease of Sharing: Employees prefer chatbots for discussing stress, anxiety, or conflicts with colleagues or superiors rather than seeking help from counselors, because AI promises anonymity and privacy.
  • Immediate Availability: Chatbots are available 24/7, which is crucial for those needing immediate help at any time of the day. This aspect is especially valuable to the 31% of workers who fear judgment if they express their need for psychological help, according to a study on workplace well-being by the Association for Computing Machinery.

However, the same study acknowledges that these virtual agents can provide “misleading information or dangerous advice,” and that users who rely on AI may stop practicing their social skills.

Reasons for Preferring Chatbots

Communication failures, abuse of power, harassment, and workplace bullying lead employees to prefer chatbots, especially when they feel uncomfortable discussing the issue with another person, according to psychotherapist Joel Cuéllar López.

“They talk to a chatbot because they can see objective responses that won’t put them at risk,” he adds. This trend is more common among introverted individuals and, in some cases, people with narcissistic personality disorder.

Ana Alvarado, a workplace psychologist, notes that stress, anxiety, and demotivation foster this virtual interaction. This phenomenon is even more noticeable among younger generations.

However, she cautions that while AI responses may seem “accommodating,” they are not always objective or suitable for resolving interpersonal issues. In this indulgent space, the user is always right and receives constant praise, which can be dangerous given that AI tends to hallucinate.

Hallucination occurs when a generative AI model produces responses containing information that is not real. The problem is that current systems “were never designed to be purely accurate” but rather to always provide an answer, which increases their margin of error, as Scientific American explains.

Impact on Human Interactions

Despite their innovation and benefits, relying on AI carries risks. A Cureus study on the safety of large language models for addressing depression finds that these models can be “potentially dangerous” in severe mental health cases requiring immediate attention.

Along the same lines, a comparative study published in JMIR Mental Health found that AI agents often offer validation and generic advice without deeper inquiry, making them unsuitable for real therapy.

“The robot is designed to say certain things and will always say what the person wants to hear to feel good,” summarizes Ana Alvarado, who adds that this effect can have a negative long-term impact, increasing anxiety in human interactions and decreasing decision-making and conflict resolution abilities at work. “They stop developing skills.”

Both psychologists agree that discussions about conflicts and mental health are more common among younger users, but in every case they observe a growing tendency toward dependence on AI. “They start with academic or professional inquiries and end up becoming dependent,” asserts Joel Cuéllar.

Adding to this is a “blind faith” in algorithmic responses, which produces cognitive wear that erodes critical thinking and analytical abilities, Cuéllar López warns.

Experts recommend a more cautious approach to chatbots. The suggestion is straightforward: avoid consulting them on personal matters, or on issues like anxiety, stress, or office conflicts that require professional psychological evaluation and support, because technology should not replace human connection.

The solution to workplace communication challenges, experts say, lies in strengthening interpersonal relationships based on trust and empathy. This can be reinforced with better education on appropriate technology use, so that workers know when to use it and when not to.