The Illusion of Empathy: Why Users Distrust GPT-4 Chatbots for Mental Health Screenings


Abstract

Conversational AI holds promise for scalable mental health screening, yet its impact on therapeutic alliance and user perceptions remains underexplored. This preregistered, cross-sectional, randomized mixed-methods experiment (N = 149) evaluated the effects of an empathic GPT-4-powered chatbot (Elli) versus a static PHQ-9/GAD-7 form on trust, comfort, empathy, and emotional disclosure. Participants were randomly assigned to either condition. Quantitative outcomes included measures of confidence, comfort, and perceived empathy. Qualitative feedback was analyzed thematically. Primary analyses employed independent t-tests and Mann–Whitney U tests, with Cohen's d effect sizes reported. Exploratory analyses included gender and age interactions, as well as mediation modeling. All analyses were conducted in Python and are openly available on GitHub. Trust in the Elli chatbot was significantly lower than in the static form (p = .004, d = –0.49). Comfort and empathy ratings showed no significant differences. Dropout analysis revealed no condition-related attrition (χ² = 0.37, p = .54). Qualitative feedback highlighted discomfort with artificial empathy and a perceived lack of human presence in the chatbot condition. No significant differences emerged for PHQ-9 or GAD-7 severity. Mediation analysis revealed that empathy did not account for the trust gap. Contrary to expectations, the GPT-4 chatbot reduced user trust compared to a static form. These findings suggest emotional authenticity may be more critical than simulated empathy in digital mental health tools.
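
The abstract states that the primary analyses (independent t-tests, Mann–Whitney U tests, Cohen's d) were run in Python, with the full code available on GitHub. A minimal sketch of such a between-group comparison might look like the following; the file name and the `condition` and `trust` column labels are hypothetical placeholders, not the authors' actual variable names.

```python
# Sketch of a primary between-group comparison: independent t-test,
# Mann-Whitney U test, and Cohen's d (pooled SD). Hypothetical data layout.
import numpy as np
import pandas as pd
from scipy import stats

def cohens_d(a, b):
    """Cohen's d for two independent samples using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

# Hypothetical tidy dataset: one row per participant, with a condition label
# and a trust rating.
df = pd.read_csv("responses.csv")
chatbot = df.loc[df["condition"] == "elli", "trust"]
static = df.loc[df["condition"] == "static_form", "trust"]

t_stat, p_t = stats.ttest_ind(chatbot, static)
u_stat, p_u = stats.mannwhitneyu(chatbot, static, alternative="two-sided")
d = cohens_d(chatbot, static)

print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_u:.3f}")
print(f"Cohen's d = {d:.2f}")
```

A negative d here would indicate lower mean trust in the chatbot condition than in the static form, which is the direction of effect reported in the abstract (d = –0.49).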
