Cognitive dissonance in large language models is neither cognitive nor dissonant


Abstract

Lehr et al. (2025; hereafter LSHB) claim that GPT-4o exhibits “an analog form of humanlike cognitive selfhood,” based on its behavior in a classical cognitive dissonance paradigm (p. 1). In this brief commentary, we argue that this interpretation fundamentally mischaracterizes large language models (LLMs), and that the observed results entail neither “cognitive” nor “dissonant” effects. We highlight that seemingly irrelevant prompt features can influence LLM responses in extreme and unintuitive ways, and that such effects require no analog of a sense of self to explain. We additionally demonstrate that ChatGPT-4o’s evaluations are affected by essay valence independent of the essay’s authorship, suggesting a general sensitivity to prompt content rather than a specific dissonance between its authorship of an essay and its evaluation of that essay. We conclude that the alleged cognitive dissonance observed by LSHB in ChatGPT is neither cognitive nor dissonant.
