Hybrid Neural-Cognitive Models Reveal Flexible Context-Dependent Information Processing in Reversal Learning

Abstract

Reversal learning tasks provide a key paradigm for studying behavioral flexibility, requiring individuals to update their choices in response to shifting reward contingencies. While reinforcement learning (RL) models have been widely used to provide interpretable accounts of human behavior on such tasks, recent work has shown that they often fail to fully capture the complexity of learning dynamics. In contrast, artificial neural networks (ANNs) often achieve substantially higher predictive accuracy but lack the interpretability afforded by classical RL models.
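
To make the modeling contrast concrete, the sketch below shows the kind of delta-rule RL agent typically fit to two-armed reversal tasks; the learning rate, softmax temperature, reward probabilities, and reversal point are illustrative assumptions rather than values from this study.

```python
import numpy as np

def simulate_rl_reversal(n_trials=200, alpha=0.3, beta=5.0,
                         reversal_at=100, seed=0):
    """Delta-rule (Q-learning) agent on a two-armed reversal task.

    alpha: learning rate; beta: softmax inverse temperature.
    Reward probabilities of the two arms swap at trial `reversal_at`.
    All parameter values here are illustrative, not fitted.
    """
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.8, 0.2])   # initial reward contingencies
    q = np.zeros(2)                   # action values
    choices, rewards = [], []
    for t in range(n_trials):
        if t == reversal_at:
            p_reward = p_reward[::-1]            # contingency reversal
        probs = np.exp(beta * q)
        probs /= probs.sum()                     # softmax choice policy
        a = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[a])
        q[a] += alpha * (r - q[a])               # delta-rule value update
        choices.append(a)
        rewards.append(r)
    return np.array(choices), np.array(rewards)
```

Fitting the learning rate and inverse temperature per participant by maximum likelihood is the standard way such baselines are compared against neural networks on predictive accuracy.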

To bridge this gap, we introduce HybridRNNs, neural-cognitive models that integrate RL-inspired structure with flexible recurrent architectures. Among them, Context-ANN incorporates latent reward history and choice perseverance, and shows improved alignment with human behavior compared to traditional RL models across two datasets. While it does not perfectly replicate human strategies, Context-ANN achieves predictive accuracy comparable to that of generic RNNs while offering interpretable value representations. Additional analyses of hidden dynamics reveal structured, context-sensitive internal states that adapt following reversals.
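
The abstract does not specify Context-ANN's exact architecture; the sketch below only illustrates the general hybrid idea, assuming a recurrent core whose inputs carry the previous choice and reward (capturing reward history and perseverance) and whose hidden state is read out through a linear value head. The class name, layer sizes, and input encoding are hypothetical.

```python
import torch
import torch.nn as nn

class ContextANNSketch(nn.Module):
    """Hypothetical hybrid neural-cognitive model: a recurrent core that
    consumes the previous choice and reward, plus a linear readout that
    yields per-action values interpretable like RL action values."""

    def __init__(self, n_actions: int = 2, hidden_size: int = 16):
        super().__init__()
        # Input per trial: one-hot previous choice + scalar previous reward.
        self.core = nn.GRUCell(n_actions + 1, hidden_size)
        self.value_head = nn.Linear(hidden_size, n_actions)

    def forward(self, prev_choice: torch.Tensor, prev_reward: torch.Tensor,
                h: torch.Tensor):
        x = torch.cat([prev_choice, prev_reward], dim=-1)
        h = self.core(x, h)              # context-sensitive hidden state
        values = self.value_head(h)      # interpretable per-action values
        return values, h

# Usage: one simulated trial for a batch of one.
model = ContextANNSketch()
h = torch.zeros(1, 16)
prev_choice = torch.tensor([[1.0, 0.0]])      # chose action 0 last trial
prev_reward = torch.tensor([[1.0]])           # and was rewarded
values, h = model(prev_choice, prev_reward, h)
choice_probs = torch.softmax(values, dim=-1)  # softmax policy over values
```

Reading choice probabilities off a value layer, rather than an opaque logit head, is what lets such a hybrid retain RL-style interpretability while the recurrent state captures context beyond a fixed learning rule.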

These results suggest that humans may rely on more flexible, context-sensitive learning strategies than classical RL models assume, even in simple reversal tasks, and they highlight the potential of HybridRNNs as cognitive models that balance interpretability with predictive accuracy.
