Data-Driven Reduced Modeling of Recurrent Neural Networks
Abstract
Vanilla Recurrent Neural Networks (RNNs) are widely used in neuroscience to model the collective activity of neurons during behavioral tasks. Their high dimensionality and lack of interpretability, however, often obscure the fundamental features of their dynamics. In this study, we employ recent techniques from nonlinear dynamical systems to uncover the core dynamics of several RNNs used in contemporary neuroscience. Specifically, using a data-driven approach, we identify Spectral Submanifolds (SSMs), i.e., low-dimensional attracting invariant manifolds tangent to the eigenspaces of fixed points. The internal dynamics of SSMs serve as nonlinear models that reduce the dimension of the full RNNs by orders of magnitude. Through low-dimensional SSM-reduced models, we give mathematically precise definitions of line and ring attractors, intuitive concepts commonly invoked to explain decision-making and working memory. The understanding of RNNs gained from SSM reduction thus enables the interpretation of neuronal dynamics in terms of mathematically well-defined, robust structures.
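To make the data-driven reduction pipeline described above concrete, the following is a minimal illustrative sketch, not the paper's actual method: it simulates a small random vanilla RNN relaxing toward a fixed point, extracts a two-dimensional subspace from the trajectory data via SVD (a stand-in for the tangent space of the slowest SSM), and fits a cubic polynomial reduced model of the dynamics in those coordinates. All network sizes, parameters, and the polynomial order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # full RNN dimension (illustrative)
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)  # random weight matrix

def rnn_step(x, dt=0.05):
    # Euler step of a continuous-time vanilla RNN: x' = -x + W tanh(x)
    return x + dt * (-x + W @ np.tanh(x))

# 1) Collect decaying trajectories from random initial conditions.
trajs = []
for _ in range(10):
    x = rng.standard_normal(n)
    traj = [x]
    for _ in range(200):
        x = rnn_step(x)
        traj.append(x.copy())
    trajs.append(np.array(traj))

# 2) Leading 2-D subspace of the data: a linear proxy for the slow SSM's
#    tangent space at the fixed point (here, the origin region).
data = np.vstack(trajs)
mu = data.mean(axis=0)
U = np.linalg.svd(data - mu, full_matrices=False)[2][:2]   # shape (2, n)
Ys = [(t - mu) @ U.T for t in trajs]                        # reduced coords

# 3) Fit a discrete reduced model y_{k+1} = f(y_k) by least squares over
#    cubic polynomial features, pairing states within each trajectory.
def features(y):
    y1, y2 = y[..., 0], y[..., 1]
    return np.stack([y1, y2, y1**2, y1 * y2, y2**2,
                     y1**3, y1**2 * y2, y1 * y2**2, y2**3], axis=-1)

Yk = np.vstack([y[:-1] for y in Ys])
Yk1 = np.vstack([y[1:] for y in Ys])
coef, *_ = np.linalg.lstsq(features(Yk), Yk1, rcond=None)

err = np.linalg.norm(features(Yk) @ coef - Yk1) / np.linalg.norm(Yk1)
print(f"relative one-step prediction error of reduced model: {err:.3f}")
```

The reduced model replaces the 50-dimensional network with a 2-dimensional polynomial map, mirroring in spirit (though not in rigor) the SSM-reduced models of the abstract; a faithful SSM computation would also identify the nonlinear manifold parametrization rather than a flat subspace.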