Mixture models for domain-adaptive brain decoding


Abstract

A grand challenge in brain decoding is to develop algorithms that generalize across multiple subjects and tasks. Here, we developed a new computational framework that minimizes negative transfer in domain-adaptive brain decoding by reframing source selection as a mixture model parameter estimation problem, allowing each source subject to contribute through a continuous mixture weight rather than being included or excluded outright. To compute these weights, we derived a novel convex optimization algorithm based on the Generalized Method of Moments (GMM). By using model performance metrics as the generalized moment functions in the GMM optimization, the algorithm also provides theoretical guarantees that the mixture weights are an optimal approximation of the importance weights that underlie domain adaptation theory. When tested on a large-scale brain decoding dataset (n=105 subjects), our mixture model weighting framework achieved state-of-the-art performance, increasing accuracy by up to 2.5% over baseline fine-tuning, twice the performance gain reported in previous research on supervised source selection. Notably, these improvements were achieved with significantly less training data (effective sample sizes 62% smaller), suggesting that the performance gains stem from reduced negative transfer. Collectively, this research advances toward a more principled and generalizable brain decoding framework, laying the mathematical foundation for scalable brain-computer interfaces and other applications in computational neuroscience.
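To make the core idea concrete, the sketch below illustrates the general shape of a GMM-style estimator for source mixture weights: a convex quadratic criterion over moment conditions, minimized on the probability simplex so that every source subject receives a continuous weight. All names (the per-source moment matrix `M`, the target moments `g_target`, and the metric counts) are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of GMM-based mixture weight estimation for source
# selection. Assumes we have, for each source subject, a vector of
# performance-based moment values, and target-domain moment estimates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sources, n_moments = 8, 5                    # e.g., 8 source subjects, 5 metrics
M = rng.normal(size=(n_sources, n_moments))    # moment value of each metric per source
g_target = rng.normal(size=n_moments)          # target-domain moment estimates
W = np.eye(n_moments)                          # GMM weighting matrix (identity here)

def gmm_objective(w):
    # Quadratic GMM criterion g(w)' W g(w), with g(w) = M'w - g_target.
    # This is convex in w, matching the convex optimization described above.
    g = M.T @ w - g_target
    return g @ W @ g

# Mixture weights live on the probability simplex: w_k >= 0 and sum_k w_k = 1,
# so each source contributes continuously rather than being in or out.
constraints = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
bounds = [(0.0, 1.0)] * n_sources
w0 = np.full(n_sources, 1.0 / n_sources)       # uniform initialization

res = minimize(gmm_objective, w0, method="SLSQP",
               bounds=bounds, constraints=constraints)
mixture_weights = res.x
print(np.round(mixture_weights, 3))
```

In this reading, sources whose moments align poorly with the target receive weights near zero, which is the mechanism by which such a scheme could curb negative transfer; the choice of weighting matrix `W` and of performance metrics as moment functions would govern how closely the recovered weights approximate the importance weights from domain adaptation theory.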
