Lightweight Transformer Models for Biomedical Signal Processing: Trends, Challenges, and Future Directions


Abstract

Transformers, initially transformative in natural language processing and later in vision, are now rapidly gaining traction in biomedical signal processing. Electrocardiograms (ECG), electroencephalograms (EEG), and electromyograms (EMG) are critical for diagnosing cardiovascular, neurological, and neuromuscular conditions. Yet canonical Transformers remain computationally heavy, hindering deployment on wearables and embedded devices that require real-time, low-power inference. This paper presents the first structured survey (2023–2025) of lightweight Transformer models for biosignals, introducing a novel taxonomy of efficiency strategies: architectural compaction, efficient attention, pruning, quantization, distillation, hybrid architectures, and hardware-aware neural architecture search. We review nearly 100 recent works across ECG, EEG, and EMG, providing a comparative analysis of model size, FLOPs/MACs, latency, and accuracy. A case study on the MIT-BIH dataset demonstrates that a compact Transformer achieves 98.40% accuracy with sub-millisecond latency, outperforming a CNN baseline and reducing false negatives, an improvement of particular clinical importance. To ensure reproducibility, the full implementation and training scripts are available in an open-access Google Colab notebook. We conclude with open challenges, including dataset scale and bias, subject-wise generalization, interpretability, and deployment trade-offs, and propose a roadmap toward multimodal, federated, and explainable biosignal Transformers optimized for next-generation digital health.
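
To make the notion of a "compact Transformer" for beat-level ECG classification concrete, the sketch below shows one plausible architectural-compaction design in PyTorch: a narrow encoder with a convolutional patching stem that shortens the sequence seen by self-attention. The hyperparameters (187-sample beats, d_model = 64, 2 layers, 4 heads, 5 AAMI classes) are illustrative assumptions and not the configuration reported in the paper or its Colab notebook.

```python
# Minimal sketch of a compact Transformer encoder for single-beat ECG
# classification (e.g., MIT-BIH). All hyperparameters are illustrative
# assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn


class CompactECGTransformer(nn.Module):
    def __init__(self, seq_len=187, d_model=64, n_heads=4,
                 n_layers=2, n_classes=5):
        super().__init__()
        # Patch the 1-D signal into non-overlapping windows so self-attention
        # operates on ~23 tokens instead of 187 samples (architectural compaction).
        self.patch = nn.Conv1d(1, d_model, kernel_size=8, stride=8)
        n_tokens = seq_len // 8
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=2 * d_model,
            dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, 1, seq_len)
        z = self.patch(x)                 # (batch, d_model, n_tokens)
        z = z.transpose(1, 2) + self.pos  # (batch, n_tokens, d_model)
        z = self.encoder(z)
        return self.head(z.mean(dim=1))   # mean-pool tokens, then classify


if __name__ == "__main__":
    model = CompactECGTransformer()
    beats = torch.randn(32, 1, 187)       # synthetic batch of ECG beats
    logits = model(beats)                 # (32, 5)
    n_params = sum(p.numel() for p in model.parameters())
    print(logits.shape, f"{n_params / 1e3:.1f}k parameters")
```

At this scale the model stays in the low hundreds of thousands of parameters, which is the regime where sub-millisecond per-beat inference on embedded hardware becomes plausible; the survey's taxonomy covers further reductions via pruning, quantization, and distillation of such a backbone.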
