Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#2110
LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis
Abstract
Electroencephalography (EEG) offers a non-invasive lens into human brain activity, but building large-scale models is hampered by topological heterogeneity: each public corpus defines its own electrode layout, limiting generalization. We introduce LUNA (Latent Unified Network Architecture), a self-supervised foundation model that reconciles disparate electrode geometries while scaling linearly, rather than quadratically, with channel count.
LUNA compresses multi-channel EEG into a fixed-size, topology-agnostic latent space via learned queries and cross-attention. Downstream transformer blocks then operate exclusively on this latent representation using patch-wise temporal self-attention, decoupling computation from electrode count.
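The channel-compression step above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the single-head bias-free attention, and the weight matrices are illustrative assumptions. The point it demonstrates is that a fixed set of learned queries cross-attending over per-channel tokens yields a latent whose shape does not depend on the electrode count.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_channels(channel_tokens, queries, Wq, Wk, Wv):
    """Cross-attend learned queries over per-channel tokens (illustrative).

    channel_tokens: (C, d) one embedding per electrode; C varies by montage
    queries:        (Q, d) learned queries, fixed count Q
    Returns a (Q, d) latent whose size is independent of C.
    """
    q = queries @ Wq
    k = channel_tokens @ Wk
    v = channel_tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)  # (Q, C)
    return attn @ v  # (Q, d)

# Two montages with different channel counts map to the same latent shape.
rng = np.random.default_rng(0)
d, n_queries = 8, 4
queries = rng.standard_normal((n_queries, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
latent_21 = compress_channels(rng.standard_normal((21, d)), queries, Wq, Wk, Wv)
latent_64 = compress_channels(rng.standard_normal((64, d)), queries, Wq, Wk, Wv)
```

Because the downstream transformer sees only the `(Q, d)` latent, attention cost there is constant in the number of electrodes, which is what decouples computation from montage size.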
Pre-trained on TUEG and Siena (21,000 h of raw EEG across diverse montages) using a masked-patch reconstruction objective, LUNA transfers effectively to four downstream tasks: abnormality detection, artifact rejection, slowing classification, and emotion recognition. It demonstrates highly competitive performance across several benchmarks, achieving state-of-the-art results on TUAR and TUSL, while reducing FLOPs and trimming GPU memory use.
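The masked-patch reconstruction objective can be sketched in a few lines. This is a hedged illustration, not LUNA's actual loss: the function name, mean-squared-error form, and loss-on-masked-patches-only convention are assumptions in the spirit of standard masked-autoencoding setups.

```python
import numpy as np

def masked_patch_loss(patches, recon, mask):
    """MSE over masked patches only (illustrative loss form).

    patches, recon: (N, P) ground-truth and reconstructed patch values
    mask:           (N,) boolean, True where the patch was hidden from the model
    """
    err = (recon - patches) ** 2
    return err[mask].mean()

# Toy check: reconstructions of all-ones patches as all-zeros give unit error.
patches = np.ones((5, 3))
recon = np.zeros((5, 3))
mask = np.array([True, False, True, False, False])
loss = masked_patch_loss(patches, recon, mask)  # 1.0
```

Restricting the loss to masked patches forces the model to infer hidden signal content from visible context, which is the self-supervision signal driving pre-training.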
Critically, these gains are consistent across all evaluated electrode configurations. Code is available at https://github.com/pulp-bio/biofoundation.