2 papers across 2 sessions
We present NuCLR, a self-supervised framework that learns high-quality, population-aware neuron-level embeddings directly from spike-train data using a spatio-temporal transformer and a tailored contrastive loss.
Using simple DNNs with dendrite-inspired architectures, we show that single neurons exhibit a morphology-driven phase transition in learnability, with robustness–adaptability trade-offs that suggest new inductive biases for NeuroAI model design.