Assistant Professor, University of Alberta
2 papers at NeurIPS 2025
Interpretability methods that assume linear, orthogonal features fall short for modern neural representations, which are often hierarchical and nonlinear; better results come from aligning methods with the true structure of those representations.
A neural operator framework that maps biologically interpretable embeddings of neuron models to realistic neuronal responses, enabling fast generation of neuron-model ensembles that capture experimental variability.