PhD student, University of Maryland, College Park
1 paper at NeurIPS 2025
This paper introduces an unsupervised method for disentangling interpretable latent concepts in language model activations that mediate behavior, under the assumption that sparse changes to these concepts can induce corresponding changes in model behavior.