Professor, University of California, Berkeley
3 papers at NeurIPS 2025
We find a small set of neurons whose activations can be redirected at test time to mitigate high-norm artifacts in Vision Transformers.
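The summary above only sketches the intervention at a high level, so here is a toy illustration of the general idea, not the paper's actual method: given one layer's token activations, tokens whose norm exceeds a threshold are treated as artifacts, and a chosen subset of neurons is redirected to the mean activation of the normal tokens. The function name `redirect_neurons`, the threshold, and the toy data are all hypothetical.

```python
import numpy as np

def redirect_neurons(acts, neuron_idx, norm_thresh=5.0):
    """Sketch of test-time activation redirection.

    For tokens whose activation norm exceeds norm_thresh (the
    "high-norm artifacts"), overwrite the selected neurons' values
    with the per-neuron mean over the remaining normal tokens.
    acts: (num_tokens, num_neurons) activations of one layer.
    """
    out = acts.copy()
    norms = np.linalg.norm(acts, axis=1)
    high = norms > norm_thresh
    if high.any() and (~high).any():
        mean_normal = acts[~high][:, neuron_idx].mean(axis=0)
        out[np.ix_(high, neuron_idx)] = mean_normal
    return out

# toy example: token 0 is a high-norm artifact token
acts = np.array([[10.0, 10.0, 0.1],
                 [0.2, 0.1, 0.3],
                 [0.1, 0.2, 0.2]])
fixed = redirect_neurons(acts, neuron_idx=[0, 1], norm_thresh=5.0)
```

In a real Vision Transformer this kind of intervention would be applied inside the forward pass (e.g. via a forward hook on the target layer) rather than on a standalone array.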
A training-free approach that infers the order in which objects should be removed by exploiting statistical co-occurrence and asymmetry priors already learned by generative models.
We distill a slow, unlearning-based data attribution method into a feature embedding space, enabling efficient retrieval of highly influential training images.
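The retrieval step this summary describes can be sketched as nearest-neighbor search: once attribution scores have been distilled into an embedding space, influence is approximated by similarity, so the most influential training images for a query are its top-scoring neighbors. This is a minimal sketch under that assumption; `top_influential` and the random embeddings are illustrative, not the paper's implementation.

```python
import numpy as np

def top_influential(query_emb, train_embs, k=3):
    """Return indices and scores of the k training embeddings most
    similar to the query. Embeddings are assumed L2-normalized, so
    the dot product is cosine similarity."""
    scores = train_embs @ query_emb      # (num_train,) similarities
    order = np.argsort(-scores)          # descending similarity
    return order[:k], scores[order[:k]]

# toy setup: 100 training embeddings; the query is a slightly
# perturbed copy of training item 42
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 16))
train_embs /= np.linalg.norm(train_embs, axis=1, keepdims=True)
query_emb = train_embs[42] + 0.01 * rng.normal(size=16)
query_emb /= np.linalg.norm(query_emb)
idx, scores = top_influential(query_emb, train_embs, k=3)
```

The point of the distillation is that this lookup is a single matrix-vector product, versus rerunning an unlearning procedure per training example.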