MS student, East China Normal University
One paper accepted at NeurIPS 2025
We propose a data selection method that leverages sparse, monosemantic feature activations learned by a sparse autoencoder to improve task-specific instruction tuning of large language models.
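As an illustrative sketch only (not the paper's actual implementation), the core idea can be mimicked by scoring each candidate training example by how strongly it activates task-relevant sparse-autoencoder features and keeping the top-k. All names, dimensions, weights, and feature indices below are hypothetical stand-ins:

```python
import random

random.seed(0)

D_MODEL, D_FEATURES = 16, 64

# Toy "sparse autoencoder" encoder: a linear map followed by ReLU,
# yielding nonnegative (typically sparse) feature activations.
# Real SAE weights would be trained; these are random placeholders.
W_enc = [[random.gauss(0, 1) for _ in range(D_FEATURES)] for _ in range(D_MODEL)]
b_enc = [random.gauss(0, 1) for _ in range(D_FEATURES)]

def sae_activations(embedding):
    """ReLU(x @ W_enc + b_enc) for a single example embedding."""
    return [max(0.0, sum(x * W_enc[i][j] for i, x in enumerate(embedding)) + b_enc[j])
            for j in range(D_FEATURES)]

def select_by_task_features(embeddings, task_feature_ids, k):
    """Score each example by its total activation on the task-relevant
    features; return indices of the top-k highest-scoring examples."""
    scores = []
    for idx, emb in enumerate(embeddings):
        acts = sae_activations(emb)
        scores.append((sum(acts[j] for j in task_feature_ids), idx))
    scores.sort(reverse=True)
    return [idx for _, idx in scores[:k]]

# Usage: 100 candidate instruction examples (random stand-ins for real
# embeddings); keep the 10 most aligned with hypothetical features 3, 7, 11.
pool = [[random.gauss(0, 1) for _ in range(D_MODEL)] for _ in range(100)]
chosen = select_by_task_features(pool, [3, 7, 11], k=10)
print(len(chosen))  # 10
```

In practice the encoder would be a trained SAE over a language model's hidden states, and the task-relevant feature set would be identified from a small seed set of target-task examples rather than fixed by hand.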