We use in-context learning as weak supervision: a teacher conditioned on demonstrations supervises a student model that internalizes the demonstration-induced latent shifts through adapter tuning, so the student can run without demonstrations in context, enabling efficient inference with improved generalization.
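The idea above can be sketched as a distillation loop. This is a minimal toy illustration, not the paper's implementation: the "backbone" is a single frozen linear layer, the teacher's in-context conditioning is approximated as a shift of the query representation by the mean demonstration embedding, and the student's adapter (a low-rank map plus a bias, all hypothetical names) is trained to minimize the KL divergence to the teacher's output distribution.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy dimensions (assumptions, not from the source)
d, n_demo, vocab = 8, 4, 10
W = torch.randn(vocab, d)  # frozen shared "backbone": one linear layer


def base_logits(x):
    # Frozen backbone shared by teacher and student
    return x @ W.t()


def teacher_logits(query, demos):
    # ICL proxy: demonstrations shift the query's latent representation
    return base_logits(query + demos.mean(dim=0))


# Student adapter: low-rank map (A @ B) plus a bias, zero-initialized so
# the student starts out identical to the bare backbone
A = torch.zeros(d, 2, requires_grad=True)
B = (0.1 * torch.randn(2, d)).requires_grad_()
b = torch.zeros(d, requires_grad=True)


def student_logits(query):
    # Student sees only the query; the adapter must absorb the
    # demonstration-induced shift
    return base_logits(query + query @ A @ B + b)


demos = torch.randn(n_demo, d)
queries = torch.randn(32, d)
opt = torch.optim.Adam([A, B, b], lr=0.05)


def kl_to_teacher():
    # Distillation objective: KL(teacher || student) over query batch
    with torch.no_grad():
        t = F.log_softmax(teacher_logits(queries, demos), dim=-1)
    s = F.log_softmax(student_logits(queries), dim=-1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean")


before = kl_to_teacher().item()
for _ in range(200):
    opt.zero_grad()
    loss = kl_to_teacher()
    loss.backward()
    opt.step()
after = kl_to_teacher().item()
print(f"KL before adapter tuning: {before:.4f}, after: {after:.4f}")
```

After tuning, the student matches the demonstration-conditioned teacher without ever seeing demonstrations at inference time; in a real setting the backbone would be a pretrained LM and the adapter something like LoRA, with the same KL-matching objective.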