We formalise quoting in conversation, release a training set and benchmark, and introduce a tiny adapter that lets LLMs exploit quoted spans with zero prompt overhead.
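A minimal sketch of how such a quote-span adapter might work, assuming the paper's "zero prompt overhead" means no tokens are added to the prompt; the class name, rank, and mask convention below are illustrative assumptions, not the paper's API. The idea: flag tokens inside quoted spans with a binary mask and add a learned low-rank shift to only those embeddings.

```python
import torch
import torch.nn as nn

class QuoteSpanAdapter(nn.Module):
    """Hypothetical adapter: learned low-rank shift applied to quoted tokens only."""
    def __init__(self, d_model: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)
        self.up = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op, leaving the base LLM unchanged

    def forward(self, hidden: torch.Tensor, quote_mask: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) token embeddings from the frozen LLM
        # quote_mask: (batch, seq) bool, True for tokens inside a quoted span
        shift = self.up(self.down(hidden))
        return hidden + shift * quote_mask.unsqueeze(-1).to(hidden.dtype)

# Toy usage: the prompt itself is untouched; only embeddings of quoted tokens are shifted.
emb = torch.randn(1, 6, 64)
mask = torch.tensor([[False, True, True, True, False, False]])
out = QuoteSpanAdapter(64)(emb, mask)
```

Because the quote signal enters through the embeddings rather than through extra marker tokens, the context window carries no additional prompt cost.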
We use in-context learning as weak supervision to train a student model that internalizes demonstration-induced latent shifts via adapter tuning, enabling efficient, demonstration-free inference with improved generalization.
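A minimal sketch of this distillation loop under stated assumptions: the teacher is the frozen model conditioned on demonstrations, the student is the same model with only an adapter trainable, and the student is trained on the bare query to match the teacher's predictions. The tiny stand-in model, toy data, and KL objective are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    """Stand-in language model (the real method would use a pretrained LLM)."""
    def __init__(self, vocab: int = 100, d: int = 32):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, vocab)
        self.adapter = nn.Linear(d, d, bias=False)  # the student's only trainable part
        nn.init.zeros_(self.adapter.weight)

    def forward(self, x: torch.Tensor, use_adapter: bool = False) -> torch.Tensor:
        h, _ = self.rnn(self.emb(x))
        if use_adapter:
            h = h + self.adapter(h)  # adapter absorbs the demonstration-induced shift
        return self.head(h)

model = TinyLM()
for p in model.parameters():
    p.requires_grad_(False)
model.adapter.weight.requires_grad_(True)

demos = torch.randint(0, 100, (1, 24))  # tokenised in-context demonstrations (toy data)
query = torch.randint(0, 100, (1, 8))   # the bare query

# Teacher pass: frozen model conditioned on demonstrations (the weak supervision).
with torch.no_grad():
    t_logits = model(torch.cat([demos, query], dim=1))[:, -query.size(1):]

# Student pass: same model, no demonstrations, adapter active.
s_logits = model(query, use_adapter=True)

# Distill the teacher's in-context predictions into the adapter.
loss = F.kl_div(F.log_softmax(s_logits, -1), F.softmax(t_logits, -1),
                reduction="batchmean")
loss.backward()
```

At inference the student runs on the bare query alone, so the demonstration tokens never enter the context window, which is where the efficiency gain comes from.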