2 papers across 2 sessions
We use in-context learning as weak supervision, adapter-tuning a student model to internalize the latent shifts that demonstrations induce, which enables efficient inference with improved generalization.
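A minimal sketch of this setup in plain PyTorch (all names, dimensions, and the toy architecture are hypothetical, not from the paper): a frozen base network stands in for the pretrained LM, a "teacher" forward pass sees a fixed demonstration context, and a LoRA-style adapter is trained so the student reproduces the teacher's outputs from the query alone.

```python
# Hypothetical sketch: distilling in-context behavior into adapter weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_CTX, D_QRY, D_HID, N_CLS = 32, 16, 64, 10  # toy sizes, chosen arbitrarily

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA-style) update."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no-op at start

    def forward(self, x, use_adapter=True):
        out = self.base(x)
        if use_adapter:
            out = out + x @ self.A.T @ self.B.T
        return out

class ToyModel(nn.Module):
    """Stand-in for a pretrained LM: everything frozen except the adapter."""
    def __init__(self):
        super().__init__()
        self.proj = LoRALinear(D_QRY, D_HID)
        self.ctx_proj = nn.Linear(D_CTX, D_HID)  # teacher-only context pathway
        self.head = nn.Linear(D_HID, N_CLS)
        for p in [*self.ctx_proj.parameters(), *self.head.parameters()]:
            p.requires_grad_(False)

    def forward(self, query, context=None, use_adapter=True):
        h = self.proj(query, use_adapter=use_adapter)
        if context is not None:  # demonstrations shift the latent state
            h = h + self.ctx_proj(context)
        return self.head(torch.relu(h))

model = ToyModel()
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
context = torch.randn(1, D_CTX)  # one fixed demonstration set for the task

for step in range(200):
    query = torch.randn(8, D_QRY)
    with torch.no_grad():  # teacher: frozen base model *with* demonstrations
        t_logits = model(query, context, use_adapter=False)
    s_logits = model(query)  # student: adapter only, *no* demonstrations
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Freezing everything but the low-rank adapter mirrors the efficiency claim: at inference time the student needs no demonstrations in its input, only the adapter weights.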
We use mechanistic interpretability to reverse-engineer how neural networks break protected cryptographic implementations via side-channel analysis.
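A toy sketch of what such reverse engineering can look like (synthetic traces, a made-up leakage point, and HW(pt XOR key) as a simplified stand-in for an S-box intermediate; nothing here is from the paper): train a small MLP on leaky traces, then probe its units and first-layer weights to recover which neurons encode the secret and which trace sample they read it from.

```python
# Hypothetical sketch: probing a side-channel attack network.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

rng = np.random.default_rng(0)
N_TRACES, TRACE_LEN, LEAK_IDX, KEY = 4096, 100, 37, 0x5A  # illustrative values

def hw(x: int) -> int:
    """Hamming weight of a byte -- a common side-channel leakage model."""
    return bin(x).count("1")

# Synthetic traces: Gaussian noise everywhere, plus leakage at one sample
# point proportional to HW(pt XOR key).
plaintexts = rng.integers(0, 256, N_TRACES)
labels = np.array([hw(int(p) ^ KEY) for p in plaintexts], dtype=np.int64)
traces = rng.normal(0.0, 1.0, (N_TRACES, TRACE_LEN)).astype(np.float32)
traces[:, LEAK_IDX] += labels

X, y = torch.from_numpy(traces), torch.from_numpy(labels)

# A small attack network trained to recover the secret-dependent intermediate.
net = nn.Sequential(nn.Linear(TRACE_LEN, 32), nn.ReLU(), nn.Linear(32, 9))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    idx = torch.randint(0, N_TRACES, (128,))
    loss = F.cross_entropy(net(X[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe 1: correlate each first-layer unit's pre-activation with the leakage
# target to find which units encode the secret.
with torch.no_grad():
    pre = net[0](X).numpy()  # (N_TRACES, 32)
corr = np.array([abs(np.corrcoef(pre[:, j], labels)[0, 1]) for j in range(32)])
top = int(corr.argmax())
print(f"most leakage-correlated unit: {top} (|r| = {corr[top]:.2f})")

# Probe 2: that unit's input weights should peak at the leaking sample,
# recovering *where* in the trace the network reads the secret.
w = net[0].weight.detach().numpy()
print("peak input sample for that unit:", int(np.abs(w[top]).argmax()),
      "-- true leak index:", LEAK_IDX)
```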