PhD student, Swiss Federal Institute of Technology Lausanne
2 papers at NeurIPS 2025
We propose LION, a framework that extends Linear Transformers to the bidirectional setting through three theoretically equivalent representations: full attention, a bidirectional RNN, and a chunkwise parallel form.
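As a toy illustration of the attention/RNN equivalence underlying this line of work (a simplified sketch, not LION's actual formulation, which handles normalization, decay, and chunking): for unnormalized bidirectional linear attention, the full-attention output equals the sum of a forward and a backward cumulative state, with the diagonal term counted once.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))

# Full-attention form: O = (Q K^T) V, no causal mask (bidirectional).
O_attn = (Q @ K.T) @ V

# Bidirectional-RNN form: forward state F[t] = sum_{j<=t} k_j v_j^T,
# backward state B[t] = sum_{j>=t} k_j v_j^T; F[t] + B[t] double-counts
# the diagonal term k_t v_t^T, so it is subtracted once.
F = np.zeros((T, d, d))
B = np.zeros((T, d, d))
S = np.zeros((d, d))
for t in range(T):
    S += np.outer(K[t], V[t])
    F[t] = S
S = np.zeros((d, d))
for t in reversed(range(T)):
    S += np.outer(K[t], V[t])
    B[t] = S
O_rnn = np.stack([Q[t] @ (F[t] + B[t] - np.outer(K[t], V[t]))
                  for t in range(T)])

assert np.allclose(O_attn, O_rnn)
```

The RNN form needs only two O(T) sweeps with an O(d^2) state, which is the efficiency argument for recurrent inference over materializing the T-by-T attention matrix.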
We propose an efficient strategy for adversarial finetuning of the CLIP text encoder, enabling robustness in zero-shot classification, text-to-image retrieval, and text-to-image generation.