PhD student, Tel Aviv University
1 paper at NeurIPS 2025
We show that prior LRP-based explainability methods for Transformers overlook the positional encoding, and we propose a new approach that propagates relevance through the positional components as well, yielding substantial gains on explainability benchmarks for both NLP and vision tasks.
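To make the core idea concrete, here is a minimal sketch (not the paper's actual implementation; all function and variable names are illustrative assumptions) of how LRP relevance arriving at an additive embedding layer, x = token_emb + pos_enc, can be split between the two addends with the epsilon rule instead of being attributed entirely to the token embedding:

```python
# Hedged sketch: splitting LRP relevance over an additive positional encoding.
# This illustrates the general idea only; the paper's actual propagation rules
# may differ. Names (lrp_split_additive, token_emb, pos_enc) are assumptions.

import torch

def lrp_split_additive(relevance: torch.Tensor,
                       token_emb: torch.Tensor,
                       pos_enc: torch.Tensor,
                       eps: float = 1e-6):
    """For x = token_emb + pos_enc, distribute incoming relevance R over the
    two addends via the LRP epsilon rule: R_a = a / (a + b + eps) * R."""
    total = token_emb + pos_enc
    # Stabilize the denominator with a sign-matched epsilon to avoid division
    # by values near zero (the standard epsilon-LRP stabilizer).
    denom = total + eps * torch.where(total >= 0,
                                      torch.ones_like(total),
                                      -torch.ones_like(total))
    r_token = token_emb / denom * relevance  # share credited to the token
    r_pos = pos_enc / denom * relevance      # share credited to the position
    return r_token, r_pos

if __name__ == "__main__":
    n, d = 4, 8
    emb, pos, R = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
    r_tok, r_pos = lrp_split_additive(R, emb, pos)
    # Conservation check: the two shares sum back (approximately) to R.
    print(torch.allclose(r_tok + r_pos, R, atol=1e-4))
```

Under this kind of rule, relevance flowing into the embedding layer is carried by both the token and the positional component, so positional information is no longer silently dropped from the explanation.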