We show that prior LRP-based explainability methods for Transformers overlook the positional encoding, and we propose a new approach that also propagates relevance through the positional components, yielding substantial gains in explanation quality on both NLP and vision tasks.
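A minimal sketch of the idea of routing relevance into positional components: at an additive embedding layer (token embedding plus positional encoding), a standard epsilon-LRP rule splits the incoming relevance in proportion to each component's contribution. This is an illustrative assumption, not the authors' actual propagation rule; the function and variable names (`lrp_split_additive`, `e_tok`, `e_pos`) are hypothetical.

```python
import numpy as np

def lrp_split_additive(e_tok, e_pos, relevance, eps=1e-9):
    """Split incoming relevance between token and positional components.

    Assumes the layer computes z = e_tok + e_pos and applies the
    epsilon-LRP rule for an additive junction: each input receives
    relevance in proportion to its contribution to z.
    """
    z = e_tok + e_pos
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer to avoid division by zero
    r_tok = (e_tok / denom) * relevance            # relevance routed to token embedding
    r_pos = (e_pos / denom) * relevance            # relevance routed to positional encoding
    return r_tok, r_pos

# Toy usage: one token with a 4-dimensional embedding.
e_tok = np.array([0.5, -0.2, 0.1, 0.7])
e_pos = np.array([0.1, 0.3, -0.4, 0.2])
relevance = np.array([0.25, 0.25, 0.25, 0.25])
r_tok, r_pos = lrp_split_additive(e_tok, e_pos, relevance)
print(r_tok.sum() + r_pos.sum())  # conservation: approximately equals relevance.sum()
```

The point illustrated is conservation: relevance assigned to the positional component is no longer silently dropped, which is the gap the paper identifies in prior LRP-based Transformer explanations.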