Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#3510 Spotlight

ZeroS: Zero‑Sum Linear Attention for Efficient Transformers

NeurIPS Poster OpenReview

Abstract

Linear attention methods offer Transformers linear complexity but typically underperform standard softmax attention. We identify two fundamental limitations affecting these approaches: the restriction to convex combinations, which only permits additive information blending, and a uniform accumulated weight bias that dilutes attention in long contexts.
We propose Zero-Sum Linear Attention (ZeroS), which addresses these limitations by removing the constant zero-order term and reweighting the remaining zero-sum softmax residuals. This modification creates mathematically stable weights, enabling both positive and negative values and allowing a single attention layer to perform contrastive operations.
While maintaining linear complexity, ZeroS theoretically expands the set of representable functions compared to convex combinations. Empirically, it matches or exceeds standard softmax attention across various sequence modeling benchmarks.
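The core idea described above, removing the constant zero-order term from softmax weights so that the remaining residuals sum to zero and can be both positive and negative, can be sketched numerically. The snippet below is an illustrative reconstruction from the abstract's description only, not the paper's exact formulation; the function name `zero_sum_weights` and the scalar reweighting factor `alpha` are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def zero_sum_weights(scores, alpha=1.0):
    """Illustrative sketch (not the paper's exact method):
    subtract the uniform zero-order term 1/n from the softmax
    weights, leaving zero-sum residuals, then rescale them."""
    n = scores.shape[-1]
    w = softmax(scores)        # convex weights: non-negative, rows sum to 1
    residual = w - 1.0 / n     # zero-sum: entries can be positive or negative
    return alpha * residual    # reweighted residuals still sum to 0 per row

scores = np.random.randn(4, 8)   # toy query-key scores
w = zero_sum_weights(scores)
```

Because each row of `w` sums to zero, attending with these weights computes a signed, contrastive combination of values rather than a convex average, which is the capability the abstract attributes to a single ZeroS layer.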