Poster Session 3 · Thursday, December 4, 2025, 11:00 AM – 2:00 PM
#3900 Spotlight
Characterizing the Expressivity of Fixed-Precision Transformer Language Models
Abstract
Transformer-based language models (LMs) have achieved widespread empirical success, but their theoretical expressive power remains only partially understood.
In this work, we analyze a restricted idealization of fixed-precision transformers with strict future masking, soft attention, and no positional encodings.
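For intuition, the following is a minimal NumPy sketch (not the paper's construction) of the attention pattern this idealization describes: soft (softmax) attention, strict future masking so that position i attends only to positions j < i, and no positional encodings. The rounding step is only a crude stand-in for fixed precision; the exact precision model is an assumption here.

    import numpy as np

    def strictly_masked_soft_attention(X, Wq, Wk, Wv, decimals=4):
        """X: (T, d) token embeddings; no positional encodings are added."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = (Q @ K.T) / np.sqrt(K.shape[-1])
        out = np.zeros((X.shape[0], V.shape[1]))
        for i in range(1, X.shape[0]):
            s = scores[i, :i]                 # strict masking: only positions j < i
            w = np.exp(s - s.max())
            w = w / w.sum()                   # soft (softmax) attention weights
            out[i] = w @ V[:i]
        return np.round(out, decimals)        # crude stand-in for fixed precision

    rng = np.random.default_rng(0)
    T, d = 5, 8
    X = rng.normal(size=(T, d))               # token embeddings only, no positions
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Y = strictly_masked_soft_attention(X, Wq, Wk, Wv)   # position 0 attends to nothing and outputs zeros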
We establish that this class of models is exactly as expressive as a specific fragment of linear temporal logic that contains only a single temporal operator: the past operator. We further connect this fragment to established classes in formal language theory, automata theory, and algebra, yielding a unified framework for understanding transformer expressivity under this idealization.
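As a rough illustration only, one standard reading of such a one-operator past fragment (the paper's exact grammar and semantics may differ) takes Boolean combinations of letters closed under a single operator P:

    \varphi ::= a \mid \neg\varphi \mid \varphi \wedge \varphi \mid \mathsf{P}\,\varphi
    w, i \models \mathsf{P}\,\varphi \iff \exists j < i \,.\; w, j \models \varphi

Under this reading, the strict inequality j < i mirrors strict future masking: a position's value may depend only on strictly earlier positions.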
Finally, we present empirical results that align closely with our theory: transformers trained on languages within their characterized expressive capacity generalize reliably across sequence lengths, whereas on languages beyond that capacity they consistently fail to generalize to longer sequences.
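The evaluation described in the last sentence is a length-generalization protocol: train on strings up to some length and test on strictly longer ones. The sketch below uses toy placeholder languages of our own choosing, not the paper's benchmarks: "a 'b' has occurred" is definable with a single past operator (b or P b), while parity of the number of 'b's is a standard example of a language beyond such a fragment.

    import random

    def sample_string(min_len, max_len, rng):
        return "".join(rng.choice("ab") for _ in range(rng.randint(min_len, max_len)))

    # 1 iff a 'b' occurs at or before the final position: expressible as  b or P(b).
    def label_in_fragment(s):
        return int("b" in s)

    # Parity of the number of 'b's: requires counting mod 2.
    def label_beyond_fragment(s):
        return s.count("b") % 2

    def make_length_split(label_fn, n=1000, train_max=20, test_max=60, seed=0):
        """Train on lengths <= train_max; test only on strictly longer strings."""
        rng = random.Random(seed)
        train = [(s, label_fn(s)) for s in (sample_string(1, train_max, rng) for _ in range(n))]
        test = [(s, label_fn(s)) for s in (sample_string(train_max + 1, test_max, rng) for _ in range(n))]
        return train, test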