Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#3906
Exact Expressive Power of Transformers with Padding
Abstract
Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but it comes at the cost of sequential decoding. Are there more efficient alternatives that expand a transformer's expressive power without adding parameters?
We consider transformers with padding tokens as a form of parallelizable test-time compute. We show that averaging-hard-attention, masked-pre-norm transformers with polynomial padding recognize precisely the class FO-uniform TC^0 of extremely parallelizable problems. While the upper bound was known, proving a matching lower bound had been elusive. Further, our novel analysis reveals the precise expanded power of padded transformers when coupled with another form of inference-time compute, namely dynamically increasing depth via looping. Our core technical contribution is to show how padding helps bring the notions of complete problems and reductions, which have been a cornerstone of classical complexity theory, to the formal study of transformers. Armed with this new tool, we prove that padded transformers with O(log^d n) looping on inputs of length n recognize exactly the class FO-uniform TC^d of moderately parallelizable problems.
Thus, padding and looping together systematically expand transformers' expressive power: with polylogarithmic looping, polynomially padded transformers recognize precisely the class FO-uniform NC, the best that could be expected without losing parallelism (unless NC = P). Our results motivate further exploration of padding and looping as parallelizable alternatives to chain of thought for test-time compute.
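The two mechanisms discussed above can be illustrated with a minimal sketch. This is not the paper's construction: `toy_layer` is a hypothetical stand-in for a fixed transformer layer, and the padding polynomial and loop count are assumed parameters chosen only to show the shape of the idea (parallel scratch positions from padding, dynamic depth from looping, no new parameters).

```python
import math

def toy_layer(states):
    # Fixed-parameter "layer": a uniform averaging step standing in for
    # averaging-hard-attention followed by a pointwise update. The same
    # parameters are reused on every loop iteration.
    avg = sum(states) / len(states)
    return [0.5 * s + 0.5 * avg for s in states]

def padded_looped_forward(tokens, pad_value=0.0, pad_poly=2, loop_exp=1):
    n = len(tokens)
    # Polynomial padding: append n**pad_poly blank tokens, giving the model
    # n + n**pad_poly positions of parallel scratch space at inference time.
    states = list(tokens) + [pad_value] * (n ** pad_poly)
    # Polylogarithmic looping: reapply the same layer O(log^loop_exp n)
    # times, so effective depth grows with input length.
    loops = max(1, math.ceil(math.log2(max(n, 2))) ** loop_exp)
    for _ in range(loops):
        states = toy_layer(states)
    return states

out = padded_looped_forward([1.0, 2.0, 3.0])
print(len(out))  # 3 input positions + 3**2 = 9 pad positions -> 12 states
```

Both knobs are test-time choices: the width of the scratch space and the number of loop iterations scale with the input length, while the layer's parameters stay fixed.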