Poster Session 2 West
Wednesday, December 11, 2024 4:30 PM → 7:30 PM
Poster #7101

Transformers Represent Belief State Geometry in their Residual Stream

Adam Shai, Paul Riechers, Lucas Teixeira, Alexander Oldenziel, Sarah Marzen

Abstract

What computational structure are we building into large language models when we train them on next-token prediction? Here, we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. Leveraging the theory of optimal prediction, we anticipate and then find that belief states are linearly represented in the residual stream of transformers, even in cases where the predicted belief state geometry has highly nontrivial fractal structure. We investigate cases where the belief state geometry is represented in the final residual stream or distributed across the residual streams of multiple layers, providing a framework to explain these observations. Furthermore, we demonstrate that the inferred belief states contain information about the entire future, beyond the local next-token prediction that the transformers are explicitly trained on. Our work provides a general framework connecting the structure of training data to the geometric structure of activations inside transformers.
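Below is a minimal sketch, not the authors' code, of the kind of analysis the abstract describes: compute Bayesian belief states over the hidden states of a toy hidden Markov model that generates the token stream, then fit a linear probe from residual-stream activations to those belief states. The 3-state HMM parameters, the uniform prior, and the stand-in "activations" (a noisy linear embedding of the beliefs used here in place of a trained transformer's residual stream) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# --- Toy data-generating process: a 3-state HMM with 3 output tokens (assumed).
n_states, n_tokens = 3, 3
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # P(next hidden state | hidden state)
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])   # P(token | hidden state)
# Token-labeled transition matrices: T[x][i, j] = P(emit x, move to j | state i)
T = np.stack([B[:, x][:, None] * A for x in range(n_tokens)])

def sample_sequence(length):
    """Sample a token sequence from the HMM."""
    s = rng.integers(n_states)
    tokens = []
    for _ in range(length):
        x = rng.choice(n_tokens, p=B[s])
        tokens.append(x)
        s = rng.choice(n_states, p=A[s])
    return np.array(tokens)

def belief_states(tokens):
    """Bayesian filtering: belief over hidden states after each observed token."""
    b = np.full(n_states, 1.0 / n_states)   # uniform prior (assumption)
    beliefs = []
    for x in tokens:
        b = b @ T[x]        # joint update for "emitted x and transitioned"
        b = b / b.sum()     # renormalize to a point on the probability simplex
        beliefs.append(b.copy())
    return np.array(beliefs)

tokens = sample_sequence(5000)
beliefs = belief_states(tokens)             # shape (T, n_states): belief-state geometry

# Stand-in for residual-stream activations. In an actual experiment these would
# be activations read out at each position of a transformer trained on sequences
# from this process; here a noisy linear image of the beliefs plays that role.
d_model = 64
W_true = rng.normal(size=(n_states, d_model))
acts = beliefs @ W_true + 0.01 * rng.normal(size=(len(beliefs), d_model))

# Linear probe: regress belief-state coordinates from activations. A high R^2
# indicates the belief-state geometry is linearly represented in the activations.
probe = LinearRegression().fit(acts, beliefs)
print("probe R^2:", probe.score(acts, beliefs))
projected = probe.predict(acts)             # activations mapped into belief space
```

In this sketch the probe fits by construction; the substantive claim in the abstract is that the same linear decodability holds for real transformers trained only on next-token prediction, including processes whose belief-state geometry is fractal.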