Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#904

On the Convergence Rate of AdamW Measured by $\ell_1$ Norm

NeurIPS Slides Poster OpenReview

Abstract

As the default optimizer for training large language models, AdamW has achieved remarkable success in deep learning. However, its convergence behavior is not theoretically well-understood.
This paper establishes the convergence rate $O\!\left(\frac{\sqrt{d}\,C}{K^{1/4}}\right)$ for AdamW measured by the $\ell_1$ norm, where $K$ represents the iteration number, $d$ denotes the model dimension, and $C$ matches the constant in the optimal convergence rate of SGD.
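For context, here is a minimal sketch of the standard AdamW update (Adam moment estimates plus decoupled weight decay, following Loshchilov & Hutter) that the analysis concerns; the hyperparameter values are common illustrative defaults, not taken from the paper.

```python
import numpy as np

def adamw_step(x, g, m, v, k, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW step on parameters x given stochastic gradient g.

    m, v: running first/second moment estimates; k: 1-based iteration count.
    """
    m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
    m_hat = m / (1 - beta1 ** k)           # bias corrections
    v_hat = v / (1 - beta2 ** k)
    # Decoupled weight decay: applied directly to x, not folded into g.
    x = x - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * x)
    return x, m, v
```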
Theoretically, we have $\mathbb{E}\left[\|\nabla f(x)\|_1\right] \geq \sqrt{\frac{2d}{\pi}}\,\mathbb{E}\left[\|\nabla f(x)\|_2\right]$ when each element of $\nabla f(x)$ is generated from a Gaussian distribution $\mathcal{N}(0,\sigma^2)$. Empirically, our experimental results on real-world deep learning tasks reveal $\|\nabla f(x)\|_1 = \Theta(\sqrt{d})\,\|\nabla f(x)\|_2$. Both support that our convergence rate can be considered analogous to the optimal $O\!\left(\frac{C}{K^{1/4}}\right)$ convergence rate of SGD measured by the $\ell_2$ norm.
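As a quick illustration of this norm relationship (a Monte Carlo sketch, not code from the paper): for a Gaussian vector $g \in \mathbb{R}^d$ with i.i.d. entries, the ratio $\|g\|_1 / \|g\|_2$ concentrates near $\sqrt{2d/\pi}$, since $\mathbb{E}|g_i| = \sigma\sqrt{2/\pi}$ while $\|g\|_2 \approx \sigma\sqrt{d}$.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (10**2, 10**3, 10**4):
    g = rng.normal(size=(1000, d))  # 1000 samples of g ~ N(0, I_d)
    # ||g||_1 / ||g||_2 per sample, compared against sqrt(2d/pi)
    ratio = np.abs(g).sum(axis=1) / np.linalg.norm(g, axis=1)
    print(d, ratio.mean(), np.sqrt(2 * d / np.pi))
```

The printed mean ratio matches $\sqrt{2d/\pi}$ to within sampling error, consistent with the $\Theta(\sqrt{d})$ scaling between the $\ell_1$ and $\ell_2$ gradient norms that the abstract reports.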