Poster Session 5 · Friday, December 5, 2025 11:00 AM → 2:00 PM
#4002
Learning quadratic neural networks in high dimensions: SGD dynamics and scaling laws
Abstract
We study the optimization and sample complexity of gradient-based training of a two-layer neural network with quadratic activation function in the high-dimensional regime, where the data is generated as $y = \sum_{j=1}^{m} a_j \, \mathrm{He}_2(\langle u_j, x \rangle)$, where $\mathrm{He}_2(z) = z^2 - 1$ is the 2nd Hermite polynomial, and $u_1, \dots, u_m \in \mathbb{R}^d$ are orthonormal signal directions.
We consider the extensive-width regime $m \asymp d$, and assume a power-law decay $a_j \asymp j^{-\beta}$, $\beta > 0$, on the (non-negative) second-layer coefficients.
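As an illustration, the assumed data-generating model (orthonormal signal directions, second Hermite polynomial, power-law second-layer coefficients) can be simulated in a few lines; the concrete symbols and hyperparameter values below (`d`, `m_star`, `beta`, Gaussian inputs) are assumptions for the sketch, not taken from the abstract.

```python
# Hypothetical sketch of the data model y = sum_j a_j * He_2(<u_j, x>),
# with He_2(z) = z^2 - 1, orthonormal directions u_j, and a_j ~ j^{-beta}.
import numpy as np

def sample_data(n, d, m_star, beta, rng):
    # Orthonormal signal directions: reduced QR of a Gaussian matrix.
    u, _ = np.linalg.qr(rng.standard_normal((d, m_star)))  # (d, m_star)
    a = np.arange(1, m_star + 1) ** (-beta)  # non-negative, power-law decay
    x = rng.standard_normal((n, d))          # isotropic Gaussian inputs (assumption)
    z = x @ u                                # projections <u_j, x>
    y = (z ** 2 - 1) @ a                     # 2nd Hermite polynomial He_2(z) = z^2 - 1
    return x, y, u, a

rng = np.random.default_rng(0)
x, y, u, a = sample_data(n=1000, d=50, m_star=10, beta=1.5, rng=rng)
```

The QR factorization guarantees exactly orthonormal columns, matching the orthonormality assumption on the signal directions.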
We provide a sharp analysis of the SGD dynamics in the feature learning regime, for both the population limit and the finite-sample (online) discretization, and derive scaling laws for the prediction risk that highlight the power-law dependencies on the optimization time, the sample size, and the model width.
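A minimal sketch of the online (one fresh sample per step) SGD discretization for a width-$m$ quadratic-activation student is given below; the parametrization (fixed second layer, squared loss, learning rate, initialization scale) is an illustrative assumption, not the paper's exact setup.

```python
# Illustrative online SGD for a student f(x) = sum_i c_i * <w_i, x>^2
# trained on squared loss with one fresh Gaussian sample per step.
import numpy as np

def online_sgd(m, d, lr, steps, target, rng):
    w = rng.standard_normal((m, d)) / np.sqrt(d)  # first-layer weights
    c = np.ones(m) / m                            # fixed second layer (assumption)
    for _ in range(steps):
        x = rng.standard_normal(d)                # fresh sample: online regime
        pre = w @ x                               # preactivations <w_i, x>
        err = c @ pre ** 2 - target(x)            # prediction error
        # gradient of 0.5 * err^2 w.r.t. w_i is err * c_i * 2 * pre_i * x
        w -= lr * err * (2 * c * pre)[:, None] * x[None, :]
    return w

w = online_sgd(m=4, d=20, lr=0.01, steps=200,
               target=lambda x: x[0] ** 2 - 1,
               rng=np.random.default_rng(1))
```

In the online regime each sample is used once, so optimization time and sample size coincide, which is what makes the joint scaling laws in time, samples, and width meaningful.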
Our analysis combines a precise characterization of the associated matrix Riccati differential equation with novel matrix monotonicity arguments to establish convergence guarantees for the infinite-dimensional effective dynamics.
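For readers unfamiliar with the object named above: a matrix Riccati differential equation is a matrix-valued ODE with a quadratic nonlinearity, of the generic form (the coefficient matrices of the paper's specific equation are not stated in this abstract)

```latex
\frac{\mathrm{d}X(t)}{\mathrm{d}t} \;=\; A\,X(t) \;+\; X(t)\,A^{\top} \;-\; X(t)\,B\,X(t) \;+\; C ,
```

where $X(t)$ here would track second-order summary statistics of the weights along the effective dynamics.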