Toyota Technological Institute at Chicago
2 papers at NeurIPS 2025
We derive simple generalization bounds for Markov training processes at any time during training, and then apply them to training with Langevin dynamics to improve existing bounds.
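As context for the kind of training process the bounds cover, here is a toy sketch of the standard (unadjusted) Langevin update on a least-squares problem: a gradient step plus Gaussian noise scaled by `sqrt(2*eta/beta)`, where `beta` is the inverse temperature. The problem setup, step size, and temperature are illustrative choices of ours, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative least-squares problem: L(w) = ||Xw - y||^2 / (2n).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad(w):
    """Gradient of the empirical least-squares loss."""
    return X.T @ (X @ w - y) / n

# Langevin dynamics: the noise term makes the iterates a Markov chain
# whose stationary distribution concentrates near low-loss parameters.
eta, beta = 0.05, 1e4
w = np.zeros(d)
for _ in range(2000):
    w = w - eta * grad(w) + np.sqrt(2 * eta / beta) * rng.normal(size=d)
```

After the loop, `w` sits near the least-squares solution, with fluctuations of order `1/sqrt(beta)`; taking `beta` large recovers plain gradient descent.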
We revisit single-index models and argue that spherical harmonics, not Hermite polynomials, are a natural basis. We characterize the complexity for any spherically symmetric input measure, and provide several new insights for the Gaussian case.
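For intuition on why spherical harmonics are natural here: when the input is spherically symmetric, the one-dimensional projection `<w, x>` pairs with Gegenbauer (ultraspherical) polynomials, the zonal parts of spherical harmonics, which are orthogonal under the marginal weight `(1 - t^2)^{(d-3)/2}` rather than the Gaussian weight that makes Hermite polynomials orthogonal. A minimal numerical check of that orthogonality (the dimension and degrees are our own illustrative choices, not from the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer

d = 5
alpha = (d - 2) / 2  # Gegenbauer index associated with the sphere S^{d-1}
C2, C4 = gegenbauer(2, alpha), gegenbauer(4, alpha)

# Marginal density of <w, x> for x uniform on the sphere, up to normalization.
weight = lambda t: (1 - t**2) ** ((d - 3) / 2)

cross, _ = quad(lambda t: C2(t) * C4(t) * weight(t), -1, 1)
norm2, _ = quad(lambda t: C2(t) ** 2 * weight(t), -1, 1)
# cross vanishes (orthogonality), while norm2 is strictly positive.
```

Replacing the weight with the Gaussian density would instead make the Hermite polynomials orthogonal, which is the classical Gaussian-specific choice the paper moves away from.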