Assistant Professor, University of Hong Kong
2 papers at NeurIPS 2025
We prove generalization bounds for neural networks that exploit approximate low-rank structure in the weights.
We establish an optimal risk bound of $1/(\gamma^2 n)$ for gradient descent (GD) with deep ReLU networks.