Assistant Professor, University of California, San Diego
4 papers at NeurIPS 2025
An algorithm with a theoretically guaranteed convergence rate for optimizing a preference function over the Pareto set of a given set of objectives.
We prove that black-box variational inference with the mean-field Gaussian variational family converges at a rate with an explicit dimension dependence of only $O(\log d)$.
A statistically and computationally efficient reduction from approximate differential privacy (DP) to pure DP.