Full Professor, Columbia University
2 papers at NeurIPS 2025
This work proposes a Bayesian framework with variational inference that adapts the prior and posterior to covariate shifts, improving uncertainty estimates by capturing predictive reliability rather than just input dissimilarity.
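To make the uncertainty claim concrete, here is a minimal, generic sketch of Bayesian predictive uncertainty under covariate shift: conjugate Bayesian linear regression whose posterior predictive variance grows on inputs far from the training distribution. This is textbook material used purely for illustration; it is not the paper's adaptive prior/posterior construction, and all names and hyperparameters (`alpha`, `beta`) are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: y = 2x + noise, inputs concentrated near 0.
X = rng.normal(0.0, 1.0, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, size=50)

# Conjugate Bayesian linear regression: Gaussian prior N(0, alpha^-1 I),
# Gaussian noise with precision beta (illustrative values).
alpha, beta = 1.0, 100.0
Phi = np.hstack([X, np.ones((X.shape[0], 1))])  # features: [x, bias]
S_inv = alpha * np.eye(2) + beta * Phi.T @ Phi  # posterior precision
S = np.linalg.inv(S_inv)                        # posterior covariance
m = beta * S @ Phi.T @ y                        # posterior mean

def predictive_var(x):
    """Posterior predictive variance at scalar input x."""
    phi = np.array([x, 1.0])
    return 1.0 / beta + phi @ S @ phi

# Predictive variance is larger on a shifted (far-from-training) input,
# signalling lower predictive reliability under covariate shift.
var_in = predictive_var(0.0)   # in-distribution input
var_out = predictive_var(5.0)  # covariate-shifted input
```

In this baseline, uncertainty tracks input dissimilarity through the feature covariance; the abstract's point is that the proposed framework goes further, adapting the prior and posterior so that uncertainty reflects predictive reliability rather than distance alone.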
We develop a score-based variational inference method that learns a product-of-t-experts model via a Feynman-identity latent-variable formulation, reducing inference to a sequence of convex quadratic programs with provable convergence.
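As a rough illustration of the "sequence of convex quadratic programs" claim, the sketch below solves one nonnegativity-constrained convex QP by projected gradient descent — the kind of inner subproblem such an iterative scheme could call repeatedly. This is a generic solver written for this note, not the paper's algorithm; the objective, constraint set, and step-size rule are all assumptions of the sketch.

```python
import numpy as np

def solve_qp_nonneg(Q, b, iters=500, lr=None):
    """Projected gradient descent for:  min_x 0.5 x'Qx - b'x  s.t. x >= 0.

    Q must be symmetric positive semidefinite so the problem is convex.
    Illustrative inner solver only, not the paper's method.
    """
    if lr is None:
        lr = 1.0 / np.linalg.norm(Q, 2)  # step size below 1/L guarantees descent
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = Q @ x - b                   # gradient of the quadratic objective
        x = np.maximum(0.0, x - lr * grad)  # gradient step, then project onto x >= 0
    return x

# Small positive-definite example; the true minimizer is x = [0.5, 0]
# (the unconstrained optimum has a negative second coordinate, so the
# nonnegativity constraint is active there).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
x = solve_qp_nonneg(Q, b)
```

A method that reduces inference to such QPs would re-solve a problem of this form at each outer iteration with updated `Q` and `b`; convexity of each subproblem is what makes provable convergence arguments tractable.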