6 papers across 3 sessions
We develop a highly scalable, reparametrization-invariant variational training scheme for deep learning.
This paper introduces torch-uncertainty, a unified PyTorch-based framework that benchmarks state-of-the-art uncertainty quantification methods across multiple deep learning tasks and modalities.
Variational Learning Finds Flatter Solutions at the Edge of Stability
A training-free Bayesianization approach for LLM adapters is proposed, achieving better uncertainty estimation.
A scalable, post-hoc Bayesian uncertainty quantification method using an ensemble of linearised networks.