3 papers across 3 sessions
Latent recalibration learns a radial transform in the latent space of a normalizing flow, yielding a calibrated model that retains an explicit PDF and lowers negative log-likelihood (NLL).
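To illustrate the mechanism (a minimal sketch, not the paper's learned transform): a radial map in latent space rescales each latent vector z along its own direction, z' = g(r) * z / r with r = ||z||, and the change-of-variables formula in spherical coordinates keeps the density explicit. The monotone map g below is a hypothetical placeholder for whatever parametric transform is actually learned.

```python
import numpy as np

def std_normal_logpdf(z):
    """Log density of a standard multivariate normal at z."""
    d = z.size
    return -0.5 * (d * np.log(2 * np.pi) + z @ z)

def radial_recalibrated_logpdf(z, g, g_prime):
    """Log density of the base distribution pushed through the
    radial map z -> g(r) * z / r, r = ||z||.

    Change of variables in spherical coordinates gives
      log q(z) = log p(z') + log g'(r) + (d - 1) * log(g(r) / r).
    """
    d = z.size
    r = np.linalg.norm(z)
    z_new = g(r) * z / r
    return (std_normal_logpdf(z_new)
            + np.log(g_prime(r))
            + (d - 1) * np.log(g(r) / r))

# Hypothetical monotone radial map g(r) = r**alpha (monotone for alpha > 0);
# the paper would instead learn this transform from calibration data.
alpha = 1.5
g = lambda r: r ** alpha
g_prime = lambda r: alpha * r ** (alpha - 1)
```

Because the latent transform is one-dimensional in r, the Jacobian correction is cheap, and composing it with the flow's own log-determinant keeps the overall model density exact.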
This paper introduces torch-uncertainty, a unified PyTorch-based framework that benchmarks state-of-the-art uncertainty quantification methods across multiple deep learning tasks and modalities.
We propose a computationally efficient, localized PAC-Bayes prior integrated directly into training, yielding tight generalization certificates, per-prediction guarantees, and improved robustness and calibration for deep networks.
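For context on what such a certificate computes (a sketch of the standard PAC-Bayes-McAllester bound that these certificates instantiate, not the paper's localized-prior construction): given a posterior Q over network weights, a prior P, empirical risk r(Q) on n samples, and confidence 1 - delta, the bound upper-bounds the true risk.

```python
import math

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """PAC-Bayes-McAllester generalization certificate:
      R(Q) <= r(Q) + sqrt((KL(Q || P) + ln(2 * sqrt(n) / delta)) / (2n))
    holding with probability at least 1 - delta over the n training samples.
    """
    slack = (kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    return emp_risk + math.sqrt(slack)
```

The bound tightens as the KL divergence between posterior and prior shrinks, which is why localizing the prior near the learned posterior (as the paper proposes) yields tighter certificates.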