PhD student, Johannes Kepler University Linz
1 paper at NeurIPS 2025
We demonstrate that training diffusion bridges with the reverse Kullback-Leibler (rKL) loss, with gradients estimated via the log-derivative trick, outperforms the commonly used log-variance loss, both in final results and in training stability.
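For intuition, here is a minimal sketch of the two objectives in PyTorch. The names (`rkl_logderiv_loss`, `log_variance_loss`) and the toy Gaussian setup are my own placeholders, not the paper's code; in the actual method these losses are applied to diffusion-bridge trajectories, where `log_q` and `log_p` are path-level log-densities rather than single-sample log-probabilities.

```python
import torch
from torch.distributions import Normal

def rkl_logderiv_loss(log_q: torch.Tensor, log_p: torch.Tensor) -> torch.Tensor:
    # Surrogate objective whose gradient is the log-derivative (REINFORCE)
    # estimator of grad_theta KL(q_theta || p). `log_q` must be differentiable
    # w.r.t. the sampler parameters; `log_p` is the (possibly unnormalized)
    # target log-density evaluated at the same samples.
    advantage = (log_q - log_p).detach()
    advantage = advantage - advantage.mean()  # batch-mean baseline (variance reduction)
    return (advantage * log_q).mean()

def log_variance_loss(log_q: torch.Tensor, log_p: torch.Tensor) -> torch.Tensor:
    # Log-variance loss: empirical variance of the log importance weights
    # over the batch.
    w = log_q - log_p
    return ((w - w.mean()) ** 2).mean()

# Toy usage: fit a Gaussian sampler q_theta to a Gaussian target with rKL.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
target = Normal(torch.tensor([2.0]), torch.tensor([0.5]))
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for _ in range(2000):
    q = Normal(mu, log_sigma.exp())
    x = q.sample((256,))  # plain samples; no reparameterization required
    loss = rkl_logderiv_loss(q.log_prob(x), target.log_prob(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu.item(), log_sigma.exp().item())  # should approach (2.0, 0.5)
```

Note that the log-derivative estimator only requires `log_prob` evaluations at detached samples, not differentiating through the sampling procedure itself.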