2 papers across 2 sessions
We derive Riemannian metrics from pretrained energy-based models (EBMs) to compute data-aware geodesics. Our approach outperforms standard geodesic baselines across datasets, offering a scalable way to learn data geometry in high-dimensional spaces.
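The idea of a data-aware geodesic can be illustrated with a minimal sketch. The summary does not specify how the metric is built from the EBM, so the construction below is an assumption for illustration: a conformal metric G(x) = exp(E(x)/T) · I, where E is a toy energy that is low near the data manifold (here, the unit circle). Under such a metric, curves that stay near the data are shorter than straight chords that cut through low-density regions.

```python
import numpy as np

# Toy "pretrained EBM": energy is low near the data manifold (the unit circle).
# This stands in for a real pretrained model; the form is assumed for illustration.
def energy(x):
    return (np.linalg.norm(x) - 1.0) ** 2

# Hypothetical conformal metric derived from the energy: G(x) = exp(E(x)/T) * I.
# Paths through high-energy (off-manifold) regions are penalized.
def metric_scale(x, temperature=0.1):
    return np.exp(energy(x) / temperature)

def curve_length(points):
    """Discretized Riemannian length of a polyline under the conformal metric."""
    total = 0.0
    for a, b in zip(points[:-1], points[1:]):
        mid = 0.5 * (a + b)  # evaluate the metric at the segment midpoint
        total += np.sqrt(metric_scale(mid)) * np.linalg.norm(b - a)
    return total

# Straight chord vs. an arc hugging the data manifold between (1,0) and (-1,0).
chord = np.linspace([1.0, 0.0], [-1.0, 0.0], 100)
theta = np.linspace(0.0, np.pi, 100)
arc = np.stack([np.cos(theta), np.sin(theta)], axis=1)

print(curve_length(chord), curve_length(arc))  # the on-manifold arc is far shorter
```

A geodesic solver would minimize this length over curves; here the fixed arc vs. chord comparison already shows why the energy-derived metric makes shortest paths follow the data.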
We analyze training under imbalanced loss, showing that gradient-descent dynamics can gradually reduce the majority-class bias and recover minority-specific features as training is extended.
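The claimed dynamics can be reproduced in a minimal sketch. The dataset and model below are assumptions, not the paper's setup: logistic regression trained by full-batch gradient descent on a 90/10 imbalanced toy problem where the minority class has its own feature direction. Early in training the bias term is dominated by the majority class and minority recall is zero; with more steps the gradient on the minority-specific weight accumulates and recall recovers.

```python
import numpy as np

# Hypothetical imbalanced dataset: 90 majority points carrying a majority-specific
# feature (x = e1, label 0) and 10 minority points carrying a minority-specific
# feature (x = e2, label 1).
X = np.array([[1.0, 0.0]] * 90 + [[0.0, 1.0]] * 10)
y = np.array([0.0] * 90 + [1.0] * 10)

w = np.zeros(2)
b = 0.0
lr = 0.5

def minority_recall():
    preds = (X @ w + b > 0).astype(float)
    return preds[y == 1].mean()

recalls = {}
for step in range(1, 2001):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                            # logistic-loss residuals
    w -= lr * (X.T @ grad) / len(y)         # full-batch gradient descent
    b -= lr * grad.mean()
    if step in (1, 2000):
        recalls[step] = minority_recall()

# Early on, the majority class drives the bias negative and every minority point
# is misclassified; with longer training the minority-specific weight w[1] grows
# and minority recall recovers.
print(recalls)
```

The qualitative pattern, zero minority recall early and full recovery later, is what the analysis predicts; the specific dataset, learning rate, and step counts here are illustrative choices.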