Full Professor, New York University
1 paper at NeurIPS 2025
We recast contrastive self‑supervised learning as neural‑manifold packing: a physics‑inspired loss separates sub‑manifold embeddings during pretraining, yielding high accuracy under linear evaluation.
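The packing idea can be illustrated with a toy objective. The sketch below is NOT the paper's actual loss; the function name `packing_loss`, the `radius` parameter, and the soft‑sphere form of the repulsion are all assumptions chosen to mimic a physics‑style packing penalty: embeddings from different sub‑manifolds pay a quadratic cost when they overlap within a fixed radius, while embeddings from the same sub‑manifold are pulled together.

```python
import numpy as np

def packing_loss(z, labels, radius=1.0):
    """Hypothetical soft-sphere packing penalty (illustrative only).

    z      : (n, d) array of embeddings
    labels : (n,) array of sub-manifold ids
    radius : overlap distance below which repulsion kicks in
    """
    # Project embeddings onto the unit sphere, as is common in
    # contrastive setups (an assumption here, not from the source).
    z = z / np.linalg.norm(z, axis=1, keepdims=True)

    # Pairwise Euclidean distances between all embeddings.
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

    # Masks for same-manifold and different-manifold pairs,
    # excluding self-pairs on the diagonal.
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    diff = labels[:, None] != labels[None, :]
    np.fill_diagonal(diff, False)

    # Repulsion: quadratic cost for different-manifold pairs
    # closer than `radius` (soft-sphere overlap energy).
    overlap = np.clip(radius - d, 0.0, None)
    repel = 0.5 * np.sum(overlap[diff] ** 2)

    # Attraction: pull same-manifold embeddings together.
    attract = 0.5 * np.sum(d[same] ** 2)

    n = len(z)
    return (repel + attract) / (n * (n - 1))
```

Minimizing such a loss pushes distinct sub‑manifolds apart while compacting each one, which is the packing intuition the abstract invokes; the actual objective in the paper may differ substantially.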