We recast contrastive self‑supervised learning as neural‑manifold packing, employing a physics‑inspired loss to separate sub‑manifold embeddings during pretraining and achieve high accuracy under linear evaluation.
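To make the packing intuition concrete, here is a minimal NumPy sketch of a physics-inspired objective in this spirit: points on the same sub-manifold are pulled toward their centroid, while centroids of different sub-manifolds repel via a soft-sphere overlap penalty when closer than a radius. The function name, the `radius` parameter, and the specific potential are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def packing_loss(embeddings, labels, radius=1.0):
    """Hypothetical soft-sphere packing loss (sketch, not the paper's objective).

    Attraction: mean squared distance of each point to its sub-manifold centroid.
    Repulsion: quadratic overlap penalty between centroids closer than `radius`,
    analogous to overlapping soft spheres in a physical packing.
    """
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

    # Pull each embedding toward the centroid of its own sub-manifold.
    attract = 0.0
    for i, c in enumerate(classes):
        pts = embeddings[labels == c]
        attract += np.mean(np.sum((pts - centroids[i]) ** 2, axis=1))
    attract /= len(classes)

    # Penalize centroid pairs whose separation is below `radius` (overlapping spheres).
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.sqrt(np.sum(diffs ** 2, axis=-1))
    iu = np.triu_indices(len(classes), k=1)
    overlap = np.clip(radius - dists[iu], 0.0, None)
    repel = np.mean(overlap ** 2) if overlap.size else 0.0

    return attract + repel
```

Well-separated sub-manifolds incur no repulsion, so the loss rewards exactly the kind of embedding separation described above.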