Assistant Professor, New York University
2 papers at NeurIPS 2025
We recast contrastive self‑supervised learning as neural‑manifold packing, employing a physics‑inspired loss to separate sub‑manifold embeddings during pretraining and achieve high accuracy under linear evaluation.
We identify critical flaws in existing datasets and benchmarking protocols for crystal structure prediction with generative models of inorganic crystals; we revise the datasets and introduce new benchmarks that account for crystal polymorphism.