Assistant Professor, National University of Singapore
3 papers at NeurIPS 2025
An importance-sampling-based method to mitigate over-optimization in Direct Alignment Algorithms for language models.
We reduce training variance in equivariant generative models using a low-variance gradient estimator, improving stability and performance across molecular, crystal, and protein generation tasks.