PhD student, Harvard University
1 paper at NeurIPS 2025
Using f-DP, derives unified, tunable, easy-to-interpret bounds on major operational attack risks in privacy-preserving ML and statistical releases; these bounds are more accurate than those of prior methods