Full Professor, Hasso Plattner Institute
3 papers at NeurIPS 2025
We perform large-scale, strong membership inference attacks (MIAs) on pre-trained LLMs to clarify the actual privacy risk that MIAs pose in this setting.
Using f-DP, we derive easy-to-interpret, unified, and tunable bounds on major operational attack risks in privacy-preserving ML and statistical releases that are more accurate than those of prior methods.