PhD student, University of Pennsylvania
Two papers accepted at NeurIPS 2025
We propose SECA, a constraint-preserving zeroth-order method that elicits LLM hallucinations via semantically equivalent and coherent rephrasings.