PhD student, University of Pennsylvania
1 paper at NeurIPS 2025
We propose SECA, a constraint-preserving zeroth-order method that elicits LLM hallucinations via semantically equivalent and coherent rephrasings of the original prompt.
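To make the one-line summary concrete, here is a minimal sketch of the general idea, not the paper's algorithm: a gradient-free (zeroth-order) search over prompt rephrasings that only keeps candidates satisfying equivalence and coherence constraints. All helper callables (`propose_paraphrases`, `is_equivalent`, `is_coherent`, `hallucination_score`) are hypothetical stand-ins; SECA's actual proposal mechanism, constraints, and objective differ.

```python
# Hypothetical sketch of a constraint-preserving zeroth-order search over
# prompt rephrasings. The helper callables are stand-ins, not SECA's actual
# components: in practice they would wrap an LLM paraphraser, a semantic-
# equivalence checker, a coherence filter, and a hallucination scorer.
import random
from typing import Callable, List, Tuple


def zeroth_order_rephrasing_search(
    prompt: str,
    propose_paraphrases: Callable[[str], List[str]],  # candidate rephrasings (assumed helper)
    is_equivalent: Callable[[str, str], bool],        # semantic-equivalence constraint (assumed)
    is_coherent: Callable[[str], bool],               # coherence constraint (assumed)
    hallucination_score: Callable[[str], float],      # black-box objective; higher = more hallucination (assumed)
    iterations: int = 10,
    pool_size: int = 4,
) -> Tuple[str, float]:
    """Greedy, gradient-free search: perturb a pool of rephrasings, discard
    candidates that violate the equivalence or coherence constraints, and keep
    the highest-scoring survivors. Only forward queries to the scorer are
    used, hence 'zeroth-order'."""
    pool = [prompt]
    best_score, best = hallucination_score(prompt), prompt
    for _ in range(iterations):
        candidates = []
        for p in pool:
            for q in propose_paraphrases(p):
                # Constraint preservation: keep only rephrasings that remain
                # semantically equivalent to the original prompt and coherent.
                if is_equivalent(prompt, q) and is_coherent(q):
                    candidates.append(q)
        if not candidates:
            break
        scored = sorted(((hallucination_score(q), q) for q in candidates), reverse=True)
        pool = [q for _, q in scored[:pool_size]]
        if scored[0][0] > best_score:
            best_score, best = scored[0]
    return best, best_score


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real setup would query models.
    random.seed(0)
    result = zeroth_order_rephrasing_search(
        prompt="Who wrote the novel Middlemarch?",
        propose_paraphrases=lambda p: [p + " Please answer briefly.", p.replace("novel", "book")],
        is_equivalent=lambda a, b: True,
        is_coherent=lambda q: True,
        hallucination_score=lambda q: random.random(),
    )
    print(result)
```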