We propose SECA, a constraint-preserving zeroth-order method that elicits LLM hallucinations by searching over semantically equivalent and coherent rephrasings of a prompt.