4 papers across 3 sessions
We introduce a new task and a probabilistic method for quantifying, with large language models, the privacy risk of documents containing personal attributes.
We introduce SteerConf, a method that steers and calibrates the verbalized confidence of LLMs, enhancing their reliability and trustworthiness in practical applications.
Prompting is legitimate behavioral science that has unlocked most major LLM capabilities, not the unscientific "alchemy" it is often dismissed as.