2 papers across 2 sessions
We train probes on activations to classify high-stakes vs. low-stakes scenarios, find that they outperform medium-sized fine-tuned LLMs, and discuss applications to monitoring.
We create a sandbox for LLM agents that elicits goal-directed, open-ended strategic deception, evaluate this deceptive capability, and show that linear probes detect it reliably, even out of distribution (OOD).
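To illustrate the probing setup both summaries rely on, here is a minimal sketch of a linear probe on synthetic vectors standing in for model activations. The difference-of-means probe construction, dimensions, and data are assumptions for illustration, not the papers' actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for LLM activations: two classes of synthetic
# 64-dim vectors whose means differ along one hidden direction.
d = 64
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

def sample(label, n):
    base = rng.normal(size=(n, d))
    return base + (2.0 if label else -2.0) * direction

X_pos, X_neg = sample(1, 200), sample(0, 200)

# Difference-of-means linear probe: project onto the vector between
# class means and threshold at the midpoint.
w = X_pos.mean(axis=0) - X_neg.mean(axis=0)
b = (X_pos.mean(axis=0) + X_neg.mean(axis=0)) @ w / 2

def probe(X):
    return (X @ w > b).astype(int)

# Evaluate on fresh samples (a crude stand-in for held-out data).
acc = np.mean(np.r_[probe(sample(1, 100)) == 1,
                    probe(sample(0, 100)) == 0])
print(f"probe accuracy: {acc:.2f}")
```

In practice the inputs would be residual-stream or hidden-layer activations collected while the model processes each scenario, and the OOD claim corresponds to evaluating a probe trained on one distribution of scenarios against another.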