We show that penalizing certain kinds of chain-of-thought (CoT) reasoning leads LLMs to learn encoding schemes that generalize to unseen examples.
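The penalty can be pictured as reward shaping: whenever a flagged term appears in the model's chain of thought, a fixed amount is subtracted from the reward. This is a minimal sketch under that assumption; the function name, the banned-term list, and the penalty value are all hypothetical, not the paper's actual setup.

```python
def penalized_reward(chain_of_thought: str, base_reward: float,
                     banned_terms: list[str], penalty: float = 1.0) -> float:
    """Subtract a fixed penalty per banned term found in the CoT (hypothetical scheme)."""
    hits = sum(term in chain_of_thought.lower() for term in banned_terms)
    return base_reward - penalty * hits

# Under such a penalty, a policy trained to keep its reward high is pushed
# toward phrasings that avoid the flagged terms, i.e. toward encoded reasoning.
```

A CoT containing a flagged term scores lower than one that expresses the same content in other words, which is the pressure toward encoding described above.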