Independent Researcher, LASR Labs
2 papers at NeurIPS 2025
We show that penalizing certain chain-of-thought (CoT) reasoning makes LLMs learn encoding schemes that generalize to unseen examples.