Poster Session 6 · Friday, December 5, 2025 4:30 PM → 7:30 PM
#113
MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?
Abstract
Large foundation models face challenges in acquiring transferable, structured thinking abilities, especially when supervised with rigid templates or crowd-annotated instruction datasets. Unlike prior approaches, we focus on a thinking-centric data synthesis paradigm that enables models to evolve through self-generated, cognitively guided data.
We propose MindGYM, a structured and scalable framework for question synthesis, composed of:
- Cognitive Thinking Process Injection, which infuses high-level reasoning objectives to shape the model’s synthesis behavior;
- Seed Single-Hop Question Synthesis, generating atomic questions from diverse semantic types to encourage broader thinking; and
- Challenging Multi-Hop QA Synthesis, composing more complex multi-hop questions from the seed QA pairs to elicit deeper reasoning.
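The three-stage pipeline above can be sketched as follows. This is a minimal illustrative sketch only: the function names, prompts, and `Model` interface are assumptions for exposition and do not reflect the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# A text-in/text-out foundation model (hypothetical interface).
Model = Callable[[str], str]

@dataclass
class SeedQA:
    question: str
    answer: str

def inject_thinking_process(prompt: str) -> str:
    # Stage 1 (assumed form): prepend a high-level reasoning objective
    # so that an explicit cognitive process shapes synthesis behavior.
    return ("Think step by step, decomposing the task into sub-goals "
            "before answering.\n" + prompt)

def synthesize_seed_questions(model: Model, topic: str, n: int) -> List[SeedQA]:
    # Stage 2 (assumed form): generate atomic single-hop QA pairs.
    seeds = []
    for i in range(n):
        q = model(inject_thinking_process(
            f"Write one atomic question #{i} about: {topic}"))
        a = model(inject_thinking_process(f"Answer concisely: {q}"))
        seeds.append(SeedQA(q, a))
    return seeds

def compose_multi_hop(model: Model, seeds: List[SeedQA]) -> str:
    # Stage 3 (assumed form): fuse seed QA pairs into one harder
    # multi-hop question that chains their facts together.
    joined = "\n".join(f"Q: {s.question}\nA: {s.answer}" for s in seeds)
    return model(inject_thinking_process(
        "Combine the facts below into a single multi-hop question:\n"
        + joined))

# Usage with a stub standing in for a real foundation model:
stub: Model = lambda prompt: f"[model output for: {prompt[:40]}...]"
seeds = synthesize_seed_questions(stub, "graph theory", n=2)
multi_hop = compose_multi_hop(stub, seeds)
```

The key design point the sketch illustrates is that the same thinking-process injection wraps every generation call, so both seed and multi-hop synthesis are steered by the same cognitive prior.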
MindGYM improves performance on six reasoning benchmarks, achieving gains of up to 16% on MathVision with only 400 synthesized samples, and improvements that generalize across model sizes and architectures. MindGYM underscores the viability of self-challenging mechanisms for refining large model capabilities while minimizing human intervention and resource demands. Code and data are released to promote data-centric research into self-evolving foundation models driven by their internal reasoning capabilities.