MS student, Shenzhen University
1 paper at NeurIPS 2025
This paper proposes Multi-Task Learning with Knowledge Distillation (MTL-KD), a method for training a single heavy-decoder model without labeled data to solve various Vehicle Routing Problem (VRP) variants.
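As a generic illustration of the knowledge-distillation component (not the paper's actual training pipeline), the sketch below computes a standard temperature-softened KL distillation loss between a teacher's and a student's logits over candidate next nodes; all names (`kd_loss`, `temperature`, the toy logits) are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # the usual distillation objective; no ground-truth labels needed.
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    return float(np.sum(t * (np.log(t) - np.log(s))))

# Toy example: scores over 4 candidate nodes in a routing decision.
teacher = np.array([2.0, 0.5, -1.0, 0.1])
student = np.array([1.5, 0.7, -0.5, 0.0])
print(kd_loss(teacher, teacher))  # a model distilled into itself gives 0
print(kd_loss(student, teacher))  # positive when distributions differ
```

This label-free objective is what lets a student learn from teacher policies alone, which is the general idea behind training without labeled data.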