Researcher, Xiaohongshu
2 papers at NeurIPS 2025
We propose a reward-centric approach to fast image generation that converts pretrained diffusion models into reward-enhanced few-step generators, without relying on complicated diffusion distillation losses or training images.
We unify more than 10 existing one-step diffusion distillation approaches and achieve new state-of-the-art 1-step generation results on the CIFAR-10 and ImageNet 64x64 benchmarks.