PhD student, University of Virginia
4 papers at NeurIPS 2025
We propose HASTE, which combines holistic alignment (feature and attention) with early termination to accelerate diffusion transformer training by 28× while maintaining quality.
Generate high-performing LoRA parameters from prompts that are unseen during training.