Postdoc, Stanford University
2 papers at NeurIPS 2025
This paper introduces a training-free method that makes diffusion models safer by directly modifying their sampling process, steering generation away from undesirable content such as NSFW images or copyrighted material.
We propose grafting, a simple approach to materialize new architectures by editing pretrained diffusion transformers. It enables architectural exploration under small compute budgets.