Research Assistant Professor, The Chinese University of Hong Kong
1 paper at NeurIPS 2025
We propose FairImagen, which debiases text-to-image models by post-processing prompt embeddings, improving fairness across gender and race without retraining the model.
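The idea of debiasing via post-processed prompt embeddings can be illustrated with a common primitive: projecting an embedding onto the orthogonal complement of an estimated bias direction. This is a minimal, hedged sketch of that general technique, not necessarily FairImagen's exact procedure; the function name, the random embedding, and the "gender direction" are all hypothetical.

```python
import numpy as np

def debias_embedding(embedding: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of `embedding` along a unit-normalized bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return embedding - np.dot(embedding, b) * b

# Hypothetical setup: a prompt embedding and a bias direction that in practice
# might be estimated from contrasting prompt pairs (e.g. "a male doctor" vs
# "a female doctor"); here both are random vectors for illustration.
rng = np.random.default_rng(0)
emb = rng.normal(size=8)
gender_dir = rng.normal(size=8)

debiased = debias_embedding(emb, gender_dir)

# After projection, the debiased embedding is orthogonal to the bias direction.
residual = abs(np.dot(debiased, gender_dir / np.linalg.norm(gender_dir)))
print(residual < 1e-9)
```

Because the projection acts only on the prompt embedding, the generative model itself is untouched, which is what makes this family of approaches retraining-free.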