We propose FairImagen, a framework that debiases text-to-image models by post-processing prompt embeddings, improving fairness across gender and race without retraining the underlying model.
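To make the idea concrete, the following is a minimal illustrative sketch of one common embedding post-processing approach: projecting a demographic direction out of the prompt embedding before it is passed to the image model. This is a simplified stand-in, not FairImagen's exact algorithm; the function name, the toy 4-dimensional embedding, and the single precomputed "gender" direction are all hypothetical.

```python
import numpy as np

def debias_embedding(prompt_emb: np.ndarray, bias_dirs: np.ndarray) -> np.ndarray:
    """Remove demographic subspace components from a prompt embedding.

    prompt_emb: shape (d,), the prompt embedding vector.
    bias_dirs:  shape (k, d), an orthonormal basis of a demographic
                subspace (e.g. a learned gender direction).
    Returns the embedding projected onto the orthogonal complement,
    so the generator receives no signal along the biased directions.
    """
    out = prompt_emb.copy()
    for v in bias_dirs:
        # Subtract the component of the embedding along each bias direction.
        out = out - np.dot(out, v) * v
    return out

# Toy example: a single unit-norm "gender" direction in a 4-D space.
gender_dir = np.array([[1.0, 0.0, 0.0, 0.0]])
emb = np.array([0.8, 0.2, -0.5, 1.0])
clean = debias_embedding(emb, gender_dir)
# The component along the bias direction is removed; the rest is untouched.
```

Because the intervention happens purely on the embedding, the generative model itself needs no retraining, which is the key practical advantage claimed above.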