Poster Session 1 · Wednesday, December 3, 2025 11:00 AM → 2:00 PM
#3102

Can Diffusion Models Disentangle? A Theoretical Perspective

NeurIPS OpenReview

Abstract

This paper presents a novel theoretical framework for understanding how diffusion models can learn disentangled representations under commonly used forms of weak supervision, such as partial labels and multiple views.
Within this framework, we establish identifiability conditions for diffusion models to disentangle latent variable models with stochastic, non-invertible mixing processes. We also prove finite-sample global convergence for diffusion models to disentangle independent subspace models.
To validate our theory, we conduct extensive disentanglement experiments: subspace recovery in latent-subspace Gaussian mixture models, image colorization, image denoising, and voice conversion for speech classification.
Our experiments show that training strategies inspired by our theory, such as style guidance regularization, consistently enhance disentanglement performance.
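The abstract does not spell out the style guidance regularization, but guidance in diffusion models is commonly implemented in the classifier-free style: the denoiser is queried with and without the conditioning signal, and the two predictions are blended. The sketch below is a minimal, hypothetical illustration of that pattern, not the paper's actual regularizer; the function name `guided_noise` and the weight `w` are assumptions for illustration.

```python
import numpy as np

def guided_noise(eps_uncond: np.ndarray, eps_style: np.ndarray, w: float) -> np.ndarray:
    """Blend unconditional and style-conditional noise predictions.

    Classifier-free-style guidance: push the denoising direction toward
    the style-conditional prediction by guidance weight w.
    w = 0 recovers the unconditional prediction; w = 1 the conditional one.
    """
    return eps_uncond + w * (eps_style - eps_uncond)

# Toy usage with random stand-ins for the two network outputs.
rng = np.random.default_rng(0)
eps_u = rng.normal(size=4)   # prediction without the style condition
eps_s = rng.normal(size=4)   # prediction with the style condition
blended = guided_noise(eps_u, eps_s, w=0.5)
```

A regularizer in this spirit would penalize the gap between the two predictions (or control `w` during training) so that style information is routed through the conditioning path rather than entangled in the content representation.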