3 papers across 2 sessions
We introduce PerturBench: a comprehensive model development and benchmarking framework for perturbation response modeling in single cells.
We show that diffusion language models are substantially more sample-efficient than standard autoregressive language models, owing to their ability to learn from many different token orderings.
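As a rough illustration of the token-orderings argument (not the paper's implementation), the sketch below contrasts a standard autoregressive loss, which always conditions on the fixed left-to-right prefix, with a masked-diffusion-style loss that randomly masks a subset of tokens each step, so a single sequence is seen under many different conditioning patterns. The toy model, masking scheme, and all names and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only: fixed left-to-right AR objective vs. a
# masked-diffusion-style objective that samples a fresh random mask
# each step, exposing the model to many effective token orderings.
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 100, 0, 16  # token id 0 reserved as the mask token


class TinyLM(nn.Module):
    """Toy transformer used for both objectives (illustration only)."""

    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        self.enc = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens, causal=False):
        h = self.emb(tokens)
        mask = None
        if causal:
            L = tokens.size(1)
            # Standard upper-triangular mask: position t attends only to <= t.
            mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        return self.head(self.enc(h, src_mask=mask))


def ar_loss(model, tokens):
    # Autoregressive: predict token t from tokens < t -- one fixed ordering.
    logits = model(tokens[:, :-1], causal=True)
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1)
    )


def diffusion_loss(model, tokens):
    # Masked-diffusion style: sample a masking rate, hide a random subset of
    # tokens, and predict the hidden tokens from the visible ones. Each draw
    # conditions on a different subset, i.e. a different effective ordering.
    rate = torch.rand(1).item()
    masked = torch.rand(tokens.shape) < rate
    corrupted = tokens.masked_fill(masked, MASK_ID)
    logits = model(corrupted)
    if not masked.any():
        return logits.sum() * 0.0
    return nn.functional.cross_entropy(logits[masked], tokens[masked])


if __name__ == "__main__":
    model = TinyLM()
    batch = torch.randint(1, VOCAB, (8, SEQ_LEN))
    print("AR loss:", ar_loss(model, batch).item())
    print("Diffusion-style loss:", diffusion_loss(model, batch).item())
```

In this toy setup the autoregressive objective extracts one fixed conditional factorization per sequence, while the masked objective resamples which tokens are visible on every pass, which is the intuition behind the sample-efficiency claim above.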