3 papers across 2 sessions
We study token-level watermarking for autoregressive image generation models.
We show that diffusion language models are substantially more sample-efficient than standard autoregressive language models, owing to their ability to learn from different token orderings.
We develop a novel speculative decoding framework for protein generation that uses structure-aware guidance from k-mers to generate proteins with higher likelihood and structural confidence.