4 papers across 3 sessions
We propose a training-free safe generation method that guides text embeddings for safe text-to-image diffusion models.
We introduce a simple yet effective pipeline called Token Bottleneck that conservatively summarizes the observed scene into a bottleneck token while enabling the capture of dynamic transitions through that token.
We propose a novel query-agnostic KV cache eviction method for multi-query scenarios.