Attention heads in text-generative models specialize in semantic and visual concepts. Leveraging this property, we can reliably suppress or enhance specific attributes in both language and vision-language tasks.
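The suppression/enhancement idea can be sketched as scaling each attention head's output by a per-head gain before concatenation: a gain of 0 ablates a head that encodes an unwanted attribute, while a gain above 1 amplifies it. This is a minimal toy illustration in pure Python, not the paper's actual method; the function names and the simple gain mechanism are assumptions for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def head_attention(q, keys, values):
    # Scaled dot-product attention for one query vector and one head.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    w = softmax(scores)
    # Weighted sum of value vectors.
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]

def multi_head_with_gains(q_heads, k_heads, v_heads, gains):
    # gains[h] = 0.0 suppresses head h entirely; gains[h] > 1.0 enhances it.
    # Head outputs are scaled, then concatenated (projection omitted for brevity).
    out = []
    for h, g in enumerate(gains):
        head_out = head_attention(q_heads[h], k_heads[h], v_heads[h])
        out.extend(g * x for x in head_out)
    return out

# Two toy heads over a length-2 sequence of 2-d keys/values.
q_heads = [[1.0, 0.0], [0.0, 1.0]]
k_heads = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]]
v_heads = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]]

# Suppress head 1: its slice of the concatenated output becomes zero.
suppressed = multi_head_with_gains(q_heads, k_heads, v_heads, [1.0, 0.0])
# Keep both heads active for comparison.
full = multi_head_with_gains(q_heads, k_heads, v_heads, [1.0, 1.0])
```

In a real model the per-head gain would be applied to the head outputs inside a chosen transformer layer (e.g. via a forward hook), after first identifying which heads specialize in the target attribute.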