Poster Session 4 · Thursday, December 4, 2025 4:30 PM → 7:30 PM
#4615
Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs
Abstract
Video Large Language Models (Video-LLMs) excel at in-context video understanding when given full access to the video at question time. However, these models struggle in streaming scenarios, where hour-long videos must be processed online and questions answered promptly.
In this work, we propose a training-free approach compatible with standard Video-LLMs, leveraging three key concepts:
- LLM-informed selection of visual tokens, identifying those the LLM attended to and that contributed to its understanding of each short clip. This attention-based selection lets us discard up to ~95% of unimportant visual tokens with minimal performance loss;
- Hierarchical token selection, combined with a natural-language description of each processed clip;
- Caption-based question answering for lightweight and accurate responses.
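The first step above, attention-based token selection, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes access to the LLM's attention weights from text tokens to visual tokens (the function name, tensor shapes, and the uniform averaging over heads and queries are illustrative choices), and keeps only the top ~5% most-attended visual tokens.

```python
import numpy as np

def select_visual_tokens(attn, keep_ratio=0.05):
    """Keep the visual tokens the LLM attended to most.

    attn: array of shape (num_heads, num_text_tokens, num_visual_tokens),
          attention weights from text queries to visual tokens (hypothetical layout).
    keep_ratio: fraction of visual tokens to keep (~5%, i.e. drop ~95%).
    Returns the indices of retained visual tokens, in temporal order.
    """
    # Aggregate the attention mass each visual token received,
    # averaging over heads and text-query positions.
    scores = attn.mean(axis=(0, 1))            # shape: (num_visual_tokens,)
    k = max(1, int(round(keep_ratio * scores.shape[0])))
    top = np.argpartition(scores, -k)[-k:]     # indices of the top-k tokens
    return np.sort(top)                        # preserve temporal order

# Toy example: 4 heads, 8 text tokens, 200 visual tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 8, 200))
kept = select_visual_tokens(attn, keep_ratio=0.05)
print(len(kept))  # 10 tokens kept out of 200
```

In practice the retained tokens (and a caption of the clip) would be carried forward in the streaming memory, so the cost of answering a question scales with the compressed history rather than the full video.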
Our method achieves state-of-the-art performance on streaming video benchmarks, striking a balance between efficiency and effectiveness.