2 papers across 2 sessions
We propose LongBioBench, a benchmark for controllable evaluation of long-context language models.
ViSpec accelerates vision-language model inference through vision-aware speculative decoding, combining compressed image tokens with global feature injection to achieve up to a 3.22× speedup.