We show that in contextual cascading bandits, regret vanishes as the cascade length grows, and we establish nearly matching upper and lower bounds.