Associate Professor, Tel Aviv University
4 papers at NeurIPS 2025
We present a sharp last-iterate analysis of SGD on smooth convex losses in the interpolation regime, extending prior results beyond linear regression and improving known rates for large, constant stepsizes.
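A minimal sketch of the interpolation setting (an illustration, not the paper's analysis): constant-stepsize SGD on a noiseless, overparameterized least-squares problem, where a single w_star fits every sample and the optimal loss is zero. All names and constants (`n`, `d`, the stepsize choice) are illustrative assumptions.

```python
import numpy as np

# Toy interpolation setup: overparameterized least squares (d > n),
# so some w_star fits every sample exactly and the optimal loss is 0.
rng = np.random.default_rng(0)
n, d = 50, 200
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star  # realizable / interpolating labels, no noise

# Each per-sample loss f_i(w) = 0.5 * (x_i @ w - y_i)^2 is smooth with
# constant ||x_i||^2; a "large" constant stepsize is of order 1 / max_i ||x_i||^2.
L = max(np.sum(X**2, axis=1))
lr = 1.0 / L

w = np.zeros(d)
for _ in range(20_000):
    i = rng.integers(n)                  # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]      # stochastic gradient of f_i
    w -= lr * grad                       # constant-stepsize SGD step

# Last iterate: under interpolation, the loss converges without stepsize decay.
print("last-iterate loss:", 0.5 * np.mean((X @ w - y) ** 2))
```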
We design a PAC learner for contextual combinatorial semi-bandits with sparse rewards, whose sample complexity scales primarily with the sparsity parameter rather than with the number of arms.
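A toy sketch of the feedback model only, not of our PAC learner: in combinatorial semi-bandits the learner selects a subset of arms each round and observes a noisy reward for every selected arm, and sparsity means only `s` arms carry nonzero mean reward. The contextual component is omitted here, and the uniform-exploration baseline, names, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_arms, s, budget = 100, 3, 5   # many arms, sparse rewards, `budget` arms per round

# Sparse mean-reward vector: only s of the n_arms have nonzero mean.
theta = np.zeros(n_arms)
support = rng.choice(n_arms, size=s, replace=False)
theta[support] = rng.uniform(0.5, 1.0, size=s)

def pull(subset):
    """Semi-bandit feedback: a noisy reward for EACH selected arm,
    not just their sum (that would be full-bandit feedback)."""
    return theta[subset] + 0.1 * rng.normal(size=len(subset))

# Naive uniform-exploration baseline (not the paper's learner):
# estimate each arm's mean, then exploit the top-`budget` arms.
counts = np.zeros(n_arms)
sums = np.zeros(n_arms)
for _ in range(2_000):
    subset = rng.choice(n_arms, size=budget, replace=False)
    sums[subset] += pull(subset)
    counts[subset] += 1

estimates = sums / np.maximum(counts, 1)
best_guess = np.argsort(estimates)[-budget:]
print("recovered support overlap:", len(set(best_guess) & set(support)), "/", s)
```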
We prove that regularization with fixed strength yields a near-optimal worst-case expected loss rate, and that regularization with strength increasing over time attains the optimal rate, in realizable continual regression under random task orderings.
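A sketch of one natural instantiation (an assumption; the paper's exact scheme may differ): realizable linear-regression tasks presented in random order, each step solving a least-squares problem regularized toward the previous iterate with strength λ_t that is either fixed or increasing in t. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_tasks, m = 30, 200, 5      # dimension, number of tasks, samples per task
w_star = rng.normal(size=d)

# Realizable continual regression: every task is consistent with the same w_star.
tasks = []
for _ in range(n_tasks):
    X = rng.normal(size=(m, d))
    tasks.append((X, X @ w_star))

def run(lmbda_of_t):
    w = np.zeros(d)
    order = rng.permutation(n_tasks)    # random task ordering
    for t, k in enumerate(order, start=1):
        X, y = tasks[k]
        lam = lmbda_of_t(t)
        # Regularize toward the previous iterate (one common continual scheme):
        #   w_t = argmin_w ||X w - y||^2 + lam * ||w - w_{t-1}||^2
        A = X.T @ X + lam * np.eye(d)
        w = np.linalg.solve(A, X.T @ y + lam * w)
    # Proxy for the expected loss: average loss over all tasks at the end.
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in tasks])

print("fixed strength:     ", run(lambda t: 10.0))
print("increasing strength:", run(lambda t: float(t)))
```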