Associate Professor, The Institute of Statistical Mathematics
3 papers at NeurIPS 2025
We construct a family of smooth convex losses with linear surrogate regret bounds via infimal convolution and Fenchel–Young losses, overcoming the smoothness–regret trade-off for arbitrary discrete target losses.
We present an efficient $O(n \ln T)$-regret method for online inverse linear optimization, extend it to settings with suboptimal feedback, and establish an $\Omega(n)$ lower bound on the regret.