Research Scientist, Google
4 papers at NeurIPS 2025
A new framework for analyzing and proving length generalization bounds.
We introduce contextualized n-gram embeddings to extend input embedding layers, improving performance while maintaining fixed accelerator usage during inference.
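One common way to realize this kind of extension is to hash n-grams of token ids into a fixed-size embedding table that is summed with the ordinary token embedding; a minimal sketch of that idea follows (a generic illustration, not the paper's architecture — all sizes, names, and the hashing scheme are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16          # embedding dimension (illustrative)
VOCAB = 100     # toy vocabulary size
BUCKETS = 256   # fixed-size hash table shared by all n-grams

tok_emb = rng.normal(size=(VOCAB, D))      # ordinary token embeddings
ngram_emb = rng.normal(size=(BUCKETS, D))  # hashed n-gram embeddings

def embed(tokens, n=2):
    """Token embedding plus a hashed n-gram context embedding per position."""
    out = tok_emb[np.array(tokens)].copy()
    for i in range(len(tokens)):
        # n-gram ending at position i (shorter at the start of the sequence)
        gram = tuple(tokens[max(0, i - n + 1): i + 1])
        bucket = hash(gram) % BUCKETS  # fixed table size: lookup cost is
        out[i] += ngram_emb[bucket]    # constant regardless of context
    return out

x = embed([3, 7, 7, 42])
print(x.shape)  # (4, 16)
```

Because the n-gram table has a fixed number of buckets and the extension is a table lookup plus an add, the per-token inference cost stays constant, which matches the fixed-accelerator-usage property described above.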
The paper generalizes the utility-first approach in differential privacy to any sequence of private estimators, incurring at most a doubling of the privacy budget; this covers hyperparameter tuning, so tuning adds no further privacy cost.
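The doubling intuition can be illustrated with a standard private-selection step: if each candidate estimator is already eps-DP, spending one more eps on a report-noisy-max choice among them gives an end-to-end cost of at most 2*eps by basic composition. This is a generic textbook construction, not the paper's mechanism, and the utilities below are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_mech(value, sensitivity, eps):
    """Standard Laplace mechanism: value + Lap(sensitivity / eps)."""
    return value + rng.laplace(scale=sensitivity / eps)

eps = 1.0
# Hypothetical utility scores of three candidate private estimators,
# each assumed to already be (eps, 0)-DP on its own.
candidate_utilities = [0.71, 0.86, 0.79]

# Report-noisy-max selection costs one additional eps, so releasing the
# chosen estimator is at most 2*eps-DP by basic composition.
noisy = [laplace_mech(u, sensitivity=1.0, eps=eps) for u in candidate_utilities]
best = int(np.argmax(noisy))
print(best)
```

The point of the sketch is the accounting, not the mechanism: one private pass to produce the candidates, one private pass to pick among them, hence at most a factor-of-two budget increase.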