4 papers across 3 sessions
We develop a fine-grained f-DP analysis for decentralized federated learning, improving privacy–utility trade-offs under random walk communication and extending to settings with dependent noise.
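To fix intuition for the f-DP framework this analysis builds on, here is a minimal sketch of the standard Gaussian trade-off function $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, which characterizes the privacy of the Gaussian mechanism as a hypothesis-testing curve; the function name and toy values are illustrative, not from the paper.

```python
from statistics import NormalDist

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)
    of the Gaussian mechanism in f-DP: the smallest achievable type-II
    error for an adversary testing at type-I error level alpha.
    """
    nd = NormalDist()  # standard normal: cdf = Phi, inv_cdf = Phi^{-1}
    return nd.cdf(nd.inv_cdf(1 - alpha) - mu)

# mu = 0 gives perfect privacy: G_0(alpha) = 1 - alpha,
# i.e. the adversary can do no better than random guessing.
print(gaussian_tradeoff(0.05, 0.0))
# Larger mu weakens privacy: the curve drops below 1 - alpha.
print(gaussian_tradeoff(0.05, 1.0))
```

A convenient property of this parameterization is tight composition: $k$ adaptive uses of a $\mu_0$-GDP mechanism compose to $\sqrt{k}\,\mu_0$-GDP, which is the kind of bookkeeping a fine-grained analysis over a communication graph refines.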
Uncertainty Quantification for LLMs
We introduce random search neural networks (RSNNs), a more efficient and expressive alternative to random walk neural networks (RWNNs) that achieves strong performance on sparse graphs with significantly fewer samples.
We develop local algorithms for estimating hitting times and effective resistances.
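A natural baseline for local estimation of hitting times is Monte Carlo simulation: run short random walks from the source and average the number of steps until the target is first reached. The sketch below (function name, graph representation, and truncation rule are illustrative assumptions, not the paper's algorithm) shows this on an unweighted graph stored as an adjacency dict.

```python
import random

def estimate_hitting_time(graph, source, target,
                          num_walks=20_000, max_steps=10_000, seed=0):
    """Monte Carlo estimate of E[steps for a random walk from `source`
    to first reach `target`].

    graph: dict mapping node -> list of neighbors (unweighted).
    Walks exceeding max_steps are truncated, so on poorly connected
    graphs the estimate is biased downward.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(num_walks):
        node, steps = source, 0
        while node != target and steps < max_steps:
            node = rng.choice(graph[node])  # uniform step to a neighbor
            steps += 1
        total += steps
    return total / num_walks

# Path graph 0-1-2: the exact hitting time from 0 to 2 is 4.
path = {0: [1], 1: [0, 2], 2: [1]}
print(estimate_hitting_time(path, 0, 2))
```

Effective resistances connect to the same quantities via the commute-time identity $C(u,v) = H(u,v) + H(v,u) = 2m \cdot R_{\mathrm{eff}}(u,v)$, where $m$ is the number of edges, so a hitting-time estimator also yields resistance estimates.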