Assistant Professor, Rutgers University
2 papers at NeurIPS 2025
ElliCE generates counterfactual explanations with theoretical guarantees that remain valid across the Rashomon set by approximating it with an ellipsoid. The method is faster than existing robust baselines while still producing meaningful explanations.
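As a rough illustration of the ellipsoid idea (not the paper's actual algorithm), here is a hedged sketch for linear scorers: if the Rashomon set is approximated by an ellipsoid of parameter vectors, the worst-case score of a candidate counterfactual over the whole ellipsoid has a closed form, so validity across every model in the set can be certified with one computation. All function names and the example numbers below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: certify a prediction across an ellipsoidal
# approximation {theta : (theta - c)^T A (theta - c) <= 1} of the
# Rashomon set, for linear scorers f_theta(x) = theta @ x.
# min over the ellipsoid of theta @ x is c @ x - sqrt(x^T A^{-1} x).

def worst_case_score(x, center, A):
    """Minimum of theta @ x over the ellipsoid (theta-c)^T A (theta-c) <= 1."""
    return center @ x - np.sqrt(x @ np.linalg.solve(A, x))

def is_valid_everywhere(x, center, A, threshold=0.0):
    """True if every model in the ellipsoid scores x above the threshold,
    i.e. the counterfactual's label is certified across the whole set."""
    return worst_case_score(x, center, A) > threshold

center = np.array([1.0, 0.5])   # ellipsoid center (a reference model)
A = np.eye(2) * 4.0             # shape matrix: radius 1/2 along each axis
x = np.array([2.0, 0.0])        # candidate counterfactual input

# worst case = 2.0 - sqrt(4 * 0.25) = 1.0 > 0, so the label holds set-wide
print(is_valid_everywhere(x, center, A))  # True
```

The closed form makes the robustness check a single linear solve rather than a search over models, which is consistent with the speed claim above.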
This paper introduces a framework to analyze the trustworthiness of near-optimal sparse decision trees from the Rashomon set, showing they can perform as well as models explicitly optimized for fairness, robustness, or privacy.