4 papers across 3 sessions
D-PDDM provably monitors model deterioration without requiring training data at deployment time, and performs well on real-world datasets.
This paper introduces models that prove their own correctness via an Interactive Proof, and shows how such models can be learned.
This paper introduces a framework for analyzing the trustworthiness of near-optimal sparse decision trees drawn from the Rashomon set, showing they can perform as well as models explicitly optimized for fairness, robustness, or privacy.