2 papers across 2 sessions
We introduce a representation learning framework that provides high-confidence fairness guarantees with controllable error thresholds and confidence levels via adversarial inference.
We propose a kernel-based equalized statistic to quantify the accuracy-fairness trade-off among independence-, separation-, and calibration-based fairness constraints, identifying the criterion best suited to preserving predictive accuracy.