PhD student, State University of New York at Stony Brook
1 paper at NeurIPS 2025
We propose RBD, a plug-in module that detects and corrects bias in LLM-based evaluations through structured reasoning, significantly improving accuracy, consistency, and scalability across multiple bias types and evaluator models.