Current evaluation of LLM-generated code is undermined by weak test cases. We propose SAGA, a method that leverages human expertise to generate stronger verifiers, and demonstrate it through our new CodeComPass benchmark and TCGCoder-7B model, enabling more reliable assessment.