We release an open, human-annotated preference dataset of 40,000 samples spanning General, STEM, Code, and Multilingual domains, which can be used to train state-of-the-art reward models on RM-Bench and JudgeBench.
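Preference data of this kind is typically consumed by training a reward model with a Bradley-Terry objective over (chosen, rejected) response pairs. A minimal sketch of that loss on scalar rewards (the function name and example values are illustrative, not taken from the dataset or its training recipe):

```python
import math

def bradley_terry_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the chosen response outranks the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger margin between chosen and rejected rewards yields a lower loss,
# so training pushes the model to separate preferred responses.
loss_separated = bradley_terry_loss(2.0, -1.0)  # well-separated pair
loss_tied = bradley_terry_loss(0.0, 0.0)        # indistinguishable pair
```

When the rewards are equal the loss is exactly log 2, the entropy of a coin flip; annotated preference pairs supply the supervision that drives this margin apart.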