Nash Learning from Human Feedback
1 paper across 1 session
Poster Session 5 (1 paper)
Friday, December 5, 2025 · 11:00 AM – 2:00 PM
Exhibit Hall C, D, E
Distortion of AI Alignment: Does Preference Optimization Optimize for Preferences?
#1206 · Paul Gölz, Nika Haghtalab, Kunhe Yang