LLMs, like humans, show a utilitarian boost in moral judgment when reasoning in groups, but the effect is driven by distinct, model-specific mechanisms, raising key considerations for multi-agent alignment and moral reasoning.