Poster Session 3 · Thursday, December 4, 2025 11:00 AM → 2:00 PM
#406
Scalable Neural Incentive Design with Parameterized Mean-Field Approximation
Abstract
Designing incentives that steer a multi-agent system toward a desirable Nash equilibrium is a crucial and challenging problem in many decision-making domains, especially when the number of agents is large.
Under an exchangeability assumption, we formalize this incentive design (ID) problem as a parameterized mean-field game (PMFG), reducing complexity via the infinite-population limit. We first show that when the dynamics and rewards are Lipschitz, the finite-N ID objective is approximated by the PMFG objective at a rate that vanishes as N grows. Moreover, beyond the Lipschitz-continuous setting, we prove the same decay for the important special case of sequential auctions, despite discontinuities in the dynamics, through a tailored auction-specific analysis.
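The abstract elides the exact rate. Mean-field approximation guarantees of this kind are typically stated in the following form; the constant C and the precise exponent below are assumptions for illustration, not taken from the paper:

\[
\bigl|\, J_N(\theta) \;-\; J_{\mathrm{PMFG}}(\theta) \,\bigr| \;\le\; \frac{C}{\sqrt{N}},
\]

where \(J_N(\theta)\) denotes the designer's objective in the N-agent game under incentive parameters \(\theta\), and \(J_{\mathrm{PMFG}}(\theta)\) its infinite-population (PMFG) counterpart.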
Building on these approximation results, we introduce the Adjoint Mean-Field Incentive Design (AMID) algorithm, which differentiates explicitly through iterated equilibrium operators to compute gradients efficiently. By uniting approximation bounds with optimization guarantees, AMID delivers a powerful, scalable algorithmic tool for many-agent (large-N) ID.
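The core idea of differentiating through iterated equilibrium operators can be sketched in a few lines. The toy operator T below (a tanh contraction), the matrix A, and the objective J are illustrative assumptions, not the paper's model; the sketch only shows the adjoint recursion: unroll K applications of the operator forward, then propagate the adjoint backward, accumulating the gradient with respect to the incentive parameters theta.

```python
import numpy as np

# Hypothetical toy setup (not the paper's model): a smooth "equilibrium
# operator" T(x, theta) whose fixed point stands in for the mean-field
# equilibrium induced by incentive parameters theta.
rng = np.random.default_rng(0)
n = 4
A = 0.3 * rng.standard_normal((n, n))  # contraction so iteration converges

def T(x, theta):
    """One application of the (toy) equilibrium operator."""
    return np.tanh(A @ x + theta)

def amid_style_gradient(theta, K=50):
    """Adjoint-mode gradient of J(x_K(theta)) = sum(x_K) through K iterations of T."""
    xs = [np.zeros(n)]
    for _ in range(K):                        # forward pass: iterate the operator
        xs.append(T(xs[-1], theta))
    lam = np.ones(n)                          # dJ/dx_K for J(x) = sum(x)
    grad = np.zeros(n)
    for k in range(K - 1, -1, -1):            # backward pass: adjoint recursion
        pre = A @ xs[k] + theta
        D = np.diag(1.0 - np.tanh(pre) ** 2)  # Jacobian of tanh at this step
        grad += D @ lam                       # dT/dtheta = D (theta enters additively)
        lam = A.T @ (D @ lam)                 # (dT/dx)^T lam, with dT/dx = D @ A
    return xs[-1], grad
```

The backward pass costs one matrix-vector product per iteration, so the gradient is obtained at roughly the cost of the forward equilibrium computation itself, which is what makes explicit differentiation of the iterated operator scale.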
Across diverse auction settings, AMID substantially increases revenue over first-price formats and outperforms existing benchmark methods.