Poster Session 5 · Friday, December 5, 2025 11:00 AM → 2:00 PM
#5410

Effective Policy Learning for Multi-Agent Online Coordination Beyond Submodular Objectives

NeurIPS OpenReview

Abstract

In this paper, we present two effective policy learning algorithms for the multi-agent online coordination (MA-OC) problem. The first one, MA-SPL, not only achieves the optimal $\frac{1}{c}(1-e^{-c})$-approximation guarantee for the MA-OC problem with submodular objectives but also can handle the previously unexplored $\alpha$-weakly DR-submodular and $(\gamma,\beta)$-weakly submodular scenarios, where $c$ is the curvature of the investigated submodular functions, $\alpha$ denotes the diminishing-return (DR) ratio and the tuple $(\gamma,\beta)$ represents the submodularity ratios.
Subsequently, to reduce the reliance on the unknown parameters inherent in the MA-SPL algorithm, we introduce a second online algorithm named MA-MPL. This MA-MPL algorithm is entirely parameter-free while maintaining the same approximation ratio as the first MA-SPL algorithm.
The core of our MA-SPL and MA-MPL algorithms is a novel continuous-relaxation technique termed the policy-based continuous extension. Compared with the well-established multilinear extension, a notable advantage of this new policy-based continuous extension is its ability to provide a lossless rounding scheme for any set function, thereby enabling us to tackle the challenging weakly submodular objective functions.
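For contrast with the paper's policy-based continuous extension (whose construction is not reproduced here), the following is a minimal illustrative sketch, not taken from the paper, of the classical multilinear extension $F(x)=\mathbb{E}_{S\sim x}[f(S)]$, estimated by Monte-Carlo sampling: each ground element $i$ is included in a random set $S$ independently with probability $x_i$. The coverage objective `f` and the helper name `multilinear_extension` are assumptions for illustration only.

```python
import random

def multilinear_extension(f, x, n_samples=2000, seed=0):
    """Monte-Carlo estimate of the multilinear extension F(x) = E[f(S)],
    where element i enters the random set S independently w.p. x[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        S = {i for i, p in enumerate(x) if rng.random() < p}
        total += f(S)
    return total / n_samples

# Illustrative coverage-style submodular objective: f(S) = |union of chosen sets|
ground = [{0, 1}, {1, 2}, {3}]
f = lambda S: len(set().union(*(ground[i] for i in S)))

# At an integral point the extension is lossless: x = (1, 0, 1) recovers f({0, 2}) = 3
est = multilinear_extension(f, [1.0, 0.0, 1.0])
```

Note that for general (non-submodular) set functions, rounding a fractional $x$ back to a set can lose value under the multilinear extension; the abstract's claim is that the policy-based extension avoids this loss for any set function.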
Finally, extensive simulations are conducted to demonstrate the effectiveness of our proposed algorithms.