Learning to Act and Cooperate for Distributed Black-Box Consensus Optimization
Abstract
A trajectory-driven framework uses large language models to guide agent behavior and cooperation patterns in distributed black-box consensus optimization, improving solution quality and efficiency.
Distributed black-box consensus optimization is a fundamental problem in multi-agent systems, where agents must improve a global objective using only local objective queries and limited neighbor communication. Existing methods largely rely on handcrafted update rules and static cooperation patterns, which often struggle to balance local adaptation, global coordination, and communication efficiency in heterogeneous nonconvex environments. In this paper, we take an initial step toward trajectory-driven self-design for distributed black-box consensus optimization. We first redesign the agent-level swarm dynamics with an adaptive internal mechanism tailored to decentralized consensus settings, improving the balance between exploration, convergence, and local escape. Built on top of this adaptive execution layer, we propose Learning to Act and Cooperate (LAC-MAS), a trajectory-driven framework in which large language models provide sparse high-level guidance for shaping both agent-internal action behaviors and agent-external cooperation patterns from historical optimization trajectories. We further introduce a phased cognitive scheduling strategy to activate different forms of adaptation in a resource-aware manner. Experiments on standard distributed black-box benchmarks and real-world distributed tasks show that LAC-MAS consistently improves solution quality, convergence efficiency, and communication efficiency over strong baselines, suggesting a practical route from handcrafted distributed coordination toward self-designing multi-agent optimization systems.
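To make the problem setting concrete, the following is a minimal illustrative sketch of distributed black-box consensus optimization, not the paper's LAC-MAS method: each agent queries only its own local objective through a zeroth-order two-point estimator, takes a local descent step, and then gossip-averages its iterate with its ring neighbors. All function names, the ring topology, and all parameters here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_local_objectives(n_agents, dim):
    """Heterogeneous local quadratics; the global objective is their average."""
    centers = rng.normal(size=(n_agents, dim))
    return [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]

def consensus_blackbox(n_agents=4, dim=3, steps=200, sigma=0.1, lr=0.5):
    fs = make_local_objectives(n_agents, dim)
    X = rng.normal(size=(n_agents, dim))  # one iterate per agent

    # Doubly stochastic mixing matrix for a ring: each agent talks
    # only to its two neighbors (limited communication).
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = 0.25
        W[i, (i + 1) % n_agents] = 0.25

    for _ in range(steps):
        for i in range(n_agents):
            # Local action: two-point random-direction gradient estimate
            # built from black-box queries of the local objective only.
            u = rng.normal(size=dim)
            g = (fs[i](X[i] + sigma * u) - fs[i](X[i] - sigma * u)) / (2 * sigma) * u
            X[i] = X[i] - (lr / dim) * g
        # Cooperation: gossip averaging with ring neighbors only.
        X = W @ X

    x_bar = X.mean(axis=0)  # consensus estimate
    f_bar = sum(f(x_bar) for f in fs) / n_agents
    return x_bar, f_bar

x_bar, f_bar = consensus_blackbox()
print(f"global objective at consensus point: {f_bar:.3f}")
```

Handcrafted rules like the fixed step size, perturbation scale, and static ring topology above are exactly the design choices the paper proposes to adapt from optimization trajectories rather than fix in advance.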
Community
Studied consensus-based black-box optimization from a learning perspective and proposed LAC-MAS, an LLM-assisted multi-agent framework that jointly learns how agents act and how they cooperate. By introducing adaptive regulation of agent-internal behaviors and agent-external coordination, and orchestrating their interaction through phased cognitive guidance, LAC-MAS effectively balances exploration, convergence, and communication efficiency.
Extensive benchmark experiments and ablation studies demonstrate that learning to act and learning to cooperate play complementary roles in distributed optimization. Internal behavioral learning improves solution quality and escape capability, while cooperative learning accelerates consensus formation and reduces communication cost. Their coordinated integration leads to stable and consistently strong performance across diverse problem landscapes.
Work focuses on improving the efficiency and robustness of distributed black-box optimization in multi-agent systems. Potential applications include cooperative sensing, resource allocation, and distributed control, which may contribute to more efficient and resilient large-scale systems. The proposed framework does not involve human subjects or personal data and is not expected to introduce significant ethical or societal risks beyond those common to general-purpose optimization technologies.