TY - JOUR
T1 - Simulation-Based Distributed Coordination Maximization over Networks
AU - Jang, Hyeryung
AU - Shin, Jinwoo
AU - Yi, Yung
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - In various online/offline multiagent networked environments, it is common for the system to benefit from coordinating the actions of two interacting agents, at some cost of coordination. In this paper, we first formulate an optimization problem that captures the coordination gain against the cost of node activation over networks. This problem is challenging to solve in a distributed manner, since the target gain is a function of the long-term fraction of time of the coupled activations of two adjacent nodes, and thus standard Lagrange duality theory is hard to apply to obtain a distributed decomposition, as in standard network utility maximization. We propose three simulation-based distributed algorithms, each with different update rules, all of which require only one-hop message passing and locally observed information. The key to achieving distributedness is a stochastic approximation method that runs an incomplete Markov chain simulation over time, yet provably converges to the optimal solution. Next, we provide a game-theoretic framework that interprets our proposed algorithms from a different perspective. We deliberately design the payoff function so that the game's Nash equilibrium asymptotically coincides with the socially optimal point, leading to no price of anarchy. We show that two stochastically approximated variants of standard game-learning dynamics coincide with two of the algorithms developed from the optimization perspective. Finally, we demonstrate our theoretical findings on convergence, optimality, and further features, such as the tradeoff between efficiency and convergence speed, through extensive simulations.
AB - In various online/offline multiagent networked environments, it is common for the system to benefit from coordinating the actions of two interacting agents, at some cost of coordination. In this paper, we first formulate an optimization problem that captures the coordination gain against the cost of node activation over networks. This problem is challenging to solve in a distributed manner, since the target gain is a function of the long-term fraction of time of the coupled activations of two adjacent nodes, and thus standard Lagrange duality theory is hard to apply to obtain a distributed decomposition, as in standard network utility maximization. We propose three simulation-based distributed algorithms, each with different update rules, all of which require only one-hop message passing and locally observed information. The key to achieving distributedness is a stochastic approximation method that runs an incomplete Markov chain simulation over time, yet provably converges to the optimal solution. Next, we provide a game-theoretic framework that interprets our proposed algorithms from a different perspective. We deliberately design the payoff function so that the game's Nash equilibrium asymptotically coincides with the socially optimal point, leading to no price of anarchy. We show that two stochastically approximated variants of standard game-learning dynamics coincide with two of the algorithms developed from the optimization perspective. Finally, we demonstrate our theoretical findings on convergence, optimality, and further features, such as the tradeoff between efficiency and convergence speed, through extensive simulations.
KW - Distributed algorithms/control
KW - game theory
KW - optimization
KW - stochastic approximation
UR - http://www.scopus.com/inward/record.url?scp=85054420526&partnerID=8YFLogxK
U2 - 10.1109/TCNS.2018.2873162
DO - 10.1109/TCNS.2018.2873162
M3 - Article
AN - SCOPUS:85054420526
SN - 2325-5870
VL - 6
SP - 713
EP - 726
JO - IEEE Transactions on Control of Network Systems
JF - IEEE Transactions on Control of Network Systems
IS - 2
M1 - 8476997
ER -