TY - JOUR
T1 - Advanced double layered multi-agent systems based on A3C in real-time path planning
AU - Lee, Dajeong
AU - Kim, Junoh
AU - Cho, Kyungeun
AU - Sung, Yunsick
N1 - Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - In this paper, we propose an advanced double layered multi-agent system that reduces learning time by expressing the state space as a 2D grid. The system is based on the asynchronous advantage actor-critic (A3C) algorithm and reduces the state space that agents must consider by expressing the 2D grid space hierarchically and determining actions accordingly. Specifically, the state space is expressed in an upper and a lower layer. Based on the learning results obtained with A3C in the lower layer, the upper layer makes decisions without additional learning, thereby reducing the total learning time. The proposed method was verified experimentally using a virtual autonomous surface vehicle simulator. It reduced the learning time required to reach a 90% goal achievement rate by 7.1% compared with the conventional double layered A3C. In addition, the goal achievement rate of the proposed method was 18.86% higher than that of the conventional double layered A3C over 20,000 learning episodes.
AB - In this paper, we propose an advanced double layered multi-agent system that reduces learning time by expressing the state space as a 2D grid. The system is based on the asynchronous advantage actor-critic (A3C) algorithm and reduces the state space that agents must consider by expressing the 2D grid space hierarchically and determining actions accordingly. Specifically, the state space is expressed in an upper and a lower layer. Based on the learning results obtained with A3C in the lower layer, the upper layer makes decisions without additional learning, thereby reducing the total learning time. The proposed method was verified experimentally using a virtual autonomous surface vehicle simulator. It reduced the learning time required to reach a 90% goal achievement rate by 7.1% compared with the conventional double layered A3C. In addition, the goal achievement rate of the proposed method was 18.86% higher than that of the conventional double layered A3C over 20,000 learning episodes.
KW - Asynchronous advantage actor-critic (A3C)
KW - Multi-agent system
KW - Simulation framework
UR - http://www.scopus.com/inward/record.url?scp=85118895771&partnerID=8YFLogxK
U2 - 10.3390/electronics10222762
DO - 10.3390/electronics10222762
M3 - Article
AN - SCOPUS:85118895771
SN - 2079-9292
VL - 10
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 22
M1 - 2762
ER -