Integration of Q-learning and behavior network approach with hierarchical task network planning for dynamic environments

Research output: Contribution to journal › Article › peer-review

Abstract

The problem of automated planning by diverse virtual agents, cooperating or acting independently, in a virtual environment is commonly resolved by using hierarchical task network (HTN) planning, Q-learning, and the behavior network approach. Each agent must plan its tasks in consideration of the movements of other agents to achieve its goals. HTN planning involves decomposing goal tasks into primitive and compound tasks. However, the time required to perform this decomposition drastically increases with the number of virtual agents and with substantial changes in the environment. This can be addressed by combining HTN planning with Q-learning. However, dynamic changes in the environment can still prevent planned primitive tasks from being performed. Thus, to increase the goal achievement probability, an approach to adapt to dynamic environments is required. This paper proposes the use of the behavior network approach as well. The proposed integrated approach was applied to a racing car simulation in which a virtual agent selected and executed sequential actions in real time. Compared with traditional HTN planning, the proposed method improved results by approximately 142%. This verifies that the proposed method can execute primitive tasks while accounting for a dynamic environment.
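The abstract's central idea, using Q-learning to guide which HTN decomposition method an agent applies, can be sketched as follows. This is a minimal illustration under invented assumptions, not the paper's implementation: the racing-domain task names, the method table, and the reward update rule are all hypothetical, chosen only to show how a Q-table can rank competing decompositions of a compound task.

```python
import random

# Hypothetical HTN domain: a compound task maps to candidate decomposition
# methods; subtasks prefixed with '!' are primitive (directly executable).
METHODS = {
    "race": [["!accelerate", "overtake"], ["!accelerate", "!keep_lane"]],
    "overtake": [["!steer_left", "!accelerate", "!steer_right"],
                 ["!brake", "!steer_right"]],
}

Q = {}              # (task, method_index) -> estimated value
ALPHA, EPS = 0.5, 0.2

def choose_method(task):
    """Epsilon-greedy choice among a compound task's decomposition methods."""
    n = len(METHODS[task])
    if random.random() < EPS:
        return random.randrange(n)
    return max(range(n), key=lambda m: Q.get((task, m), 0.0))

def decompose(task, plan):
    """Recursively decompose a task until only primitive tasks remain."""
    if task.startswith("!"):        # primitive: append for execution
        plan.append(task)
        return
    m = choose_method(task)
    for sub in METHODS[task][m]:
        decompose(sub, plan)

def update(task, m, reward):
    """Reinforce methods whose plans succeeded in the (dynamic) environment."""
    key = (task, m)
    Q[key] = Q.get(key, 0.0) + ALPHA * (reward - Q.get(key, 0.0))

plan = []
decompose("race", plan)
print(plan)         # e.g. ['!accelerate', '!steer_left', ...]
```

In the paper's setting, a behavior network would additionally re-select primitive actions at execution time when the environment changes; here only the Q-guided decomposition step is shown.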

Original language: English
Pages (from-to): 2079-2090
Number of pages: 12
Journal: Information (Japan)
Volume: 15
Issue number: 5
State: Published - May 2012

Keywords

  • Behavior network
  • Hierarchical task network
  • Q-learning
