Dynamic Action Space Handling Method for Reinforcement Learning Models

Sangchul Woo, Yunsick Sung

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, extensive studies have been conducted on applying deep learning to reinforcement learning to address the state-space problem. If the state-space problem were solved, reinforcement learning would become applicable in a wide range of fields. For example, users could learn how to dance with a dance-tutorial system by watching and imitating a virtual instructor, where the instructor performs the optimal dance to the music using reinforcement learning. In this study, we propose a reinforcement learning method in which the action space is adjusted dynamically. Because actions that are never performed, or that are unlikely to be optimal, are neither learned nor allocated state-space entries, the learning time is shortened and the state space is reduced. In an experiment, the proposed method achieves results similar to those of traditional Q-learning even when its state space is reduced to approximately 0.33% of that of Q-learning: traditional Q-learning requires 6 million state-space entries over 100,000 learning iterations, whereas the proposed method requires only 20,000 (20,000 / 6,000,000 ≈ 0.33%). Consequently, a higher winning rate can be achieved in less time by searching 20,000 entries instead of 6 million, reducing the cost and time required for learning.
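To make the idea concrete, the following is a minimal sketch in Python of Q-learning with a dynamically pruned action space, in the spirit of the abstract; it is not the paper's implementation. The toy chain environment, the thresholds PRUNE_VISITS and PRUNE_MARGIN, and all hyperparameter values are illustrative assumptions. States are allocated lazily, so unvisited states consume no memory, and actions whose Q-values fall well below the per-state best are pruned and never updated again.

```python
import random
from collections import defaultdict

# Illustrative sketch only: Q-learning with a dynamically pruned action space.
# The environment, thresholds, and hyperparameters below are assumptions,
# not the method described in the paper.

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration
PRUNE_VISITS = 50                    # assumed: visits before a state may be pruned
PRUNE_MARGIN = 0.05                  # assumed: Q-value gap that removes an action

ALL_ACTIONS = [0, 1, 2, 3]           # action 1 moves right; the rest move left

# Lazy allocation: only visited states ever receive Q-entries or action sets.
q = defaultdict(lambda: {a: 0.0 for a in ALL_ACTIONS})
allowed = defaultdict(lambda: list(ALL_ACTIONS))
visits = defaultdict(int)

def choose(state):
    """Epsilon-greedy over the state's currently allowed actions."""
    acts = allowed[state]
    if random.random() < EPS:
        return random.choice(acts)
    best = max(q[state][a] for a in acts)
    return random.choice([a for a in acts if q[state][a] == best])

def update(s, a, r, s2, done):
    """Standard Q-learning update, then prune clearly dominated actions."""
    best_next = 0.0 if done else max(q[s2][b] for b in allowed[s2])
    q[s][a] += ALPHA * (r + GAMMA * best_next - q[s][a])
    visits[s] += 1
    if visits[s] >= PRUNE_VISITS and len(allowed[s]) > 1:
        top = max(q[s][b] for b in allowed[s])
        # Pruned actions are never reconsidered -- a simplification.
        allowed[s] = [b for b in allowed[s] if q[s][b] >= top - PRUNE_MARGIN]

def step(state, action):
    """Toy 1-D chain (assumed): reach state 5 for a reward of 1."""
    nxt = max(0, min(5, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

for _ in range(2000):
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        update(s, a, r, s2, done)
        s = s2

pruned = sum(len(ALL_ACTIONS) - len(allowed[s]) for s in allowed)
print(f"allocated states: {len(q)}, pruned actions: {pruned}")
```

In this sketch the defaultdict mirrors the abstract's claim that state-space entries are only allocated for states actually encountered, and the per-state allowed-action list shrinks as clearly suboptimal actions are identified, so later updates and lookups touch fewer entries.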

Original language: English
Pages (from-to): 1223-1230
Number of pages: 8
Journal: Journal of Information Processing Systems
Volume: 16
Issue number: 5
DOIs
State: Published - 2020

Keywords

  • Dance Tutorial System
  • Q-Learning
  • Reinforcement Learning
  • Virtual Tutor
