Abstract
Because robots can move toward humans and provide services at any location, a pervasive sensing environment can deliver diverse services through robots regardless of where humans are located. To provide such services, robots need to learn accurate motor primitives such as walking and grasping objects. However, learning motor primitives in a pervasive sensing environment is very time-consuming. Several previous studies have considered robots learning motor primitives and interacting with humans in virtual environments. When a robot learns motor primitives from observation, a drawback is that motor primitives the robot cannot observe cannot be defined. In this paper, we develop a novel interaction learning approach based on a virtual environment. Motor primitives are defined by manipulating the robot directly through demonstration-based learning. In addition, the robot applies Q-learning to learn interactions with humans. In our experiments, the proposed method generated motor primitives intuitively, and in one experiment the amount of movement required of a virtual human was reduced by about 25% after the generated motor primitives were applied.
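The abstract states that the robot applies Q-learning to learn interactions with humans but gives no implementation details. The following is a minimal sketch of tabular Q-learning under assumptions of our own: a discretized state (e.g., the relative position of the virtual human) and a small action set of learned motor primitives. All names, parameters, and rewards below are illustrative, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumed; not taken from the paper).
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

# Hypothetical action set: motor primitives the robot can execute.
ACTIONS = ["walk_forward", "turn_left", "turn_right", "grasp_object"]

Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def choose_action(state):
    """Epsilon-greedy selection over the motor-primitive actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha * (target - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```

In a setup like this, the state encoding, reward signal (e.g., reduced movement required of the virtual human), and action granularity would all come from the paper's virtual environment; the sketch only shows the generic update rule.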
Original language | English |
---|---|
Article number | 782043 |
Journal | International Journal of Distributed Sensor Networks |
Volume | 2013 |
DOIs | |
State | Published - 2013 |