Real-time Video Prediction Using GANs With Guidance Information for Time-delayed Robot Teleoperation

Kang Il Yoon, Dae Kwan Ko, Soo Chul Lim

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

A deep-learning method for real-time video prediction is proposed that overcomes delays in the transmission of visual information in teleoperation. The proposed method predicts the real-time video frame from a delayed image using guidance information (the current master position and the delayed interaction force) transmitted from the robot. To predict accurate and realistic video frames, adversarial training is introduced. The generator in the GAN is composed of image encoders, a guidance-information embedder, and prediction decoders. To create the training data set, three experimenters remotely operated robots that gripped, picked up, and moved nine objects. Numerical results and predicted images are presented, verifying that the master position and the interaction force can be used effectively to predict the current video frame. The proposed method can reduce time-delay problems in teleoperation systems.
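The generator architecture described in the abstract (image encoders, a guidance-information embedder, and prediction decoders) can be illustrated with a minimal sketch. This is a hypothetical PyTorch implementation, not the authors' code: the layer sizes, the 9-dimensional guidance vector, and the additive conditioning are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' implementation): a conditional
# generator that encodes a delayed frame, embeds guidance information
# (master position + delayed interaction force), and decodes a predicted
# current frame, following the structure described in the abstract.
import torch
import torch.nn as nn

class GuidedGenerator(nn.Module):
    def __init__(self, guidance_dim=9, feat_ch=32):
        super().__init__()
        # Image encoder: delayed RGB frame -> feature map (64x64 -> 16x16)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Guidance embedder: position + force vector -> per-channel bias
        self.embed = nn.Sequential(nn.Linear(guidance_dim, feat_ch), nn.ReLU())
        # Prediction decoder: conditioned features -> predicted frame
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, delayed_frame, guidance):
        f = self.encoder(delayed_frame)
        # Broadcast the guidance embedding over the spatial dimensions
        g = self.embed(guidance)[:, :, None, None]
        return self.decoder(f + g)

gen = GuidedGenerator()
frame = torch.randn(1, 3, 64, 64)   # delayed camera frame
guidance = torch.randn(1, 9)        # e.g. 6-DoF master pose + 3-axis force
pred = gen(frame, guidance)
print(pred.shape)  # torch.Size([1, 3, 64, 64])
```

In adversarial training, this generator's output would be scored by a discriminator against the true current frame; the additive feature conditioning used here is one common way to inject non-image context, chosen for brevity.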

Original language: English
Pages (from-to): 2387-2397
Number of pages: 11
Journal: International Journal of Control, Automation and Systems
Volume: 21
Issue number: 7
DOIs
State: Published - Jul 2023

Keywords

  • Deep learning
  • teleoperation systems
  • time-delays
  • video prediction
