Abstract
Teledriving could serve as a practical solution for handling unforeseen situations in autonomous driving. However, the latency of transmission networks remains a prominent concern. Despite advancements such as 5G networks, delays in remote driving scenes cannot be entirely eliminated, potentially leading to incidents. A few attempts have been made to address this issue by predicting future driving scenes, but they have been limited in their ability to foresee clear and relevant driving scenarios. This study presents a method to predict a latency-free future driving scene. Unlike prior approaches, our method feeds the command signal of the remote driver into the prediction network, in addition to the past driving video frames and vehicle status. As a result, we can accurately predict relevant and clear latency-free future driving scenes. A deep neural network combining convolutional long short-term memory (ConvLSTM) and a generative adversarial network (GAN) predicts the future driving scene according to the latency. The dataset used to train the network was gathered from on-road teledriving experiments, with a maximum vehicle speed of 53 km/h and a driving route approximately 1.3 km long. The proposed method can predict the future driving scene up to 0.5 s ahead, surpassing both baseline video prediction methods and a variant that does not use the input command of the driver.
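The paper itself does not include code; the following is a minimal PyTorch sketch of the kind of command-conditioned ConvLSTM predictor the abstract describes. All module names, channel sizes, and the channel-broadcast scheme for injecting the driver command and vehicle status are illustrative assumptions, and the adversarial (GAN) training component is omitted.

```python
# A hypothetical sketch (not the authors' code) of a ConvLSTM frame predictor
# conditioned on past frames plus a per-frame driver-command/vehicle-status
# vector, assuming PyTorch.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: a convolutional analogue of the LSTM cell."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces the input, forget, output, and cell gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class CommandConditionedPredictor(nn.Module):
    """Predicts the next frame from past frames and command/status signals."""

    def __init__(self, img_ch=3, hid_ch=32, cond_dim=4):
        super().__init__()
        # Assumed conditioning scheme: broadcast the low-dimensional condition
        # vector (e.g. steering, throttle, brake, speed) over the spatial grid
        # and append it to the input frame as extra channels.
        self.cell = ConvLSTMCell(img_ch + cond_dim, hid_ch)
        self.to_frame = nn.Conv2d(hid_ch, img_ch, 1)

    def forward(self, frames, cond):
        # frames: (B, T, C, H, W); cond: (B, T, cond_dim)
        B, T, C, H, W = frames.shape
        h = frames.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):
            cond_map = cond[:, t, :, None, None].expand(-1, -1, H, W)
            h, c = self.cell(torch.cat([frames[:, t], cond_map], dim=1), (h, c))
        return torch.sigmoid(self.to_frame(h))  # predicted next frame


if __name__ == "__main__":
    model = CommandConditionedPredictor()
    frames = torch.rand(2, 5, 3, 64, 64)   # five past frames
    cond = torch.rand(2, 5, 4)             # per-frame command/status vector
    print(model(frames, cond).shape)       # torch.Size([2, 3, 64, 64])
```

In a full GAN setup of the sort the abstract names, this generator would be trained against a discriminator that scores predicted frames for realism, alongside a pixel-level reconstruction loss; iterating the predictor on its own outputs would extend the horizon toward the reported 0.5 s.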
| Original language | English |
| --- | --- |
| Pages (from-to) | 16676-16686 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Volume | 25 |
| Issue number | 11 |
| DOIs | |
| State | Published - 2024 |
Keywords
- autonomous vehicles
- future video prediction
- GAN
- remote driving
- teleoperated driving
- teleoperation