Action Recognition in Videos Using Pre-Trained 2D Convolutional Neural Networks

Jun Hwa Kim, Chee Sun Won

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

A pre-trained 2D CNN (Convolutional Neural Network) can be used for the spatial stream in the two-stream CNN structure for videos, treating a representative frame selected from the video as the input. However, the CNN for the temporal stream in the two-stream CNN needs training from scratch using the optical flow frames, which demands expensive computations. In this paper, we propose to adopt a pre-trained 2D CNN for the temporal stream to avoid the optical flow computations. Specifically, three RGB frames selected at three different times in the video sequence are converted into grayscale images and assigned to the R (red), G (green), and B (blue) channels, respectively, to form a Stacked Grayscale 3-channel Image (SG3I). Then, the pre-trained 2D CNN is fine-tuned on SG3Is for the temporal stream. Therefore, only pre-trained 2D CNNs are used for both the spatial and temporal streams. To learn long-range temporal motions in videos, multiple SG3Is can be used by partitioning the video shot into sub-shots and generating a single SG3I for each sub-shot. Experimental results show that our two-stream CNN with the proposed SG3Is is about 14.6 times faster than the first version of the two-stream CNN with optical flow, and yet achieves similar recognition accuracy on UCF-101 and a 5.7% better result on HMDB-51.
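The SG3I construction described above is simple enough to sketch in code. The following is a minimal illustration, assuming frames are loaded with OpenCV as BGR NumPy arrays; the helper names build_sg3i and sg3is_for_shot are illustrative, not from the paper.

    import cv2
    import numpy as np

    def build_sg3i(frames):
        """Stack three grayscale frames into one 3-channel image (SG3I)."""
        assert len(frames) == 3
        gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
        # One grayscale frame per channel, in R, G, B order as described
        # in the abstract; the result is an ordinary H x W x 3 image.
        return np.stack(gray, axis=-1)

    def sg3is_for_shot(frames, num_subshots=1):
        """Partition a shot into sub-shots and build one SG3I per sub-shot."""
        sg3is = []
        for sub in np.array_split(np.asarray(frames), num_subshots):
            # Pick three frames spread evenly across the sub-shot
            # (assumes each sub-shot contains at least one frame).
            idx = np.linspace(0, len(sub) - 1, 3).astype(int)
            sg3is.append(build_sg3i([sub[i] for i in idx]))
        return sg3is

Because each SG3I is an ordinary 3-channel image, it can be fed directly to a pre-trained 2D CNN (e.g., an ImageNet-trained model) and fine-tuned as the temporal stream, just as a regular RGB frame feeds the spatial stream.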

Original language: English
Article number: 9047853
Pages (from-to): 60179-60188
Number of pages: 10
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 2020

Keywords

  • action recognition
  • convolutional neural network (CNN)
  • two-stream convolutional neural networks
  • video analysis
