Traffic Accident Detection Using Background Subtraction and CNN Encoder–Transformer Decoder in Video Frames

Yihang Zhang, Yunsick Sung

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Artificial intelligence plays a significant role in traffic-accident detection. Traffic accidents involve a cascade of inadvertent events, which makes them challenging for traditional detection approaches. For instance, Convolutional Neural Network (CNN)-based approaches cannot analyze the temporal relationships among objects, while Recurrent Neural Network (RNN)-based approaches suffer from low processing speeds and cannot analyze multiple frames in parallel. Furthermore, these networks fail to suppress background interference in the input video frames. This paper proposes a framework that first subtracts the background based on You Only Look Once (YOLOv5), adaptively reducing background interference during object detection. Subsequently, a CNN encoder and a Transformer decoder are combined into an end-to-end model that extracts spatial and temporal features across different time points, allowing input video frames to be analyzed in parallel. The proposed framework was evaluated on the Car Crash Dataset through a series of comparison and ablation experiments. Benchmarked against three accident-detection models, the proposed framework demonstrated a superior accuracy of approximately 96%. The ablation experiments indicate that omitting background subtraction from the framework reduced all evaluation indicators by approximately 3%.
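The pipeline described above, a per-frame CNN encoder whose features are consumed by a Transformer decoder that attends over all frames in parallel, could be sketched roughly as follows. This is a minimal PyTorch sketch under stated assumptions: the `AccidentDetector` name, all layer sizes, and the single learned-query classification head are illustrative choices, not the paper's actual design, and the YOLOv5 background-subtraction stage is omitted.

```python
# Hypothetical sketch of a CNN encoder + Transformer decoder accident classifier.
# All architecture details are assumptions; only the overall structure follows the abstract.
import torch
import torch.nn as nn

class AccidentDetector(nn.Module):
    def __init__(self, d_model=128, num_classes=2):
        super().__init__()
        # CNN encoder: extracts a spatial feature vector from each frame
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Transformer decoder: models temporal relations across frame features in parallel
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.query = nn.Parameter(torch.zeros(1, 1, d_model))  # learned classification query
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        # Encode all frames at once by folding time into the batch dimension
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # (batch, time, d_model)
        q = self.query.expand(b, -1, -1)
        out = self.decoder(q, feats)      # query attends over all frame features
        return self.head(out.squeeze(1))  # (batch, num_classes) accident logits
```

Because the decoder attends over the whole feature sequence at once, the temporal analysis is parallel across frames, unlike an RNN that must process frames sequentially.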

Original language: English
Article number: 2884
Journal: Mathematics
Volume: 11
Issue number: 13
DOIs
State: Published - Jul 2023

Keywords

  • artificial intelligence
  • background subtraction
  • CNN encoder
  • deep learning
  • traffic-accident detection
  • Transformer decoder

