De-hazing CCTV Images using Dark Channel Prior for Improved Vehicle Detection

Ershang Tian, Juntae Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Recent advancements in artificial intelligence have led to significant improvements in object detection. Researchers have focused on enhancing object detection performance in challenging environments, as this has the potential to broaden its practical applications. Deep learning has been successful in image classification and object detection and has a wide range of applications, including vehicle detection. However, vehicle detection models trained on high-quality images often struggle under adverse weather conditions such as fog and rain. In this paper, we propose an improved vehicle detection method using Faster R-CNN with a dark channel prior (DCP). The proposed method first preprocesses the image using DCP and then performs vehicle detection on the preprocessed image using Faster R-CNN. The method is shown to improve the effectiveness of vehicle detection.
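
Since the abstract summarizes the pipeline (DCP de-hazing followed by Faster R-CNN detection), a minimal sketch of the de-hazing step may help illustrate it. The sketch below assumes the standard He et al. DCP formulation; the patch size, omega (haze retention factor), and t0 (transmission floor) are conventional defaults rather than values reported in this paper, and all function names are illustrative.

    # Minimal sketch of Dark Channel Prior (DCP) de-hazing, assuming the
    # standard He et al. formulation. Patch size, omega, and t0 are
    # conventional defaults, not parameters from the paper.
    import cv2
    import numpy as np

    def dark_channel(img, patch=15):
        # Per-pixel minimum over the color channels, then a local minimum
        # filter (morphological erosion) over a patch x patch window.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        return cv2.erode(img.min(axis=2), kernel)

    def atmospheric_light(img, dark):
        # Average the pixels behind the brightest 0.1% of the dark channel.
        n = max(1, int(dark.size * 0.001))
        rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
        return img[rows, cols].mean(axis=0)

    def dehaze(bgr, omega=0.95, t0=0.1, patch=15):
        img = bgr.astype(np.float64) / 255.0
        A = atmospheric_light(img, dark_channel(img, patch))
        # Transmission estimate: t = 1 - omega * dark_channel(I / A)
        t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)
        # Scene radiance recovery: J = (I - A) / t + A
        J = (img - A) / t[..., None] + A
        return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)

The de-hazed frame would then be passed to a Faster R-CNN vehicle detector (for example, a torchvision fasterrcnn_resnet50_fpn model fine-tuned on vehicle classes); the page does not specify the detector's backbone or training data, so that pairing is an assumption.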

Original language: English
Title of host publication: ICIIT 2023 - Proceedings of 2023 8th International Conference on Intelligent Information Technology
Publisher: Association for Computing Machinery
Pages: 152-156
Number of pages: 5
ISBN (Electronic): 9781450399616
DOIs
State: Published - 24 Feb 2023
Event: 8th International Conference on Intelligent Information Technology, ICIIT 2023 - Hybrid, Da Nang, Viet Nam
Duration: 24 Feb 2023 - 26 Feb 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 8th International Conference on Intelligent Information Technology, ICIIT 2023
Country/Territory: Viet Nam
City: Hybrid, Da Nang
Period: 24/02/23 - 26/02/23

Keywords

  • DCP
  • Faster R-CNN
  • Vehicle Detection
