TY - JOUR
T1 - Modified Perceptual Cycle Generative Adversarial Network-Based Image Enhancement for Improving Accuracy of Low Light Image Segmentation
AU - Cho, Se Woon
AU - Baek, Na Rae
AU - Koo, Ja Hyung
AU - Park, Kang Ryoung
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021
Y1 - 2021
N2 - In recent years, the importance of the semantic segmentation field has been increasingly emphasized because autonomous vehicles and artificial intelligence (AI)-based robot technology are being researched extensively, and methods for accurately recognizing objects are required. Previous state-of-the-art segmentation methods have proven effective for databases obtained during the daytime. However, in extremely low light or nighttime environments, the shape and color information of objects is greatly diminished or lost owing to the insufficient amount of external light, which makes it difficult to train a segmentation network and significantly degrades its performance. In our previous work, segmentation performance in low light environments was improved using an enhancement-based segmentation method. However, low light images could not be restored precisely, and the improvement in segmentation performance was limited, because only per-pixel loss functions were used when training the enhancement network. To overcome these drawbacks, we propose a low light image segmentation method based on a modified perceptual cycle generative adversarial network (CycleGAN). Perceptual image enhancement performed by our network significantly improves segmentation performance. Unlike the existing perceptual loss, our loss uses the Euclidean distance between feature maps extracted from a pretrained segmentation network. In our experiments, we used low light databases generated from two well-known open road scene databases, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) database, and confirmed that our proposed method achieves better segmentation performance in extremely low light environments than existing state-of-the-art methods.
AB - In recent years, the importance of the semantic segmentation field has been increasingly emphasized because autonomous vehicles and artificial intelligence (AI)-based robot technology are being researched extensively, and methods for accurately recognizing objects are required. Previous state-of-the-art segmentation methods have proven effective for databases obtained during the daytime. However, in extremely low light or nighttime environments, the shape and color information of objects is greatly diminished or lost owing to the insufficient amount of external light, which makes it difficult to train a segmentation network and significantly degrades its performance. In our previous work, segmentation performance in low light environments was improved using an enhancement-based segmentation method. However, low light images could not be restored precisely, and the improvement in segmentation performance was limited, because only per-pixel loss functions were used when training the enhancement network. To overcome these drawbacks, we propose a low light image segmentation method based on a modified perceptual cycle generative adversarial network (CycleGAN). Perceptual image enhancement performed by our network significantly improves segmentation performance. Unlike the existing perceptual loss, our loss uses the Euclidean distance between feature maps extracted from a pretrained segmentation network. In our experiments, we used low light databases generated from two well-known open road scene databases, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) database, and confirmed that our proposed method achieves better segmentation performance in extremely low light environments than existing state-of-the-art methods.
KW - low light
KW - modified perceptual CycleGAN
KW - perceptual loss
KW - road scene open database
KW - semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85099496137&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2020.3048366
DO - 10.1109/ACCESS.2020.3048366
M3 - Article
AN - SCOPUS:85099496137
SN - 2169-3536
VL - 9
SP - 6296
EP - 6324
JO - IEEE Access
JF - IEEE Access
M1 - 9311609
ER -