TY - JOUR
T1 - LCW-Net
T2 - Low-light-image-based crop and weed segmentation network using attention module in two decoders
AU - Kim, Yu Hwan
AU - Lee, Sung Jae
AU - Yun, Chaeyeong
AU - Im, Su Jin
AU - Park, Kang Ryoung
N1 - Publisher Copyright:
© 2023 The Author(s)
PY - 2023/11
Y1 - 2023/11
N2 - Crop segmentation using cameras is commonly used in large agricultural areas, but the time and duration of crop harvesting vary across large farms. Considering this situation, there is a need for low-light image-based segmentation of crop and weed images for late-time harvesting, but no prior research has considered this. As a first study on this topic, we propose a low-light image-based crop and weed segmentation network (LCW-Net) that uses an attention module in two decoders to perform segmentation in a single step, without restoration of low-light images. We also design a loss function to accurately segment the regions of objects, crops, and weeds in low-light images, to avoid overfitting during training, and to balance the learning tasks of object, crop, and weed segmentation. Because no public low-light databases exist, and it is difficult to obtain ground-truth segmentation information for a self-collected database in low-light environments, we converted two public databases, the crop and weed field image dataset (CWFID) and the BoniRob dataset, into low-light datasets for our experiments. The experimental results showed that the mean intersection over union (mIoU) values of segmentation for crops and weeds were 0.8718 and 0.8693, respectively, for the BoniRob dataset, and 0.8337 and 0.8221, respectively, for the CWFID dataset, indicating that LCW-Net outperforms state-of-the-art methods.
AB - Crop segmentation using cameras is commonly used in large agricultural areas, but the time and duration of crop harvesting vary across large farms. Considering this situation, there is a need for low-light image-based segmentation of crop and weed images for late-time harvesting, but no prior research has considered this. As a first study on this topic, we propose a low-light image-based crop and weed segmentation network (LCW-Net) that uses an attention module in two decoders to perform segmentation in a single step, without restoration of low-light images. We also design a loss function to accurately segment the regions of objects, crops, and weeds in low-light images, to avoid overfitting during training, and to balance the learning tasks of object, crop, and weed segmentation. Because no public low-light databases exist, and it is difficult to obtain ground-truth segmentation information for a self-collected database in low-light environments, we converted two public databases, the crop and weed field image dataset (CWFID) and the BoniRob dataset, into low-light datasets for our experiments. The experimental results showed that the mean intersection over union (mIoU) values of segmentation for crops and weeds were 0.8718 and 0.8693, respectively, for the BoniRob dataset, and 0.8337 and 0.8221, respectively, for the CWFID dataset, indicating that LCW-Net outperforms state-of-the-art methods.
KW - Attention module in two decoders
KW - LCW-Net
KW - Low-light image
KW - Semantic segmentation for crops and weeds
UR - http://www.scopus.com/inward/record.url?scp=85170434249&partnerID=8YFLogxK
U2 - 10.1016/j.engappai.2023.106890
DO - 10.1016/j.engappai.2023.106890
M3 - Article
AN - SCOPUS:85170434249
SN - 0952-1976
VL - 126
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 106890
ER -