CNCAN: Contrast and normal channel attention network for super-resolution image reconstruction of crops and weeds

Sung Jae Lee, Chaeyeong Yun, Su Jin Im, Kang Ryoung Park

Research output: Contribution to journal › Article › peer-review


Abstract

Numerous studies have applied camera vision technologies to robot-based agriculture and smart farms. In particular, high-resolution (HR) images are essential for obtaining high accuracy, which in turn requires a high-performance camera; however, the high cost of such cameras makes them difficult to deploy widely on agricultural robots. To overcome this limitation, we propose the contrast and normal channel attention network (CNCAN) for super-resolution reconstruction (SR), which is the first study to achieve accurate semantic segmentation of crops and weeds even with low-resolution (LR) images captured by a low-cost, LR camera. CNCAN employs an attention block and an activation function that consider the high-frequency and contrast information of images, and residual connections are applied to improve learning stability. In experiments on three open datasets, namely the Bonirob, rice seedling and weed, and crop/weed field image (CWFID) datasets, the mean intersection over union (MIOU) of semantic segmentation for crops and weeds using SR images produced by CNCAN was 0.7685, 0.6346, and 0.6931, respectively, confirming higher accuracy than other state-of-the-art SR methods.
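As a rough illustration of the idea behind the abstract's "contrast and normal channel attention", the following is a minimal PyTorch sketch, assuming the attention weights are derived from two per-channel statistics: the mean ("normal" information) and the standard deviation (a common proxy for contrast). The module name, the bottleneck MLP, and the reduction ratio are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class ContrastNormalChannelAttention(nn.Module):
    """Hypothetical channel attention block combining mean (normal) and
    standard-deviation (contrast) statistics, as the name CNCAN suggests."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Bottleneck MLP that maps pooled per-channel statistics to attention weights.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Normal" statistic: per-channel mean (global average pooling).
        mean = x.mean(dim=(2, 3), keepdim=True)
        # "Contrast" statistic: per-channel standard deviation.
        contrast = x.std(dim=(2, 3), keepdim=True)
        # Combine both statistics and produce channel weights in (0, 1).
        weights = self.sigmoid(self.mlp(mean + contrast))
        # Rescale the input features channel-wise.
        return x * weights


# Example usage on a dummy feature map.
attn = ContrastNormalChannelAttention(channels=64)
features = torch.randn(1, 64, 32, 32)
out = attn(features)  # same shape as the input: (1, 64, 32, 32)
```

In such a design, the residual connections mentioned in the abstract would typically wrap the larger feature-extraction blocks around this attention layer to stabilize training; the sketch above only covers the attention step itself.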

Original language: English
Article number: 109487
Journal: Engineering Applications of Artificial Intelligence
Volume: 138
DOIs
State: Published - Dec 2024

Keywords

  • Contrast and normal channel attention
  • Crops and weeds images
  • Low-resolution images
  • Semantic segmentation
  • Super-resolution reconstruction

