TGSNet: Multi-Field Feature Fusion for Glass Region Segmentation Using Transformers

Xiaohang Hu, Rui Gao, Seungjun Yang, Kyungeun Cho

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Glass is a common object in living environments, but detecting it can be difficult because light of various colors is reflected and refracted differently across environments; even humans sometimes fail to detect glass. Many glass-detection methods exist, but most rely on additional sensors, which are costly and make data collection difficult. This study addresses the detection of glass regions in a single RGB image by concatenating contextual features from multiple receptive fields and proposing a new enhanced feature fusion algorithm. To do this, we first construct a contextual attention module that extracts backbone features through a self-attention approach. We then propose a ViT-based deep semantic segmentation architecture, called MFT, which associates multilevel receptive-field features and retains the feature information captured at each level. Experiments show that the proposed method outperforms several state-of-the-art glass detection and transparent object detection methods on existing glass detection datasets, demonstrating the effectiveness of our TGSNet.
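The abstract describes fusing features from multiple receptive fields through self-attention. As a rough illustration of that general idea (not the paper's actual MFT architecture), the sketch below concatenates hypothetical feature maps pooled at three receptive-field sizes into one token sequence and fuses them with scaled dot-product self-attention; all shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over a (tokens, dim) feature map."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)   # (tokens, tokens) pairwise affinities
    return softmax(scores) @ x      # attention-weighted fusion of all tokens

# Hypothetical multi-field features: the same image encoded at three
# receptive-field sizes, each flattened to (tokens, dim).
rng = np.random.default_rng(0)
fields = [rng.standard_normal((16, 32)) for _ in range(3)]

# Concatenating along the token axis lets attention mix information
# across receptive fields while keeping each field's own features.
tokens = np.concatenate(fields, axis=0)   # (48, 32)
fused = self_attention(tokens)            # (48, 32)
print(fused.shape)
```

The key point this toy version captures is that every fused token is a weighted combination of tokens from all receptive fields, so coarse context can inform fine-grained glass boundaries and vice versa.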

Original language: English
Article number: 843
Journal: Mathematics
Volume: 11
Issue number: 4
DOIs
State: Published - Feb 2023

Keywords

  • feature fusion algorithm
  • glass detection
  • image classification
  • transformer
