TY - JOUR
T1 - Global–local feature learning for fine-grained food classification based on Swin Transformer
AU - Kim, Jun Hwa
AU - Kim, Namho
AU - Won, Chee Sun
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/7
Y1 - 2024/7
N2 - Separable object parts, such as the head and tail of a bird, are vital for fine-grained visual classification. For objects without separable parts, the classification task relies only on local and global textural image features. Although the Swin Transformer architecture was proposed to capture both local and global visual features efficiently, it still exhibits a bias towards global features. Therefore, our goal is to enhance the local feature learning capability of the Swin Transformer by adding four new modules: the Local Feature Extraction Network (L-FEN), Convolution Patch-Merging (CP), Multi-Path (MP), and Multi-View (MV). The L-FEN enhances the Swin Transformer with improved local feature capture. The CP is a localized and hierarchical adaptation of Swin's Patch Merging technique. The MP method integrates features across various Swin stages to accentuate local details. Meanwhile, the MV Swin Transformer block supersedes traditional Swin blocks with blocks incorporating varied receptive fields, ensuring a broader scope of local feature capture. Our enhanced architecture, named Global–Local Swin Transformer (GL-Swin), is applied to a fine-grained food classification task. On three major food datasets, ISIA Food-500, UEC Food-256, and Food-101, our GL-Swin achieved accuracies of 66.75%, 85.78%, and 92.93%, respectively, consistently outperforming other leading methods.
AB - Separable object parts, such as the head and tail of a bird, are vital for fine-grained visual classification. For objects without separable parts, the classification task relies only on local and global textural image features. Although the Swin Transformer architecture was proposed to capture both local and global visual features efficiently, it still exhibits a bias towards global features. Therefore, our goal is to enhance the local feature learning capability of the Swin Transformer by adding four new modules: the Local Feature Extraction Network (L-FEN), Convolution Patch-Merging (CP), Multi-Path (MP), and Multi-View (MV). The L-FEN enhances the Swin Transformer with improved local feature capture. The CP is a localized and hierarchical adaptation of Swin's Patch Merging technique. The MP method integrates features across various Swin stages to accentuate local details. Meanwhile, the MV Swin Transformer block supersedes traditional Swin blocks with blocks incorporating varied receptive fields, ensuring a broader scope of local feature capture. Our enhanced architecture, named Global–Local Swin Transformer (GL-Swin), is applied to a fine-grained food classification task. On three major food datasets, ISIA Food-500, UEC Food-256, and Food-101, our GL-Swin achieved accuracies of 66.75%, 85.78%, and 92.93%, respectively, consistently outperforming other leading methods.
KW - CNN
KW - Deep learning
KW - Fine-grained visual classification
KW - Food dataset
KW - Vision transformer
UR - http://www.scopus.com/inward/record.url?scp=85187783530&partnerID=8YFLogxK
U2 - 10.1016/j.engappai.2024.108248
DO - 10.1016/j.engappai.2024.108248
M3 - Article
AN - SCOPUS:85187783530
SN - 0952-1976
VL - 133
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 108248
ER -