TY - GEN
T1 - An Attention-Based Deep Learning Model with Interpretable Patch-Weight Sharing for Diagnosing Cervical Dysplasia
AU - Chae, Jinyeong
AU - Zhang, Ying
AU - Zimmermann, Roger
AU - Kim, Dongho
AU - Kim, Jihie
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Diagnosing cervical dysplasia by visual inspection is a difficult problem. Most recent approaches use deep learning techniques to extract features and detect a region of interest (RoI) in the image. Such approaches can lose visual detail that appears weak and local within the cervical image, and they require manual annotation to extract the RoI. Moreover, labeled data are scarce due to the characteristics of medical images. To mitigate these problems, we present an approach that extracts global and local features from the image without manual annotation, even when data are limited. The proposed approach is applied to classifying cervical cancer, and the results are demonstrated. First, we divide the cervix image into nine patches to extract visual features when high-resolution images are unavailable. Second, we build a deep learning model that shares weights between patches of the image, considering both the patch-patch and patch-image relationships. We also apply an attention mechanism to the model to learn the visual features of the image and produce interpretable results. Finally, we add a loss weighting inspired by domain knowledge to the training process, which guides learning while preventing overfitting. The evaluation results indicate improvements in sensitivity over state-of-the-art methods.
AB - Diagnosing cervical dysplasia by visual inspection is a difficult problem. Most recent approaches use deep learning techniques to extract features and detect a region of interest (RoI) in the image. Such approaches can lose visual detail that appears weak and local within the cervical image, and they require manual annotation to extract the RoI. Moreover, labeled data are scarce due to the characteristics of medical images. To mitigate these problems, we present an approach that extracts global and local features from the image without manual annotation, even when data are limited. The proposed approach is applied to classifying cervical cancer, and the results are demonstrated. First, we divide the cervix image into nine patches to extract visual features when high-resolution images are unavailable. Second, we build a deep learning model that shares weights between patches of the image, considering both the patch-patch and patch-image relationships. We also apply an attention mechanism to the model to learn the visual features of the image and produce interpretable results. Finally, we add a loss weighting inspired by domain knowledge to the training process, which guides learning while preventing overfitting. The evaluation results indicate improvements in sensitivity over state-of-the-art methods.
KW - Attention
KW - Cervical dysplasia
KW - Deep learning model
KW - Loss weighting
KW - Patch-weight sharing
UR - http://www.scopus.com/inward/record.url?scp=85113722915&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-82199-9_43
DO - 10.1007/978-3-030-82199-9_43
M3 - Conference contribution
AN - SCOPUS:85113722915
SN - 9783030821982
T3 - Lecture Notes in Networks and Systems
SP - 634
EP - 642
BT - Intelligent Systems and Applications - Proceedings of the 2021 Intelligent Systems Conference, IntelliSys
A2 - Arai, Kohei
PB - Springer Science and Business Media Deutschland GmbH
T2 - Intelligent Systems Conference, IntelliSys 2021
Y2 - 2 September 2021 through 3 September 2021
ER -