TY - JOUR
T1 - Adversarial attacks and defenses on AI in medical imaging informatics
T2 - A survey
AU - Kaviani, Sara
AU - Han, Ki Jin
AU - Sohn, Insoo
N1 - Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/7/15
Y1 - 2022/7/15
N2 - In recent years, medical imaging has significantly improved and facilitated diagnosis across versatile tasks, including classification of lung diseases, detection of nodules, brain tumor segmentation, and body organ recognition. At the same time, the superior performance of machine learning (ML) techniques, specifically deep neural networks (DNNs), in various domains has led to the application of deep learning approaches in medical image classification and segmentation. Because of the security and safety-critical issues involved, healthcare systems are considered particularly challenging, and their accuracy is of great importance. Previous studies have raised lingering doubts about medical DNNs and their vulnerability to adversarial attacks. Although various defense methods have been proposed, there are still concerns about the application of deep learning approaches in medicine. This is due to certain weaknesses of medical imaging, such as the lack of a sufficient amount of high-quality images and labeled data, compared to the many high-quality natural image datasets available. This paper reviews recently proposed adversarial attack methods against medical imaging DNNs and defense techniques against these attacks. It also discusses different aspects of these methods and provides future directions for improving the robustness of neural networks.
AB - In recent years, medical imaging has significantly improved and facilitated diagnosis across versatile tasks, including classification of lung diseases, detection of nodules, brain tumor segmentation, and body organ recognition. At the same time, the superior performance of machine learning (ML) techniques, specifically deep neural networks (DNNs), in various domains has led to the application of deep learning approaches in medical image classification and segmentation. Because of the security and safety-critical issues involved, healthcare systems are considered particularly challenging, and their accuracy is of great importance. Previous studies have raised lingering doubts about medical DNNs and their vulnerability to adversarial attacks. Although various defense methods have been proposed, there are still concerns about the application of deep learning approaches in medicine. This is due to certain weaknesses of medical imaging, such as the lack of a sufficient amount of high-quality images and labeled data, compared to the many high-quality natural image datasets available. This paper reviews recently proposed adversarial attack methods against medical imaging DNNs and defense techniques against these attacks. It also discusses different aspects of these methods and provides future directions for improving the robustness of neural networks.
KW - Artificial neural networks
KW - Complex systems
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=85126652952&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2022.116815
DO - 10.1016/j.eswa.2022.116815
M3 - Review article
AN - SCOPUS:85126652952
SN - 0957-4174
VL - 198
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 116815
ER -