An IoMT-Based Federated and Deep Transfer Learning Approach to the Detection of Diverse Chest Diseases Using Chest X-Rays

Barkha Kakkar, Prashant Johri, Yogesh Kumar, Hyunwoo Park, Youngdoo Son, Jana Shafi

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Because chest illnesses are so common, it is critical to identify and diagnose them effectively. This study therefore proposes a model designed to accurately predict chest disorders by analyzing chest X-ray images obtained from the National Institute of Health (NIH) X-ray dataset, which contains 112,120 images. The images cover 30,805 individuals and 14 types of chest disorders, including atelectasis, consolidation, infiltration, and pneumothorax, as well as a “No findings” class for cases in which no ailment was diagnosed. Six transfer-learning models, namely VGG-16, MobileNet V2, ResNet-50, DenseNet-161, Inception V3, and VGG-19, were evaluated in deep learning and federated learning settings to compare their accuracy in detecting chest disorders. The VGG-16 model performed best, with a precision of 0.81 and a recall of 0.90, yielding an F1 score of 0.85, higher than the F1 scores of the other transfer-learning approaches. With federated transfer learning, VGG-19 achieved the highest accuracy, at 97.71%. According to the classification report, VGG-16 is the best transfer-learning model for correctly detecting chest illness.
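The abstract does not include implementation details, but the following minimal Python sketch illustrates the two techniques it names: adapting an ImageNet-pretrained VGG-16 to the 15 NIH chest X-ray classes (14 disorders plus “No findings”) and a plain FedAvg-style weight-averaging step for the federated setting. The torchvision backbone, the frozen feature extractor, and the equal-weight averaging are illustrative assumptions, not the authors' actual configuration.

```python
# A minimal sketch (not the paper's code) of transfer learning with VGG-16
# plus simple federated weight averaging, as described at a high level in
# the abstract. Hyperparameters and training loops are omitted.

import copy
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 15  # 14 chest disorders + "No findings"


def build_vgg16_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    """Load an ImageNet-pretrained VGG-16 and replace its final layer."""
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    # Assumption: freeze the convolutional backbone so only the classifier
    # head is fine-tuned on the chest X-ray data.
    for param in model.features.parameters():
        param.requires_grad = False
    in_features = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model


def fedavg(global_model: nn.Module, client_models: list[nn.Module]) -> nn.Module:
    """Average client weights into the global model (equal-weight FedAvg)."""
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models], dim=0
        ).mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model


if __name__ == "__main__":
    # One hypothetical federated round with two clients sharing the architecture.
    global_model = build_vgg16_classifier()
    clients = [copy.deepcopy(global_model) for _ in range(2)]
    # ... each client would train locally on its own X-ray shard here ...
    global_model = fedavg(global_model, clients)
    print(global_model.classifier[-1])  # Linear(in_features=4096, out_features=15)
```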

Original language: English
Article number: 24
Journal: Human-centric Computing and Information Sciences
Volume: 12
State: Published - 2022

Keywords

  • Chest diseases
  • Deep learning
  • Disease prediction
  • Federated learning
  • X-ray dataset
