Detection and classification of COVID-19 by using faster R-CNN and mask R-CNN on CT images


ŞAHİN M., ULUTAŞ H., YUCE E., ERKOÇ M. F.

Neural Computing and Applications, vol. 35, no. 18, pp. 13597-13611, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 35 Issue: 18
  • Publication Date: 2023
  • DOI: 10.1007/s00521-023-08450-y
  • Journal Name: Neural Computing and Applications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Page Numbers: pp. 13597-13611
  • Keywords: Classification, COVID-19, CT images, Faster R-CNN, Mask R-CNN
  • Affiliated with Yozgat Bozok University: Yes

Abstract

The coronavirus (COVID-19) pandemic has had a devastating impact on people’s daily lives and on healthcare systems. The rapid spread of the virus can be curbed by detecting infected patients early through efficient screening, and artificial intelligence techniques enable accurate disease detection in computed tomography (CT) images. This article aims to develop a process that can accurately diagnose COVID-19 using deep learning techniques on CT images. The presented method begins with the creation of an original dataset of 4000 CT images collected from Yozgat Bozok University. The Faster R-CNN and Mask R-CNN methods are then trained and tested on this dataset to categorize patients with COVID-19 and pneumonia infections. In this study, the results are compared using a VGG-16 backbone for the Faster R-CNN model and ResNet-50 and ResNet-101 backbones for Mask R-CNN. The Faster R-CNN model achieves an accuracy of 93.86%, with a region-of-interest (ROI) classification loss of 0.061 per ROI. At the end of the final training, the Mask R-CNN model achieves mAP (mean average precision) values of 97.72% and 95.65% for ResNet-50 and ResNet-101, respectively. Results for five folds are obtained by applying cross-validation to the methods used. With training, our models perform better than the standard baselines and can help with automated COVID-19 severity quantification in CT images.
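To make the described setup concrete, below is a minimal sketch, not the authors' implementation, of how a Mask R-CNN detector with a ResNet-50 FPN backbone could be configured in PyTorch/torchvision and paired with a 5-fold split, mirroring the dataset size and fold count reported in the abstract. The three-class label scheme (background, COVID-19, pneumonia), the use of torchvision and scikit-learn, and the function names `build_mask_rcnn` are assumptions made for illustration only.

```python
# Illustrative sketch (not the authors' code): Mask R-CNN with a ResNet-50 FPN
# backbone in torchvision, with its heads replaced for an assumed three-class
# scheme (background, COVID-19, pneumonia), plus a 5-fold index split.
import torch
from sklearn.model_selection import KFold
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background + COVID-19 + pneumonia (assumed label scheme)


def build_mask_rcnn(num_classes: int = NUM_CLASSES) -> torch.nn.Module:
    """Build a Mask R-CNN (ResNet-50 FPN) and resize its box and mask heads."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained weights

    # Replace the box classification head with one predicting `num_classes`.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head accordingly.
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
    return model


if __name__ == "__main__":
    # Hypothetical 5-fold cross-validation over 4000 CT image indices,
    # matching the dataset size stated in the abstract.
    image_indices = list(range(4000))
    kfold = KFold(n_splits=5, shuffle=True, random_state=42)

    for fold, (train_idx, val_idx) in enumerate(kfold.split(image_indices), start=1):
        model = build_mask_rcnn()
        print(f"Fold {fold}: {len(train_idx)} train / {len(val_idx)} validation images")
        # The training loop, COCO-style mAP evaluation, and CT data loading are
        # omitted here; they depend on the annotation format actually used.
```

The ResNet-101 variant mentioned in the abstract could be assembled in the same framework by constructing a custom FPN backbone (e.g., via `torchvision.models.detection.backbone_utils.resnet_fpn_backbone`) and passing it to the `MaskRCNN` class; the VGG-16 Faster R-CNN would likewise require a custom backbone, since torchvision ships no ready-made VGG-16 detection model.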