Article title

TL-Med: A two-stage transfer learning recognition model for medical images of COVID-19

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Recognizing medical images with deep learning techniques can assist physicians in clinical diagnosis, but the effectiveness of recognition models relies on massive amounts of labeled data. With the novel coronavirus (COVID-19) spreading rampantly worldwide, rapid COVID-19 diagnosis has become an effective measure to combat the outbreak. However, labeled COVID-19 data are scarce. We therefore propose a two-stage transfer learning recognition model for medical images of COVID-19 (TL-Med), based on the concept of "generic domain → target-related domain → target domain". First, we use a Vision Transformer (ViT) model pretrained on massive heterogeneous data to obtain generic features, and then learn medical features from large-scale homogeneous medical data. The two-stage transfer applies the learned general features and medical domain knowledge to COVID-19 image recognition, addressing the problem that insufficient data prevent a model from learning the underlying information of the target dataset. On a COVID-19 dataset, the TL-Med model achieves a recognition accuracy of 93.24%, which shows that the proposed method detects COVID-19 images more effectively than competing approaches and may greatly alleviate the problem of data scarcity in this field.
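The two-stage scheme described in the abstract can be sketched in miniature: pretrain on an abundant target-related domain, then warm-start and fine-tune on the scarce target domain. The sketch below uses a plain logistic regression as a stand-in for the ViT backbone and synthetic data as stand-ins for the medical and COVID-19 image sets; all names, sizes, and learning rates here are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of two-stage transfer: pretrain on a large related
# domain, then warm-start and fine-tune on scarce target data.
import numpy as np

rng = np.random.default_rng(0)
DIM = 20

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; `w` allows warm-starting."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

def make_data(n, w):
    X = rng.normal(size=(n, DIM))
    return X, (X @ w > 0).astype(float)

# The target-related domain and the target domain share most structure.
w_related = rng.normal(size=DIM)
w_target = w_related + 0.1 * rng.normal(size=DIM)

X_rel, y_rel = make_data(5000, w_related)   # abundant related-domain data
X_tgt, y_tgt = make_data(40, w_target)      # scarce target-domain data
X_test, y_test = make_data(1000, w_target)

# Stage 1: learn domain features on the large target-related set.
w_stage1 = train(X_rel, y_rel)
# Stage 2: warm-start from stage 1, fine-tune on the scarce target set.
w_transfer = train(X_tgt, y_tgt, w=w_stage1.copy(), lr=0.01, epochs=50)
# Baseline: train from scratch on the scarce target set alone.
w_scratch = train(X_tgt, y_tgt)

print("transfer:", accuracy(w_transfer, X_test, y_test))
print("scratch: ", accuracy(w_scratch, X_test, y_test))
```

In the full method the warm start carries over pretrained ViT weights rather than a linear classifier, but the mechanism is the same: stage 1 fixes a good starting point from plentiful related data, so stage 2 only needs the small target set to adapt it.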
Authors
author
  • School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning, China
author
  • School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning, China
author
  • School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning 116600, China
author
  • School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning, China
author
  • School of Computer Science and Engineering, Dalian Minzu University, Dalian, Liaoning, China
Document type
YADDA identifier
bwmeta1.element.baztech-20cc0891-7308-4e93-b5b5-3f2ace06efba