Article title
Identifiers
Title variants
Publication languages
Abstracts
The polymerase chain reaction (PCR) test is not only time-intensive but also a contact method that puts healthcare personnel at risk; contactless, fast detection tests are therefore more valuable. Cough sound is an important indicator of COVID-19, and in this paper a novel explainable scheme is developed for cough sound-based COVID-19 detection. In the presented work, the cough recording is first segmented into overlapping parts, and each segment is labeled with the deep Yet Another Mobile Network (YAMNet) model, since the input audio may contain sounds other than cough. After labeling, the segments labeled as cough are cropped and concatenated to reconstruct the pure cough sound. Then, four fractal dimension (FD) calculation methods are applied to the cough sound with an overlapping sliding window, and the resulting FD coefficients form a matrix. The constructed matrices are then used to form fractal dimension images. Finally, a pretrained vision transformer (ViT) model classifies the constructed images into COVID-19, healthy, and symptomatic classes. We demonstrate the performance of the ViT on cough sound-based COVID-19 detection and provide visual explanations of the inner workings of the ViT model. Three publicly available cough sound datasets, namely COUGHVID, VIRUFY, and COSWARA, are used in this study. We obtained 98.45%, 98.15%, and 97.59% accuracy on the COUGHVID, VIRUFY, and COSWARA datasets, respectively. Our model achieved the highest performance compared with state-of-the-art methods and is ready to be tested in real-world applications.
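The core transformation described above turns a 1-D cough signal into a 2-D fractal-dimension matrix before image rendering and ViT classification. The following Python sketch illustrates this step for two of the four FD measures cited in the bibliography (Katz and Higuchi); the window length, hop size, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): build a fractal-dimension (FD)
# matrix from a cough signal with an overlapping sliding window.
# Window/hop sizes and function names are illustrative assumptions.
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D segment."""
    x = np.asarray(x, dtype=float)
    dists = np.abs(np.diff(x))             # successive point-to-point distances
    L = dists.sum()                         # total curve length
    d = np.abs(x - x[0]).max()              # max distance from the first sample
    n = L / dists.mean()                    # number of steps
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D segment."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_lk, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)                    # sub-sampled series
            if len(idx) < 2:
                continue
            lmk = np.abs(np.diff(x[idx])).sum()
            lmk *= (N - 1) / ((len(idx) - 1) * k * k)   # Higuchi normalization
            lengths.append(lmk)
        log_lk.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_lk, 1)         # FD = slope of log-log fit
    return slope

def fd_matrix(signal, win=1024, hop=256, fd_funcs=(katz_fd, higuchi_fd)):
    """Slide an overlapping window over the signal and stack the FD
    coefficients of every window into an (n_methods x n_windows) matrix."""
    cols = [[f(signal[s:s + win]) for f in fd_funcs]
            for s in range(0, len(signal) - win + 1, hop)]
    return np.asarray(cols).T

if __name__ == "__main__":
    sig = np.random.randn(16000)            # random stand-in for ~1 s of cough at 16 kHz
    print(fd_matrix(sig).shape)             # (2, number of windows)
```

In the paper's scheme, four FD methods are used, and the resulting matrix is rendered as an image before being fed to the pretrained ViT; the exact rendering and model fine-tuning are not reproduced here.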
Publisher
Journal
Year
Volume
Pages
1066–1080
Physical description
Bibliography: 46 items, figures, tables, charts
Authors
author
- King Abdulaziz University, Department of Electrical and Computer Engineering, Jeddah, Saudi Arabia
author
- Firat University, Technology Faculty, Electrical and Electronics Engineering Department, Elazig, Turkey
author
- Firat University, Technology Faculty, Electrical and Electronics Engineering Department, Elazig, Turkey
author
- Firat University, Technology Faculty, Electrical and Electronics Engineering Department, Elazig, Turkey
author
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, Singapore
- Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
Bibliography
- [1] World Health Organization. Coronavirus disease (COVID-19) pandemic, 2020. https://www.who.int/emergencies/diseases/novel-coronavirus-2019.
- [2] Sengur D. Investigation of the relationships of the students’ academic level and gender with Covid-19 based anxiety and protective behaviors: A data mining approach. Turkish J. Sci. Technol. 2020;15(2):93–9.
- [3] Lan L, Xu D, Ye G, Xia C, Wang S, Li Y, et al. Positive RT-PCR test results in patients recovered from COVID-19. JAMA 2020;323(15):1502–3.
- [4] Alqudaihi KS, Aslam N, Khan IU, Almuhaideb AM, Alsunaidi SJ, Ibrahim NMAR, et al. Cough sound detection and diagnosis using artificial intelligence techniques: challenges and opportunities. IEEE Access 2021;9:102327–44.
- [5] Pahar M, Klopper M, Warren R, Niesler T. COVID-19 Cough Classification using machine learning and global smartphone recordings. Comput Biol Med 2021;135(104572):1–10.
- [6] Laguarta J, Hueto F, Subirana B. COVID-19 artificial intelligence diagnosis using only cough recordings. IEEE Open J Eng Med Biol 2020;1:275–81.
- [7] Alsabek MB, Shahin I, Hassan A. Studying the similarity of COVID-19 sounds based on correlation analysis of MFCC. In: International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) 2020 November 3 (pp. 1-5). IEEE.
- [8] Sharma N, Krishnan P, Kumar R, Ramoji S, et al. Coswara-A database of breathing, cough, and voice sounds for COVID-19 diagnosis. In: Proceedings of the Annual Conference of the International Speech Communication Association, Interspeech 2020 October 25 (pp. 4811-4815).ISCA.
- [9] Mouawad P, Dubnov T, Dubnov S. Robust detection of COVID19 in cough sounds. SN Computer Science 2021;2(34):1–13.
- [10] Erdoğan YE, Narin A. COVID-19 detection with traditional and deep features on cough acoustic signals. Comput Biol Med 2021;136(104765):1–10.
- [11] Despotovic V, Ismael M, Cornil M, Mc Call R, et al. Detection of COVID-19 from voice, cough and breathing patterns: dataset and preliminary results. Comput Biol Med 2021;138(104944):1–9.
- [12] Tena A, Clarià F, Solsona F. Automated detection of COVID-19 cough. Biomed. Signal Process. Control 2022;71(103175):1–11.
- [13] Kobat MA, Kivrak T, Barua PD, Tuncer T, et al. Automated COVID-19 and heart failure detection using DNA pattern technique with cough sounds. Diagnostics 2021;11(1962):1–15.
- [14] Chang Y, Jing X, Ren Z, Schuller BW. CovNet: A transfer learning framework for automatic COVID-19 detection from crowd-sourced cough sounds. Frontiers in Digital Health 2021;3(799067):1–11.
- [15] Chowdhury NK, Kabir MA, Rahman MM, Islam SMS. Machine learning for detecting COVID-19 from cough sounds: An ensemble-based MCDM method. Comput Biol Med 2022;145(105405):1–14.
- [16] Islam R, Abdel-Raheem E, Tarique M. A study of using cough sounds and deep neural networks for the early detection of COVID-19. Biomed Eng Adv 2022;3(100025):1–13.
- [17] Hamdi S, Oussalah M, Moussaoui A, Saidi M. Attention-based hybrid CNN-LSTM and spectral data augmentation for COVID-19 diagnosis from cough sound. J Intellig Informat Syst 2022:1–23.
- [18] Manshouri NM. Identifying COVID-19 by using spectral analysis of cough recordings: a distinctive classification study. Cogn Neurodyn 2022;16(1):239–53.
- [19] Lee GT, Nam H, Kim SH, Choi SM, Kim Y, Park YH. Deep learning based cough detection camera using enhanced features. Expert Syst Appl 2022;206(117811):1–20.
- [20] Dang T, Han J, Xia T, Spathis D, Bondareva E, et al. Exploring longitudinal cough, breath, and voice data for COVID-19 progression prediction via sequential deep learning: model development and validation. J Med Internet Res 2022;24(6):e37004, 1–14.
- [21] Rahman T, Ibtehaz N, Khandakar A, Hossain MSA, Mekki YMS, Ezeddin M, et al. QUCoughScope: An intelligent application to Detect COVID-19 patients using cough and breath sounds. Diagnostics 2022;12(4), 920:1–12.
- [22] Ren Z, Chang Y, Bartl-Pokorny KD, Pokorny FB, Schuller BW. The acoustic dissection of cough: diving into machine listening-based COVID-19 analysis and detection. J. Voice 2022;36(6):1–14.
- [23] Sharan P. Automated discrimination of cough in audio recordings: A scoping review. Frontiers in Signal Processing 2022;2(759684):1–18.
- [24] Gabaldón-Figueira JC, Keen E, Giménez G, Orrillo V, Blavia I, et al. Acoustic surveillance of cough for detecting respiratory disease using artificial intelligence. ERJ Open Research 2022;8(2):1–9.
- [25] Andreu-Perez J, Perez-Espinosa H, Timonet E, Kiani M, Girón-Pérez MI, et al. A generic deep learning based cough analysis system from clinically validated samples for point-of-need COVID-19 test and severity levels. IEEE Trans Serv Comput 2021;15(3):1220–32.
- [26] Zealouk O, Satori H, Hamidi M, Laaidi N, Salek A, Satori K. Analysis of COVID-19 resulting cough using formants and automatic speech recognition system. J. Voice 2021;36(5):1–8.
- [27] YAMNet, YAMNet neural network, 2021. https://github.com/tensorflow/models/tree/master/research/audioset/yamnet.
- [28] Atila O, Şengür A. Attention guided 3D CNN-LSTM model for accurate speech based emotion recognition. Appl Acoust 2021;182(108260):1–11.
- [29] Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017:1–9.
- [30] Gemmeke JF, Ellis DPW, Freedman D, Jansen A, Lawrence W, et al. Audio Set: An ontology and human-labeled dataset for audio events. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017 March 5 (pp. 776-780). IEEE.
- [31] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. In: 31st Conference on Neural Information Processing Systems (NIPS) 2017 December 4 (pp. 1-15). Curran Associates Inc.
- [32] Katz AJ, Thompson AH. Fractal sandstone pores: implications for conductivity and pore formation. Phys Rev Lett 1985;54(12):1325–8.
- [33] Higuchi T. Approach to an irregular time series on the basis of the fractal theory. Phys D: Nonlinear Phenom 1988;31(2):277–83.
- [34] Petrosian A. Kolmogorov complexity of finite sequences and recognition of different preictal EEG patterns. In: Proceedings Eighth IEEE Symposium on Computer-Based Medical Systems 1995 June 9 (pp. 212-217). IEEE.
- [35] Castiglioni P. Letter to the Editor: What is wrong in Katz's method? Comments on: 'A note on fractal dimensions of biomedical waveforms'. Comput Biol Med 2010;40(11–12):950–2.
- [36] Orlandic L, Teijeiro T, Atienza D. The COUGHVID crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Sci Data 2021;8:1–10.
- [37] Chaudhari G, Jiang X, Fakhry A, Han A, Xiao J, et al. Virufy: Global applicability of crowdsourced and clinical datasets for AI detection of COVID-19 from cough. arXiv preprint arXiv:2011.13320, 2020:1–8.
- [38] Sharma N, Krishnan P, Kumar R, Ramoji S, Chetupalli S R, et al. Coswara-A database of breathing, cough, and voice sounds for COVID-19 diagnosis. In: Proceedings Interspeech 2020 October 25 (pp. 4811-4815). Interspeech.
- [39] Hugging Face, Fine-Tune ViT for image classification with transformers, 2021. https://huggingface.co/blog/fine-tune-vit.
- [40] Deniz E, Sengür A, Kadiroğlu Z, Guo Y, et al. Transfer learning based histopathologic image classification for breast cancer detection. Health Informat Sci Syst 2018;6(18):1–7.
- [41] Kadiroğlu Z, Şengür A, Deniz E. Classification of histopathological breast cancer images with low level texture features. In: International Engineering and Natural Sciences Conference (IENSC) 2018, November 14 (pp. 1765-1772). INESEG.
- [42] Selvaraju R R, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-Cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision (ICCV) 2017 October 22 (pp. 618-626). IEEE.
- [43] Jahmunah V, Ng EYK, Tan RS, Oh SL, Acharya UR. Explainable detection of myocardial infarction using deep learning models with Grad-CAM technique on ECG signals. Comput Biol Med 2022;146(105550):1–19.
- [44] Son MJ, Lee SP. COVID-19 diagnosis from crowdsourced cough sound data. Appl Sci 2022;12(4):1–12.
- [45] Xue H, Salim F D. Exploring self-supervised representation ensembles for covid-19 cough classification. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 August 14 (pp. 1944-1952). KDD.
- [46] Soltanian M, Borna K. Covid-19 recognition from cough sounds using lightweight separable-quadratic convolutional network. Biomed Signal Process Control 2022;72(103333):1–10.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-97d5ff05-5217-4c89-8b4a-4f6d4181e2d5