Abstract
Breathing is a fundamental physiological process that reflects the health and condition of the body. The pattern, depth, and frequency of respiration are critical indicators of an individual's overall health, with applications ranging from diagnosing illnesses to monitoring stress levels, physical exertion, and sleep quality. This paper investigates and implements several machine-learning techniques for the real-time detection of breath sounds in audio captured with a computer microphone. The primary objective is to develop and compare methodologies that identify the distinct breathing phases, namely inhalation, exhalation, and the silent intervals between breaths, in order to determine the most accurate, efficient, and practical approach. The study explores three approaches: (1) VGGish feature extraction with a Random Forest classifier, (2) spectrogram classification using convolutional neural networks, and (3) Mel-frequency cepstral coefficient (MFCC) feature extraction with a neural-network classifier. The experimental results show that methods 1 and 3 achieved an accuracy of 87% on the test data, while method 2 achieved 83%. The dataset comprised approximately 1,000 recordings of inhalations, exhalations, and silences between breaths, collected with four different microphones and recorded by three different individuals. All implementations and training data are available in a public GitHub repository: github.com/tomaszsankowski/Breathing-Classification.
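To make the third approach concrete, the sketch below shows one plausible way to summarise each recording as an MFCC feature vector and train a small neural-network classifier for the three phases. It is an illustrative example only, not the authors' implementation: librosa and TensorFlow/Keras are assumed, and the label names, sample rate, number of MFCCs, and layer sizes are hypothetical choices rather than values taken from the paper or repository.

```python
# Illustrative sketch of approach (3): MFCC features + a small neural network
# for three classes (inhale / exhale / silence). All constants are assumptions.
import numpy as np
import librosa
import tensorflow as tf

CLASSES = ["inhale", "exhale", "silence"]  # assumed label names

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load one recording and summarise it as a fixed-length MFCC vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Mean and standard deviation over time give a 2 * n_mfcc feature vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def build_classifier(input_dim, n_classes=len(CLASSES)):
    """A small fully connected network; one of many plausible architectures."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (hypothetical file and label lists):
# X = np.stack([mfcc_features(p) for p in wav_paths])
# y = np.array([CLASSES.index(lbl) for lbl in labels])
# model = build_classifier(X.shape[1])
# model.fit(X, y, epochs=30, validation_split=0.2)
```

Method 1 would follow the same outline, with the MFCC summary replaced by 128-dimensional VGGish embeddings fed to a Random Forest classifier; method 2 would instead train a CNN directly on spectrogram images.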
Pages
1–12
Physical description
Bibliography: 37 items, tables, charts
Authors
author
- Gdańsk University of Technology, ul. Narutowicza 11/12, Gdańsk, Poland
author
- Gdańsk University of Technology, ul. Narutowicza 11/12, Gdańsk, Poland
author
- Gdańsk University of Technology, ul. Narutowicza 11/12, Gdańsk, Poland
author
- Gdańsk University of Technology, ul. Narutowicza 11/12, Gdańsk, Poland
Bibliography
- [1] A. Nicolò, C. Massaroni, E. Schena, and M. Sacchetti, “The importance of respiratory rate monitoring: From healthcare to sport and exercise,” Sensors, vol. 20, no. 21, p. 6396, 2020.
- [2] S. Kesten, M. R. Maleki-Yazdi, B. R. Sanders, J. A. Wells, S. L. McKillop, K. R. Chapman, and A. S. Rebuck, “Respiratory rate during acute asthma,” Chest, vol. 97, no. 1, pp. 58–62, 1990.
- [3] G. Cinel, E. A. Tarim, and H. C. Tekin, “Wearable respiratory rate sensor technology for diagnosis of sleep apnea,” in 2020 Medical Technologies Congress (TIPTEKNO), pp. 1–4, 2020.
- [4] M. Z. Urfy and J. I. Suarez, “Chapter 17 - breathing and the nervous system,” in Neurologic Aspects of Systemic Disease Part I (J. Biller and J. M. Ferro, eds.), vol. 119 of Handbook of Clinical Neurology, pp. 241–250, Elsevier, 2014.
- [5] A. Angelucci, F. Birettoni, A. Bufalari, and A. Aliverti, “Validation of a wearable system for respiratory rate monitoring in dogs,” IEEE Access, vol. 12, pp. 80308–80316, 2024.
- [6] V. V. Tipparaju, D. Wang, J. Yu, F. Chen, F. Tsow, E. Forzani, N. Tao, and X. Xian, “Respiration pattern recognition by wearable mask device,” Biosensors and Bioelectronics, vol. 169, p. 112590, 2020.
- [7] H. Cheraghi Bidsorkhi, N. Faramarzi, B. Ali, L. R. Ballam, A. G. D’Aloia, A. Tamburrano, and M. S. Sarto, “Wearable graphene-based smart face mask for real-time human respiration monitoring,” Materials & Design, vol. 230, p. 111970, 2023.
- [8] C. Romano, A. Nicolò, L. Innocenti, M. Sacchetti, E. Schena, and C. Massaroni, “Design and testing of a smart facemask for respiratory monitoring during cycling exercise,” Biosensors, vol. 13, no. 3, 2023.
- [9] P. Hung, S. Bonnet, R. Guillemaud, E. Castelli, and P. T. N. Yen, “Estimation of respiratory waveform using an accelerometer,” in 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1493–1496, 2008.
- [10] A. Bates, M. Ling, C. Geng, A. Turk, and D. Arvind, “Accelerometer-based respiratory measurement during speech,” in 2011 International Conference on Body Sensor Networks, pp. 95–100, 2011.
- [11] A. Siqueira, A. F. Spirandeli, R. Moraes, and V. Zarzoso, “Respiratory waveform estimation from multiple accelerometers: An optimal sensor number and placement analysis,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 4, pp. 1507–1515, 2019.
- [12] A. Kumar, V. Mitra, C. Oliver, A. Ullal, M. Biddulph, and I. Mance, “Estimating respiratory rate from breath audio obtained through wearable microphones,” in 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 7310–7315, 2021.
- [13] A. T. Purnomo, D.-B. Lin, T. Adiprabowo, and W. F. Hendria, “Non-contact monitoring and classification of breathing pattern for the supervision of people infected by covid-19,” Sensors, vol. 21, no. 9, 2021.
- [14] M. Usman, M. Zubair, Z. Ahmad, M. Zaidi, T. Ijyas, M. Parayangat, M. Wajid, M. Shiblee, and J. A. Ali, “Heart rate detection and classification from speech spectral features using machine learning,” Archives of Acoustics, vol. 46, no. 1, pp. 41–53, 2021.
- [15] S. Gaikwad, M. Basil, and B. Gawali, “Computerized medical disease identification using respiratory sound based on mfcc and neural network,” in Recent Trends in Image Processing and Pattern Recognition (K. C. Santosh and B. Gawali, eds.), (Singapore), pp. 70–82, Springer Singapore, 2021.
- [16] Y. Nam, B. A. Reyes, and K. H. Chon, “Estimation of respiratory rates using the built-in microphone of a smartphone or headset,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 6, pp. 1493–1501, 2016.
- [17] K. Chon, S. Dash, and K. Ju, “Estimation of respiratory rate from photoplethysmogram data using time–frequency spectral estimation,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 8, pp. 2054–2063, 2009.
- [18] L. Biedebach, M. Óskarsdóttir, E. S. Arnardóttir, S. Sigurdardóttir, M. V. Clausen, S. Sigurdardóttir, M. Serwatko, and A. S. Islind, “Anomaly detection in sleep: detecting mouth breathing in children,” Data Mining and Knowledge Discovery, vol. 38, no. 3, pp. 976–1005, 2024.
- [19] M. Sharma and H. Singh, “Contactless methods for respiration monitoring and design of siw-lwa for real-time respiratory rate monitoring,” IETE Journal of Research, vol. 69, no. 11, pp. 8362–8372, 2023.
- [20] E. P. Doheny, B. P. O’Callaghan, V. S. Fahed, J. Liegey, C. Goulding, S. Ryan, and M. M. Lowery, “Estimation of respiratory rate and exhale duration using audio signals recorded by smartphone microphones,” Biomedical Signal Processing and Control, vol. 80, p. 104318, 2023.
- [21] S. Hughes, Respiratory rate monitoring devices for the acute care setting: device development and evaluation. PhD thesis, Anglia Ruskin Research Online (ARRO), 2024.
- [22] M. Ali, A. Elsayed, A. Mendez, Y. Savaria, and M. Sawan, “Contact and remote breathing rate monitoring techniques: A review,” IEEE Sensors Journal, vol. 21, no. 13, pp. 14569–14586, 2021.
- [23] T. Hussain, S. Ullah, R. Fernández-García, and I. Gil, “Wearable sensors for respiration monitoring: A review,” Sensors, vol. 23, no. 17, p. 7518, 2023.
- [24] M. I. Ansari and T. Hasan, “Spectnet: End-to-end audio signal classification using learnable spectrogram features.”
- [25] P. Rawat, M. Bajaj, S. Vats, and V. Sharma, “A comprehensive study based on mfcc and spectrogram for audio classification,” Journal of Information and Optimization Sciences, vol. 44, no. 6, pp. 1057–1074, 2023.
- [26] K. Palanisamy, D. Singhania, and A. Yao, “Rethinking cnn models for audio classification,” arXiv preprint arXiv:2007.11154, 2020.
- [27] M. Lv, Z. Sun, M. Zhang, R. Geng, M. Gao, and G. Wang, “Sound recognition method for white feather broilers based on spectrogram features and the fusion classification model,” Measurement, vol. 222, p. 113696, 2023.
- [28] A. S. Podda, R. Balia, L. Pompianu, S. Carta, G. Fenu, and R. Saia, “Cargram: Cnn-based accident recognition from road sounds through intensity-projected spectrogram analysis,” Digital Signal Processing, vol. 147, p. 104431, 2024.
- [29] E. C. Knight, S. Poo Hernandez, E. M. Bayne, V. Bulitko, and B. V. Tucker, “Pre-processing spectrogram parameters improve the accuracy of bioacoustic classification using convolutional neural networks,” Bioacoustics, vol. 29, no. 3, pp. 337–355, 2020.
- [30] A. Althnian, D. AlSaeed, H. Al-Baity, A. Samha, A. B. Dris, N. Alzakari, A. Abou Elwafa, and H. Kurdi, “Impact of dataset size on classification performance: An empirical evaluation in the medical domain,” Applied Sciences, vol. 11, no. 2, 2021.
- [31] M. K. Gourisaria, R. Agrawal, M. Sahni, and P. K. Singh, “Comparative analysis of audio classification with mfcc and stft features using machine learning techniques,” Discover Internet of Things, vol. 4, no. 1, p. 1, 2024.
- [32] M. S. Sidhu, N. A. A. Latib, and K. K. Sidhu, “Mfcc in audio signal processing for voice disorder: a review,” Multimedia Tools and Applications, pp. 1–21, 2024.
- [33] A. Mahmood and U. Köse, “Speech recognition based on convolutional neural networks and mfcc algorithm,” Advances in Artificial Intelligence Research, vol. 1, no. 1, pp. 6–12, 2021.
- [34] R. Hidayat and A. Winursito, “A modified mfcc for improved wavelet-based denoising on robust speech recognition,” International Journal of Intelligent Engineering & Systems, vol. 14, no. 1, 2021.
- [35] H. Zhang, Z. Zhao, F. Huang, and L. Hu, “A study of sound recognition algorithm for power plant equipment fusing mfcc and imfcc features,” in International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2023), vol. 12707, pp. 810–816, SPIE, 2023.
- [36] N. Di, M. Z. Sharif, Z. Hu, R. Xue, and B. Yu, “Applicability of vggish embedding in bee colony monitoring: comparison with mfcc in colony sound classification,” PeerJ, vol. 11, p. e14696, 2023.
- [37] A. Kulkarni, V. Naik, and S. Kumavat, “Insect sound recognition using mfcc and cnn,” 2023.
YADDA identifier
bwmeta1.element.baztech-850a88dc-846b-463f-8cb1-a91541804452