A novel study to classify breath inhalation and breath exhalation using audio signals from heart and trachea

dc.contributor.author: Kavsaoglu, Ahmet Resit
dc.contributor.author: Sehirli, Eftal
dc.date.accessioned: 2024-09-29T15:55:05Z
dc.date.available: 2024-09-29T15:55:05Z
dc.date.issued: 2023
dc.department: Karabük Üniversitesi [en_US]
dc.description.abstract: Respiration is a vital process for all living organisms. In the diagnosis and detection of many health problems, a patient's respiration rate, breath inhalation, and breath exhalation conditions are primarily taken into consideration by doctors, clinicians, and healthcare staff. In this study, an interactive application is designed to collect audio signals, present visual information about them, create a novel 21253x20 audio signal dataset for the detection of breath inhalation and breath exhalation performed through the nose and mouth, and classify the audio signals as breath inhalation or breath exhalation using machine learning (ML) models. Audio signals are recorded from volunteers' hearts (method 1) and tracheas (method 2). ML models, namely decision tree (DT), Naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), gradient boosted trees (GBT), random forest (RF), and an artificial neural network (ANN), are applied to the created dataset to classify the audio signals received from the nose and mouth into the two conditions. The highest sensitivity, specificity, accuracy, and Matthews correlation coefficient (MCC) for the classification of breath inhalation and breath exhalation are 91.82%, 87.20%, 89.51%, and 0.79, respectively, obtained by method 2 with majority voting of KNN, RF, and SVM. This paper mainly focuses on the use of audio signals and ML models as a novel approach to classifying respiratory conditions based on breath inhalation and breath exhalation via an interactive application. It shows that audio signals recorded with method 2 are more effective for extracting information than those recorded with method 1. [en_US]
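The best-performing configuration reported in the abstract is hard majority voting over KNN, RF, and SVM. A minimal sketch of that scheme is below; it is an illustrative reconstruction, not the authors' code, and the synthetic data and all hyperparameters are placeholders standing in for the 21253x20 audio-feature dataset:

```python
# Hard majority voting over KNN, RF, and SVM, as in the abstract's
# best-performing configuration. Data and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for the audio-feature dataset: 20 features,
# 2 classes (inhalation vs. exhalation).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hard voting: each model casts one label vote; the majority label wins.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.2f}")
```

With three voters, ties are impossible on a binary task, which is presumably why an odd number of models was combined.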
dc.identifier.doi: 10.1016/j.bspc.2022.104220
dc.identifier.issn: 1746-8094
dc.identifier.issn: 1746-8108
dc.identifier.scopus: 2-s2.0-85140015979 [en_US]
dc.identifier.scopusquality: Q1 [en_US]
dc.identifier.uri: https://doi.org/10.1016/j.bspc.2022.104220
dc.identifier.uri: https://hdl.handle.net/20.500.14619/4448
dc.identifier.volume: 80 [en_US]
dc.identifier.wos: WOS:000877950400005 [en_US]
dc.identifier.wosquality: Q2 [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.language.iso: en [en_US]
dc.publisher: Elsevier Sci Ltd [en_US]
dc.relation.ispartof: Biomedical Signal Processing and Control [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Audio signal classification [en_US]
dc.subject: Breath inhalation [en_US]
dc.subject: Breath exhalation [en_US]
dc.subject: Machine learning [en_US]
dc.title: A novel study to classify breath inhalation and breath exhalation using audio signals from heart and trachea [en_US]
dc.type: Article [en_US]

Files