A novel study to classify breath inhalation and breath exhalation using audio signals from heart and trachea
dc.contributor.author | Kavsaoglu, Ahmet Resit | |
dc.contributor.author | Sehirli, Eftal | |
dc.date.accessioned | 2024-09-29T15:55:05Z | |
dc.date.available | 2024-09-29T15:55:05Z | |
dc.date.issued | 2023 | |
dc.department | Karabük Üniversitesi | en_US |
dc.description.abstract | Respiration is a vital process for all living organisms. In the diagnosis and detection of many health problems, a patient's respiration rate, breath inhalation, and breath exhalation conditions are primarily taken into consideration by doctors, clinicians, and healthcare staff. In this study, an interactive application is designed to collect audio signals, present visual information about them, create a novel 21253x20 audio signal dataset for the detection of breath inhalation and breath exhalation performed through the nose and mouth, and classify audio signals as breath inhalation or breath exhalation using machine learning (ML) models. Audio signals are received from volunteers' hearts (method 1) and tracheas (method 2). ML models such as decision tree (DT), Naive Bayes (NB), support vector machines (SVM), k-nearest neighbor (KNN), gradient boosted trees (GBT), random forest (RF), and artificial neural network (ANN) are applied to the created dataset to classify the audio signals received from the nose and mouth into the two conditions. The highest sensitivity, specificity, accuracy, and Matthews correlation coefficient (MCC) for the classification of breath inhalation and breath exhalation are obtained as 91.82%, 87.20%, 89.51%, and 0.79, respectively, by method 2 based on majority voting of KNN, RF, and SVM. This paper mainly focuses on the use of audio signals and ML models as a novel approach to classifying respiratory conditions based on breath inhalation and breath exhalation via an interactive application. This paper shows that audio signals received with method 2 are more effective for extracting information than audio signals received with method 1. | en_US |
dc.identifier.doi | 10.1016/j.bspc.2022.104220 | |
dc.identifier.issn | 1746-8094 | |
dc.identifier.issn | 1746-8108 | |
dc.identifier.scopus | 2-s2.0-85140015979 | en_US |
dc.identifier.scopusquality | Q1 | en_US |
dc.identifier.uri | https://doi.org/10.1016/j.bspc.2022.104220 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14619/4448 | |
dc.identifier.volume | 80 | en_US |
dc.identifier.wos | WOS:000877950400005 | en_US |
dc.identifier.wosquality | Q2 | en_US |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier Sci Ltd | en_US |
dc.relation.ispartof | Biomedical Signal Processing and Control | en_US |
dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Audio signal classification | en_US |
dc.subject | Breath inhalation | en_US |
dc.subject | Breath exhalation | en_US |
dc.subject | Machine learning | en_US |
dc.title | A novel study to classify breath inhalation and breath exhalation using audio signals from heart and trachea | en_US |
dc.type | Article | en_US |