COVID-19 detection in cough, breath and speech using deep transfer learning and bottleneck features

Madhurananda Pahar, Marisa Klopper, Robin Warren, and Thomas R. Niesler. Computers in Biology and Medicine, article 105153.

Robust Cough Feature Extraction and Classification Method for COVID-19 Cough Detection Based on Vocalization Characteristics

A time-frequency differential feature is proposed to characterize the dynamic information of cough sounds in the time and frequency domains, and a convolutional neural network pre-trained on a large amount of unlabeled cough data is proposed for classification.

Automatic Tuberculosis and COVID-19 cough classification using deep learning

The application of deep transfer learning improved the classifiers' performance and made them more robust, as they generalise better across the cross-validation folds; it can therefore be an excellent tool for both TB and COVID-19 screening.

Automatic Non-Invasive Cough Detection based on Accelerometer and Audio Signals

It is found that either accelerometer or audio signals can be used to distinguish coughing from other activities, including sneezing, throat-clearing, and movement in bed, with high accuracy.

Cough-based COVID-19 detection with audio quality clustering and confidence measure based learning

A novel, class-agnostic Conformal Prediction non-conformity measure is proposed that takes cough sample quality into account, counteracting the variance caused by limiting segmentation to just the training set.

COVID-19 respiratory sound analysis and classification using audio textures

An audio texture analysis of sounds emitted by subjects suspected of COVID-19 infection is proposed, using time–frequency spectrograms; it is hypothesized that this textural sound analysis, based on local binary patterns and local ternary patterns, enables a better classification model by discriminating between people with COVID-19 and healthy subjects.
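
Such a local-binary-pattern texture descriptor over a spectrogram can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation; the random "spectrogram" below is a synthetic stand-in for real cough audio:

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern codes for the interior of a 2-D array."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

# synthetic stand-in for a log-magnitude spectrogram of a recording
rng = np.random.default_rng(0)
frames = rng.standard_normal((64, 256))
spectrogram = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-9)

codes = lbp8(spectrogram)
# the texture descriptor is the normalized histogram of LBP codes
descriptor, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
print(codes.shape, descriptor.sum())
```

The 256-bin histogram of codes, rather than the raw code map, is what would typically feed a classifier.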

Gradient Boosting Machine and Efficient Combination of Features for Speech-Based Detection of COVID-19

The present paper proposes a novel speech-based respiratory disease detection scheme for COVID-19 and asthma using a Gradient Boosting Machine classifier, providing a quick, cost-effective, reliable, and non-invasive potential alternative detection option for COVID-19 in the ongoing pandemic.
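
As a rough sketch of how such a Gradient Boosting Machine classifier might be wired up (assuming scikit-learn is available; the feature matrix and labels below are synthetic stand-ins, not real acoustic features from the paper):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical feature matrix: 200 recordings x 20 acoustic features
X = rng.standard_normal((200, 20))
# synthetic binary labels driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

In practice the columns of `X` would hold extracted acoustic features (e.g. MFCCs, spectral statistics) rather than random numbers.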

Sounds of COVID-19: exploring realistic performance of audio-based digital testing

It is found that an unrealistic experimental setting can result in misleading, sometimes over-optimistic, performance; complete and reliable results on crowd-sourced data are reported, which would allow medical professionals and policy makers to accurately assess the value of this technology and facilitate its deployment.

Audio texture analysis of COVID-19 cough, breath, and speech sounds

The Use of Audio Signals for Detecting COVID-19: A Systematic Review

The analysis of the extracted features showed that Mel-frequency cepstral coefficients and the zero-crossing rate continue to be the most popular choices, while convolutional neural networks and support vector machines were the best-performing methods.
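
For reference, both of those popular features can be computed from a single audio frame with plain NumPy. This is a minimal sketch of the textbook definitions (the sine wave is a synthetic stand-in for real audio), not any specific paper's pipeline:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fbank[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fbank[i - 1, k] = (right - k) / max(right - centre, 1)
    return fbank

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one frame: power spectrum -> mel filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    log_mel = np.log(mel_filterbank(n_filters, n_fft, sr) @ power + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_mel

sr = 16000
frame = np.sin(2 * np.pi * 440.0 * np.arange(1024) / sr)  # stand-in for audio
print(zero_crossing_rate(frame), mfcc(frame, sr).shape)
```

For a 440 Hz tone at 16 kHz the zero-crossing rate lands near 2·440/16000 ≈ 0.055, as expected for a pure tone.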

COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings

An AI speech processing framework is developed that leverages acoustic biomarker feature extractors to pre-screen for COVID-19 from cough recordings and provides a personalized patient saliency map to longitudinally monitor patients in real time, non-invasively, and at essentially zero variable cost.

A Comparative Study of Features for Acoustic Cough Detection Using Deep Architectures*

  • Igor Miranda, A. Diacon, T. Niesler
  • Computer Science
    2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2019
Although MFCC performance is improved by sinusoidal liftering, STFT and MFB features lead to better results, with an improvement exceeding 7% in the area under the receiver operating characteristic curve across all classifiers.
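
The sinusoidal liftering mentioned here is the standard HTK-style cepstral lifter; a minimal NumPy sketch of that formula (illustrative only, not the paper's code):

```python
import numpy as np

def sinusoidal_lifter(mfccs, L=22):
    """Scale cepstral coefficient n by 1 + (L/2)*sin(pi*n/L) (HTK-style lifter)."""
    n = np.arange(mfccs.shape[-1])
    return mfccs * (1.0 + (L / 2.0) * np.sin(np.pi * n / L))

liftered = sinusoidal_lifter(np.ones(13))
print(liftered[:3])  # coefficient 0 is unchanged, mid-order ones are boosted
```

The lifter leaves the zeroth coefficient untouched and emphasizes mid-order cepstral coefficients, which tend to carry the discriminative spectral-envelope detail.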

End-to-end convolutional neural network enables COVID-19 detection from breath and cough audio: a pilot study

It is shown that a deep neural network based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings; this motivates a comprehensive follow-up study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.

Deep Neural Network Based Cough Detection Using Bed-Mounted Accelerometer Measurements

It is concluded that high-accuracy cough monitoring based only on measurements from the accelerometer in a consumer smartphone is possible and may represent a more convenient and readily accepted method of long-term patient cough monitoring.

Rapid and Scalable COVID-19 Screening using Speech, Breath, and Cough Recordings

A simple method for analyzing sounds is presented that can be deployed in a system to unobtrusively detect COVID-19; it shows promise as a rapid screening tool using speech recordings as the world moves to contain future outbreaks and accelerate vaccination efforts.

COVID-19 cough classification using machine learning and global smartphone recordings

Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data

The results show that even a simple binary machine learning classifier is able to correctly classify healthy and COVID-19 sounds, which opens the door to further investigation of how automatically analysed respiratory patterns could be used as pre-screening signals to aid COVID-19 diagnosis.

Coswara - A Database of Breathing, Cough, and Voice Sounds for COVID-19 Diagnosis

The COVID-19 pandemic presents global challenges transcending boundaries of country, race, religion, and economy. The current gold standard for COVID-19 detection is the reverse transcription polymerase chain reaction (RT-PCR) test.

Studying the Similarity of COVID-19 Sounds based on Correlation Analysis of MFCC

The importance of speech signal processing in extracting the Mel-Frequency Cepstral Coefficients (MFCCs) of COVID-19 and non-COVID-19 samples is illustrated, and their relationship is quantified using Pearson's correlation coefficient.
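
The correlation step can be illustrated in a few lines of NumPy; the vectors below are synthetic stand-ins for real MFCC feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
mfcc_a = rng.standard_normal(13)                 # stand-in MFCC vector, sample A
mfcc_b = mfcc_a + 0.1 * rng.standard_normal(13)  # a perturbed, similar sample B
r = np.corrcoef(mfcc_a, mfcc_b)[0, 1]            # Pearson's r
print(f"Pearson r = {r:.3f}")                    # close to 1 for similar samples
```

Pairs of acoustically similar samples yield r near 1, while unrelated samples yield r near 0, which is the basis for the similarity analysis.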

DiCOVA Challenge: Dataset, task, and baseline system for COVID-19 diagnosis using acoustics

The challenge features two tracks, one focusing on cough sounds and the other on a collection of breath, sustained vowel phonation, and number-counting speech recordings; a baseline system for the task is also presented.