COVID-Net Assistant: A Deep Learning-Driven Virtual Assistant for COVID-19 Symptom Prediction and Recommendation

Peng Shi, Yuetong Wang, Saadullah Farooq Abbasi and Alexander Wong
As the COVID-19 pandemic continues to put a significant burden on healthcare systems worldwide, there has been growing interest in finding inexpensive symptom pre-screening and recommendation methods to assist in efficiently using available medical resources such as PCR tests. In this study, we introduce the design of COVID-Net Assistant, an efficient virtual assistant designed to provide symptom prediction and recommendations for COVID-19 by analyzing users' cough recordings through deep… 

COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest CT Images

COVIDNet-CT, a deep convolutional neural network architecture tailored for detecting COVID-19 cases from chest CT images via a machine-driven design exploration approach, is introduced along with its accompanying dataset.

COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images

COVID-Net is introduced, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public, along with COVIDx, an open-access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases.

COVID-19 Diagnosis from Cough Acoustics using ConvNets and Data Augmentation

A deep learning approach is presented to analyze the acoustic dataset provided in Track 1 of the DiCOVA 2021 Challenge, containing cough sound recordings from both COVID-19 positive and negative examples, using Mel Frequency Cepstral Coefficients (MFCCs) as the input features to the proposed model.

Cough Classification for COVID-19 based on audio mfcc features using Convolutional Neural Networks

The two approaches proposed in this paper use MFCC features and spectrogram images, respectively, as input to a CNN; the MFCC approach produced 70.58% test accuracy with 81% sensitivity, outperforming the spectrogram-based approach.
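The MFCC feature extraction that both cough-classification papers above rely on can be sketched as follows. This is a minimal numpy-only illustration of the standard pipeline (framing, power spectrum, mel filterbank, log, DCT); the frame size, hop length, and filterbank parameters are illustrative assumptions, not values taken from either paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Compute MFCCs from a mono signal.
    Sketch of the standard pipeline: frame -> power spectrum ->
    mel filterbank -> log -> DCT-II, keeping the first n_mfcc coefficients."""
    # Frame the signal with a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel-spaced filterbank
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)  # rising slope
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)  # falling slope
    # Log mel energies, then DCT-II to decorrelate the coefficients
    mel_energy = np.log(power @ fbank.T + 1e-10)
    n = mel_energy.shape[1]
    basis = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                   * np.arange(n_mfcc)[:, None])
    return mel_energy @ basis.T  # shape: (n_frames, n_mfcc)
```

The resulting (n_frames, n_mfcc) matrix is what would be fed to a CNN in the approaches described above; production systems typically use a library implementation such as librosa instead of hand-rolled code.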

Coswara - A Database of Breathing, Cough, and Voice Sounds for COVID-19 Diagnosis

The COVID-19 pandemic presents global challenges transcending boundaries of country, race, religion, and economy. The current gold standard method for COVID-19 detection is the reverse transcription polymerase chain reaction (RT-PCR) test.

NetScore: Towards Universal Metrics for Large-scale Performance Analysis of Deep Neural Networks for Practical Usage

A new balanced metric called NetScore is proposed, which is designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network.

Residual Networks Behave Like Ensembles of Relatively Shallow Networks

This work proposes a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length, and reveals one of the key characteristics that seem to enable the training of very deep networks: residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.

GenSynth: a generative synthesis approach to learning generative machines for generating efficient neural networks

GenSynth can be a powerful, generalised approach for accelerating and improving the building of deep neural networks for on-device edge scenarios.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
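The shortcut connection central to both residual-network papers above can be sketched as follows. This is a minimal illustration assuming a simple two-layer fully connected form; the names `residual_block`, `w1`, and `w2` are illustrative, not from the paper.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Minimal residual block: output = relu(x + F(x)),
    where F(x) = w2 @ relu(w1 @ x) is the residual branch.
    The identity shortcut lets the gradient flow directly past F."""
    fx = w2 @ np.maximum(w1 @ x, 0.0)  # residual branch F(x)
    return np.maximum(x + fx, 0.0)     # shortcut addition, then ReLU

# With zero residual weights the block reduces to the identity map
# (for non-negative x), which is why stacking more such blocks does
# not make the network harder to optimize than a shallower one.
x = np.array([1.0, 2.0, 3.0])
w = np.zeros((3, 3))
out = residual_block(x, w, w)
```

The shortcut also explains the ensemble view above: unrolling a stack of such blocks yields many paths that either pass through or skip each residual branch.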