An Adversarial Feature Distillation Method for Audio Classification

@article{Gao2019AnAF,
  title={An Adversarial Feature Distillation Method for Audio Classification},
  author={Liang Gao and Haibo Mi and Boqing Zhu and Dawei Feng and Yicong Li and Yuxing Peng},
  journal={IEEE Access},
  year={2019},
  volume={7},
  pages={105319--105330}
}
The audio classification task aims to discriminate between different types of audio signals. In this task, deep neural networks have achieved better performance than traditional machine-learning methods based on shallow architectures. However, deep neural networks often have huge computational and storage requirements that hinder their deployment on embedded devices. In this paper, we propose a distillation method which transfers knowledge from well-trained networks to a small network, and the…
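The abstract above describes transferring knowledge from well-trained teacher networks to a small student network. As background, the standard soft-label distillation loss (the Hinton-style formulation, not necessarily the adversarial variant this paper proposes) can be sketched in plain numpy; function names here are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student predictions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return (T ** 2) * kl.mean()
```

In practice this term is combined with the usual cross-entropy on ground-truth labels; the loss is zero when the student exactly matches the teacher's logits.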
Knowledge Distillation: A Survey
TLDR
A comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, distillation algorithms and applications is provided.
CNN-Based Acoustic Scene Classification System
TLDR
A more general classification model was proposed by combining the harmonic-percussive source separation and deltas-deltadeltas features with four different models to develop a low-complexity model.
Sound Context Classification Basing on Join Learning Model and Multi-Spectrogram Features
TLDR
A deep learning framework for Acoustic Scene Classification (ASC), the task of classifying scene contexts from environmental input sounds, is presented, and a novel joint learning architecture using parallel convolutional recurrent networks is proposed that is effective for learning spatial features and temporal sequences from spectrogram input.
KDnet-RUL: A Knowledge Distillation Framework to Compress Deep Neural Networks for Machine Remaining Useful Life Prediction
TLDR
A knowledge distillation framework, entitled KDnet-RUL, to compress a complex LSTM-based method for RUL prediction and demonstrates that the proposed method significantly outperforms state-of-the-art KD methods.
Predicting Respiratory Anomalies and Diseases Using Deep Learning Models
In this paper, robust deep learning frameworks are introduced that aim to detect respiratory diseases from respiratory sound inputs. The entire process begins with a front-end feature
Robust Acoustic Scene Classification to Multiple Devices Using Maximum Classifier Discrepancy and Knowledge Distillation
TLDR
Robust acoustic scene classification across multiple devices is achieved using maximum classifier discrepancy (MCD) and knowledge distillation (KD), and a multi-device robust ASC model is obtained via KD.
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks
  • Lin Wang, Kuk-Jin Yoon · IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
TLDR
This paper provides a comprehensive survey on the recent progress of KD methods together with S-T frameworks typically used for vision tasks and systematically analyzes the research status of KD in vision applications.
CNN-MoE Based Framework for Classification of Respiratory Anomalies and Lung Disease Detection
TLDR
A novel deep learning system built on the proposed framework is presented, which outperforms current state-of-the-art methods and applies a Teacher-Student scheme to achieve a trade-off between model performance and model complexity, holding promise for building real-time applications.

References

SHOWING 1-10 OF 38 REFERENCES
A Survey of Model Compression and Acceleration for Deep Neural Networks
TLDR
This paper surveys recent advanced techniques for compacting and accelerating CNN models, roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation.
Mixup-Based Acoustic Scene Classification Using Multi-Channel Convolutional Neural Network
TLDR
This paper explores the use of a multi-channel CNN for the classification task, which extracts features from different channels in an end-to-end manner, and the use of the mixup method, which provides higher prediction accuracy and robustness compared with previous models.
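The mixup method referenced in this entry is a simple data-augmentation scheme: training examples and their (one-hot) labels are replaced by convex combinations of random pairs. A minimal numpy sketch, with illustrative names:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Draw the mixing coefficient from Beta(alpha, alpha), then form
    # convex combinations of both the inputs and the one-hot labels.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

Smaller `alpha` pushes the mixing coefficient toward 0 or 1 (near-original samples); `alpha = 1` gives a uniform mix.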
MEAL: Multi-Model Ensemble via Adversarial Learning
TLDR
This paper proposes an adversarial-based learning strategy where a block-wise training loss is defined to guide and optimize the predefined student network to recover the knowledge in teacher models, and to promote the discriminator network to distinguish teacher vs. student features simultaneously.
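The adversarial strategy this entry describes pits a discriminator, which labels features as teacher-produced or student-produced, against a student trained to make its features look teacher-like. A schematic numpy sketch of the two binary cross-entropy objectives (names and the single-logit simplification are assumptions for illustration, not the paper's exact block-wise loss):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_losses(d_teacher_logit, d_student_logit):
    # Discriminator objective: label teacher features 1, student features 0.
    d_loss = -(np.log(sigmoid(d_teacher_logit)) + np.log(1.0 - sigmoid(d_student_logit)))
    # Student ("generator") objective: make its features classified as teacher-like.
    g_loss = -np.log(sigmoid(d_student_logit))
    return d_loss, g_loss
```

As the student fools the discriminator (its logit rises), its adversarial loss shrinks, which is the signal that drives the feature matching.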
Learning and Fusing Multimodal Deep Features for Acoustic Scene Categorization
TLDR
A novel acoustic scene classification system based on multimodal deep feature fusion is proposed, where three CNNs have been presented to perform 1D raw waveform modeling, 2D time-frequency image modeling, and 3D spatial-temporal dynamics modeling, respectively.
Paraphrasing Complex Network: Network Compression via Factor Transfer
TLDR
A novel knowledge transfer method which uses convolutional operations to paraphrase teacher's knowledge and to translate it for the student and observes that the student network trained with the proposed factor transfer method outperforms the ones trained with conventional knowledge transfer methods.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
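The residual learning idea summarized here is that each block computes y = F(x) + x, so the stacked layers only need to learn a residual correction to the identity. A minimal numpy sketch of one fully-connected residual block (names are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # y = relu(F(x) + x): the skip connection lets the block default to
    # the identity when F is near zero, easing optimization at depth.
    return relu(relu(x @ w1) @ w2 + x)
```

With zero weights the block reduces to relu(x), which illustrates why adding such blocks cannot hurt a deeper network's representational floor.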
Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TLDR
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
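The binarization this entry refers to constrains weights and activations to +1 or -1, so dense dot products reduce to XNOR/popcount operations on hardware. A minimal numpy emulation of the forward pass (function names are illustrative):

```python
import numpy as np

def binarize(w):
    # Deterministic sign binarization; zero maps to +1 by convention.
    return np.where(w >= 0, 1.0, -1.0)

def binary_dense(x, w_real):
    # Forward pass of a binarized layer: both activations and weights are
    # binarized, so each product is +/-1 (emulated here with floats).
    return binarize(x) @ binarize(w_real)
```

During training, real-valued weights are kept and updated; binarization is applied in the forward and backward passes (with a straight-through gradient estimator).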
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
TLDR
A new perspective based on the decision boundary, one of the most important components of a classifier, is provided, and the proposed algorithm trains a student classifier on adversarial samples supporting the decision boundary.
An Analysis of Deep Neural Network Models for Practical Applications
TLDR
This work presents a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption and believes it provides a compelling set of information that helps design and engineer efficient DNNs.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.