Corpus ID: 231934101

Improving Deep-learning-based Semi-supervised Audio Tagging with Mixup

@article{Cances2021ImprovingDS,
  title={Improving Deep-learning-based Semi-supervised Audio Tagging with Mixup},
  author={L{\'e}o Cances and {\'E}tienne Labb{\'e} and Thomas Pellegrini},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.08183}
}
Recently, semi-supervised learning (SSL) methods, in the framework of deep learning (DL), have been shown to provide state-of-the-art results on image datasets by exploiting unlabeled data. Usually evaluated on object recognition tasks in images, these algorithms are rarely compared when applied to audio tasks. In this article, we adapted four recent SSL methods to the task of audio tagging. The first two methods, namely Deep Co-Training (DCT) and Mean Teacher (MT), involve two…
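
As a rough illustration of the mixup operation central to the title, here is a minimal PyTorch sketch, assuming log-mel spectrogram inputs and one-hot tag targets; the function name and the alpha default are illustrative, not taken from the paper.

import torch

def mixup(x, y, alpha=0.4):
    # Sample a mixing weight from Beta(alpha, alpha), then blend each
    # example (and its target) with a random partner from the same batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]   # mixed spectrograms
    y_mix = lam * y + (1.0 - lam) * y[perm]   # mixed (soft) targets
    return x_mix, y_mix

# Usage: a batch of 8 single-channel 64x400 log-mel spectrograms, 10 classes.
x = torch.randn(8, 1, 64, 400)
y = torch.zeros(8, 10).scatter_(1, torch.randint(0, 10, (8, 1)), 1.0)
x_mix, y_mix = mixup(x, y)
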
2 Citations

Improving Semi-Supervised Learning for Audio Classification with FixMatch
Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art results were obtained
A Preliminary Study on Environmental Sound Classification Leveraging Large-Scale Pretrained Model and Semi-Supervised Learning
To simulate a low-resource sound classification setting where only limited supervised examples are available, transfer learning is combined with a recently proposed training algorithm and a data augmentation method for semi-supervised model training.

References

Showing 1–10 of 28 references
Semi-Supervised Audio Classification with Consistency-Based Regularization
This paper incorporates audio-specific perturbations into the Mean Teacher algorithm and demonstrates the effectiveness of the resulting method on audio classification tasks.
Unsupervised Data Augmentation for Consistency Training
A new perspective on how to effectively noise unlabeled examples is presented, and it is argued that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Temporal Ensembling for Semi-Supervised Learning
Self-ensembling is introduced, in which an ensemble of the network's predictions from earlier training epochs is maintained; this ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training.
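
In sketch form (dataset size, class count, and the EMA momentum are assumed values, not taken from the paper), the ensemble target is an exponential moving average of past predictions with a startup bias correction:

import torch

N, C, alpha = 1000, 10, 0.6    # dataset size, classes, EMA momentum (assumed)
Z = torch.zeros(N, C)          # running average of predictions
targets = torch.zeros(N, C)    # bias-corrected consistency targets

def update_targets(epoch, idx, preds):
    # idx: dataset indices of the batch; preds: (B, C) softmax outputs.
    Z[idx] = alpha * Z[idx] + (1 - alpha) * preds.detach()
    targets[idx] = Z[idx] / (1 - alpha ** (epoch + 1))

def consistency_loss(idx, preds):
    # Mean-squared error against the ensembled targets.
    return torch.mean((preds - targets[idx]) ** 2)
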
Deep Co-Training for Semi-Supervised Image Recognition
This paper presents Deep Co-Training, a deep learning based method inspired by the Co-Training framework, which outperforms the previous state-of-the-art methods by a large margin in semi-supervised image recognition.
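
DCT, one of the two methods named in the abstract above, trains two networks that are encouraged to agree on unlabeled data. A hedged sketch of such an agreement term, written as a Jensen-Shannon divergence between the two models' predictions; the full method also includes a view-difference term on adversarial examples, omitted here:

import torch
import torch.nn.functional as F

def cotraining_agreement(logits_a, logits_b, eps=1e-8):
    # Jensen-Shannon divergence between the two networks' predictions
    # on the same unlabeled batch: H(mean) minus the mean of entropies.
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    m = 0.5 * (p_a + p_b)
    def entropy(p):
        return -(p * torch.log(p + eps)).sum(dim=1)
    return (entropy(m) - 0.5 * (entropy(p_a) + entropy(p_b))).mean()
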
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
This paper demonstrates the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling, and shows that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
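
A minimal sketch of the unlabeled-data loss, assuming `model` returns class logits; the 0.95 confidence threshold is the default reported for image benchmarks:

import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    # Pseudo-label from the weakly augmented view, kept only when the
    # model is confident; cross-entropy against the strongly augmented view.
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (mask * loss).mean()
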
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
An unsupervised loss function is proposed that takes advantage of the stochastic nature of randomized data augmentation, dropout, and pooling, and minimizes the difference between the predictions of multiple passes of a training sample through the network.
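
In sketch form (the `augment` callable stands in for any stochastic transformation and is an assumption here), the loss compares two stochastic forward passes over the same batch:

import torch

def transform_consistency(model, x, augment):
    # The two passes differ through random augmentation (and dropout, if
    # the model is in training mode); penalize the prediction gap.
    p1 = torch.softmax(model(augment(x)), dim=1)
    p2 = torch.softmax(model(augment(x)), dim=1)
    return torch.mean((p1 - p2) ** 2)
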
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks, but it becomes unwieldy when learning large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
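
The weight-averaging step in sketch form; the decay value is assumed, and a full implementation would also track buffers such as batch-norm statistics, omitted here:

import torch

@torch.no_grad()
def ema_update(student, teacher, decay=0.999):
    # Teacher parameters follow an exponential moving average of the
    # student's; the teacher's predictions serve as consistency targets.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)
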
MixMatch: A Holistic Approach to Semi-Supervised Learning
This work unifies the current dominant approaches to semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp.
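
The label-guessing step in sketch form (K=2 and T=0.5 follow the paper's reported defaults; `augment` is an assumed stochastic transform). The guessed labels are then mixed with labeled data using MixUp, much as in the sketch after the abstract above:

import torch

def guess_labels(model, augment, x_unlabeled, K=2, T=0.5):
    # Average softmax predictions over K augmentations of each example,
    # then sharpen with temperature T to lower the entropy of the guess.
    with torch.no_grad():
        p = torch.stack([
            torch.softmax(model(augment(x_unlabeled)), dim=1)
            for _ in range(K)
        ]).mean(dim=0)
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)
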
Pseudo-Label : The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks
A simple and efficient method of semi-supervised learning for deep neural networks is proposed: the network is trained in a supervised fashion with labeled and unlabeled data simultaneously, which favors a low-density separation between classes.
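
In sketch form; the weight on the unlabeled loss is ramped up over training in the original method, a schedule omitted here:

import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, weight):
    # The argmax class becomes a hard label for unlabeled data, which
    # implicitly favors a low-density separation between classes.
    logits = model(x_unlabeled)
    pseudo = logits.argmax(dim=1).detach()
    return weight * F.cross_entropy(logits, pseudo)
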
FSD50K: An Open Dataset of Human-Labeled Sound Events
FSD50K is introduced, an open dataset containing over 51k audio clips totalling over 100 h of audio, manually labeled using 200 classes drawn from the AudioSet Ontology, to provide an alternative benchmark dataset and thus foster sound event recognition (SER) research.