Corpus ID: 221136093

The Impact of Label Noise on a Music Tagger

@article{Prinz2020TheIO,
  title={The Impact of Label Noise on a Music Tagger},
  author={Katharina Prinz and Arthur Flexer and Gerhard Widmer},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.06273}
}
We explore how much can be learned from noisy labels in audio music tagging. Our experiments show that carefully annotated labels result in the highest figures of merit, but even high amounts of noisy labels contain enough information for successful learning. Artificial corruption of curated data allows us to quantify this contribution of noisy labels.
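
The abstract does not spell out the corruption procedure itself; below is a minimal sketch of one common way to inject artificial label noise into a curated multi-label tagging matrix, namely flipping each binary tag independently with some probability. The function name and the flip probability are illustrative assumptions, not the authors' exact protocol.

import numpy as np

def corrupt_labels(labels: np.ndarray, flip_prob: float, seed: int = 0) -> np.ndarray:
    """Independently flip each binary tag with probability flip_prob.

    labels: (num_clips, num_tags) binary matrix of curated annotations.
    Returns a corrupted copy; the input array is left untouched.
    """
    rng = np.random.default_rng(seed)
    flip_mask = rng.random(labels.shape) < flip_prob
    return np.where(flip_mask, 1 - labels, labels)

# Toy example: corrupt roughly 30% of the tags of two annotated clips.
clean = np.array([[1, 0, 0, 1],
                  [0, 1, 0, 0]])
noisy = corrupt_labels(clean, flip_prob=0.3)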


References

Showing 1-10 of 12 references
Weak Multi-Label Audio-Tagging with Class Noise
TLDR
Task 2 of the DCASE2019 Challenge, titled "Audio tagging with noisy labels and minimal supervision", tried to answer the question of whether such data can be incorporated into an audio-tagging learning process in a meaningful manner.
Learning Sound Event Classifiers from Web Audio with Noisy Labels
TLDR
Experiments suggest that training with large amounts of noisy data can outperform training with smaller amounts of carefully labeled data, and it is shown that noise-robust loss functions can be effective in improving performance in the presence of corrupted labels.
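
The summary does not name the specific noise-robust losses evaluated; one loss frequently used for this purpose is the generalized (Lq) cross-entropy, sketched below per tag as an illustration under that assumption rather than as the exact loss from the paper. As q approaches 0 it recovers standard cross-entropy, while q = 1 yields the more noise-robust mean absolute error.

import numpy as np

def lq_loss(probs: np.ndarray, targets: np.ndarray, q: float = 0.7) -> float:
    """Generalized (Lq) cross-entropy, applied independently per tag.

    probs:   predicted tag probabilities in (0, 1), shape (batch, num_tags)
    targets: binary tag annotations of the same shape
    """
    # Probability the model assigns to the observed label of each tag.
    p_observed = np.where(targets == 1, probs, 1.0 - probs)
    return float(np.mean((1.0 - p_observed ** q) / q))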
Learning from Noisy Labels with Distillation
TLDR
This work proposes a unified distillation framework that uses “side” information, including a small clean dataset and label relations in a knowledge graph, to “hedge the risk” of learning from noisy labels, and proposes a suite of new benchmark datasets to evaluate this task in the Sports, Species, and Artifacts domains.
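
As a rough sketch of the “hedge the risk” idea, the noisy annotation can be blended with the prediction of an auxiliary model trained on the small clean subset into a soft training target for the student network. The mixing weight and function name below are illustrative, and the role of the knowledge graph is omitted.

import numpy as np

def distilled_targets(noisy_labels: np.ndarray,
                      clean_model_probs: np.ndarray,
                      lam: float = 0.7) -> np.ndarray:
    """Soft targets for the student: a convex combination of the noisy
    annotations and the predictions of a model trained on the clean subset."""
    return lam * noisy_labels + (1.0 - lam) * clean_model_probs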
Audio tagging with noisy labels and minimal supervision
TLDR
This paper presents the task setup, the FSDKaggle2019 dataset prepared for this scientific evaluation, and a baseline system consisting of a convolutional neural network.
A Closer Look at Weak Label Learning for Audio Events
TLDR
This work describes a CNN-based approach for weakly supervised training of audio events, discusses important characteristics that naturally arise in weakly supervised learning of sound events, and shows how these aspects of weak labels affect the generalization of models.
Audio Tagging with Convolutional Neural Networks Trained with Noisy Data (Technical Report)
TLDR
By averaging over the predictions of all five networks, an ensemble is obtained that provides the likelihood of 80 different labels being present in an input audio clip and reaches a label-weighted label-ranking average precision (lwlrap) of 0.722.
Classification with Noisy Labels by Importance Reweighting
Tongliang Liu, D. Tao. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
TLDR
It is proved that any surrogate loss function can be used for classification with noisy labels by means of importance reweighting, with the consistency guarantee that the label noise does not ultimately hinder the search for the optimal classifier for the noise-free data.
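
The reweighting identity behind this result can be stated compactly: the risk under the clean distribution equals the expectation, under the noisy distribution, of the loss weighted by beta(x, y~) = P_clean(y~ | x) / P_noisy(y~ | x). The sketch below assumes both posteriors for the observed label are already available (in the paper they have to be estimated, e.g. via the noise rates); the array names are illustrative.

import numpy as np

def reweighted_risk(losses: np.ndarray,
                    clean_posterior: np.ndarray,
                    noisy_posterior: np.ndarray) -> float:
    """Importance-reweighted empirical risk.

    losses:          per-example surrogate loss computed on the noisy labels
    clean_posterior: P_clean(observed label | x) for each example
    noisy_posterior: P_noisy(observed label | x) for each example
    """
    beta = clean_posterior / noisy_posterior
    return float(np.mean(beta * losses))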
Learning with Noisy Labels
TLDR
The problem of binary classification in the presence of random classification noise, where the learner sees labels that have been flipped independently with some small probability, is studied theoretically, and methods used in practice such as the biased SVM and weighted logistic regression are shown to be provably noise-tolerant.
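
The paper's method of unbiased estimators admits a compact restatement for labels in {-1, +1}: given the class-conditional flip rates, a corrected loss is formed whose expectation over the label flips equals the loss on the clean label. The sketch below is a generic illustration of that construction with illustrative names, not the authors' code.

def unbiased_loss(loss_fn, score: float, noisy_label: int,
                  rho_pos: float, rho_neg: float) -> float:
    """Noise-corrected binary loss for labels in {-1, +1}.

    rho_pos: probability that a clean +1 label was flipped to -1
    rho_neg: probability that a clean -1 label was flipped to +1
    Requires rho_pos + rho_neg < 1.
    """
    rho_same = rho_pos if noisy_label == 1 else rho_neg    # flip rate of the observed label
    rho_other = rho_neg if noisy_label == 1 else rho_pos   # flip rate of the opposite label
    return ((1.0 - rho_other) * loss_fn(score, noisy_label)
            - rho_same * loss_fn(score, -noisy_label)) / (1.0 - rho_pos - rho_neg)

# Example with the hinge loss.
hinge = lambda t, y: max(0.0, 1.0 - y * t)
value = unbiased_loss(hinge, score=0.4, noisy_label=1, rho_pos=0.2, rho_neg=0.1)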
Learning from Noisy Labels with Deep Neural Networks
TLDR
A novel way of modifying deep learning models is proposed so that they can be effectively trained on data with a high level of label noise, and it is shown that random images without labels can improve the classification performance.
Freesound Datasets: A Platform for the Creation of Open Audio Datasets
Paper presented at the 18th International Society for Music Information Retrieval Conference, held in Suzhou, China, 23-27 October 2017.