Reproducibility Companion Paper: On Learning Disentangled Representation for Acoustic Event Detection

@inproceedings{Gao2021ReproducibilityCP,
  title={Reproducibility Companion Paper: On Learning Disentangled Representation for Acoustic Event Detection},
  author={Lijian Gao and Qirong Mao and Jingjing Chen and M. Dong and Ratna Babu Chinnam and Lucile Sassatelli and Miguel Fabi{\'a}n Romero Rond{\'o}n and Ujjwal Sharma},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  year={2021}
}
This companion paper describes the major experiments reported in our paper "On Learning Disentangled Representation for Acoustic Event Detection", published at ACM Multimedia 2019. To make replication of our work easier, we first introduce the computing environment in which all of our experiments were conducted. We also provide an environment configuration file to set up the compiling environment, along with other artifacts including the source code, datasets and the…
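The actual configuration file is part of the released artifacts and is not reproduced here. Purely as an illustration of the kind of setup check that replication typically involves, the Python sketch below verifies that installed package versions match a pinned list; all package names and versions are placeholders, not the authors' configuration.

# Hypothetical setup check; the package pins below are placeholders,
# not the authors' actual environment specification.
import importlib.metadata

EXPECTED = {"numpy": "1.19.5", "librosa": "0.8.0", "torch": "1.7.1"}

for pkg, want in EXPECTED.items():
    try:
        have = importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        print(f"{pkg}: not installed (expected {want})")
        continue
    status = "OK" if have == want else f"version mismatch (expected {want})"
    print(f"{pkg} {have}: {status}")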

References

On Learning Disentangled Representation for Acoustic Event Detection

Proposes a supervised β-VAE model for AED that adds a novel event-specific disentangling loss to the objective function of disentangled representation learning, achieving strong results on challenging AED tasks with a large variety of events and imbalanced data (a minimal sketch of the underlying β-VAE objective appears after this reference list).

Detection and Classification of Acoustic Scenes and Events

Reports on the state of the art in automatically classifying audio scenes and in automatically detecting and classifying audio events.

DCASE2017 Challenge Setup: Tasks, Datasets and Baseline System

Presents the setup of the DCASE2017 challenge tasks: task definitions, datasets, experimental setup, and baseline system results on the development dataset.

Freesound technical demo

Introduces Freesound to the multimedia community and demonstrates its potential as a research resource.
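For orientation only, the sketch below shows the standard β-VAE objective that the supervised model in the first reference builds on, written in PyTorch under the assumption of a Gaussian decoder with unit variance. It is not the authors' implementation, and the event-specific disentangling loss introduced in the paper is deliberately left out.

import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    # Reconstruction term: with a unit-variance Gaussian decoder the negative
    # log-likelihood reduces to a sum-of-squares reconstruction error.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # Closed-form KL divergence between the diagonal-Gaussian posterior
    # q(z|x) = N(mu, diag(exp(logvar))) and the standard-normal prior N(0, I).
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # The paper's supervised model adds an event-specific disentangling loss
    # to this objective; its exact form is not reproduced in this sketch.
    return recon + beta * kl

Raising beta above 1 trades reconstruction fidelity for a more factorised latent representation, which is the property disentangled-representation approaches to AED exploit.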