Automatic music tagging via PARAFAC2

@inproceedings{Panagakis2011AutomaticMT,
  title={Automatic music tagging via {PARAFAC2}},
  author={Yannis Panagakis and Constantine Kotropoulos},
  booktitle={2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2011},
  pages={481--484}
}
Automatic music tagging is addressed by resorting to auditory temporal modulations and Parallel Factor Analysis 2 (PARAFAC2). The starting point is to represent each music recording by its auditory temporal modulations. An irregular third-order tensor is then formed: the first slice contains the vectorized training temporal modulations, while the second slice contains the corresponding multi-label vectors. PARAFAC2 is employed to effectively harness the multi-label information for…
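The slice construction described in the abstract can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation; the dimensions `D`, `T`, `N` and the random features are placeholder assumptions. The point is that the two slices have different row counts (feature dimension vs. number of tags) but share the column mode (training recordings), which is precisely the kind of irregular tensor PARAFAC2 is designed to factor.

```python
import numpy as np

# Hypothetical sizes: D-dimensional modulation features, T tags, N training recordings.
rng = np.random.default_rng(0)
D, T, N = 64, 10, 30

# Slice 1: each column is a vectorized auditory temporal modulation representation.
features = rng.random((D, N))

# Slice 2: each column is a binary multi-label (tag) indicator vector.
labels = (rng.random((T, N)) > 0.7).astype(float)

# The slices differ in row count (D vs. T) but are coupled along the
# recording mode -- the "irregular third-order tensor" of the abstract.
slices = [features, labels]
assert all(s.shape[1] == N for s in slices)
print([s.shape for s in slices])  # [(64, 30), (10, 30)]
```

A PARAFAC2 decomposition of such a list of slices (e.g. via an off-the-shelf tensor library) would yield per-slice loading matrices coupled through a shared factor over the recording mode, which is how the multi-label information is tied to the audio features.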
