YIN, a fundamental frequency estimator for speech and music.


An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.
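The modifications the paper makes to the autocorrelation method center on a squared-difference function between the signal and its lagged copy, normalized by its cumulative mean and then thresholded. The following is a minimal sketch of that core idea, not the paper's reference implementation: the function name, window handling, search bound `fmax`, and the 0.1 threshold default are illustrative assumptions, and the paper's later refinement steps (parabolic interpolation, local estimate selection) are omitted.

```python
import numpy as np

def yin_f0(x, fs, fmax=500.0, threshold=0.1):
    """Hedged sketch of a YIN-style F0 estimate for a 1-D signal x at rate fs."""
    tau_min = int(fs / fmax)          # smallest lag to consider
    tau_max = len(x) // 2             # largest lag that fits the window
    # Squared difference function d(tau) between the frame and its shifted copy.
    d = np.array([np.sum((x[:tau_max] - x[tau:tau + tau_max]) ** 2)
                  for tau in range(tau_max)])
    # Cumulative mean normalized difference d'(tau); d'(0) is set to 1 by convention.
    dprime = np.ones_like(d)
    cumsum = np.cumsum(d[1:])
    dprime[1:] = d[1:] * np.arange(1, tau_max) / np.where(cumsum == 0, 1, cumsum)
    # Take the first lag dipping below the threshold and descend to its local
    # minimum; fall back to the global minimum if no dip crosses the threshold.
    below = np.where(dprime[tau_min:] < threshold)[0]
    if below.size:
        tau = below[0] + tau_min
        while tau + 1 < tau_max and dprime[tau + 1] < dprime[tau]:
            tau += 1
    else:
        tau = np.argmin(dprime[tau_min:]) + tau_min
    return fs / tau
```

Because there is no parabolic interpolation here, the estimate is quantized to integer lags; on a clean 220 Hz sine at 8 kHz it lands within a few hertz of the true F0.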

13 Figures and Tables

Semantic Scholar estimates that this publication has 1,337 citations based on the available data.

Cite this paper

@article{Cheveign2002YINAF,
  title={YIN, a fundamental frequency estimator for speech and music},
  author={de Cheveign{\'e}, Alain and Kawahara, Hideki},
  journal={The Journal of the Acoustical Society of America},
  year={2002},
  volume={111},
  number={4},
  pages={1917--1930}
}