Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning

@article{Gonzlez2017DirectSR,
  title={Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning},
  author={Jos{\'e} A. Gonz{\'a}lez and Lam Aun Cheah and Angel Manuel Gomez and Phil D. Green and James M. Gilbert and Stephen R. Ell and Roger K. Moore and Ed Holdsworth},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  year={2017},
  volume={25},
  pages={2362--2374}
}
This paper describes a technique that generates speech acoustics from articulator movements. Our motivation is to help people who can no longer speak following laryngectomy, a procedure that is carried out tens of thousands of times per year in the Western world. Our method for sensing articulator movement, permanent magnetic articulography, relies on small, unobtrusive magnets attached to the lips and tongue. Changes in magnetic field caused by magnet movements are sensed and form the input to…
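The abstract describes a machine-learning mapping from articulatory sensor frames (magnetic-field readings from permanent magnetic articulography) to speech acoustics. A minimal sketch of that idea, using a frame-wise least-squares regression on entirely synthetic data (the channel and feature counts are illustrative assumptions; the paper itself uses neural networks and real sensor recordings):

```python
import numpy as np

# Hypothetical setup: 9 magnetic-sensor channels (PMA), 25 acoustic
# features per frame (e.g. spectral parameters). All data is synthetic.
rng = np.random.default_rng(0)
n_frames, n_sensors, n_acoustic = 500, 9, 25

# Synthetic "ground truth": acoustics as a linear function of sensor data
# plus noise. The real articulatory-to-acoustic map is nonlinear; this is
# only an illustration of the regression framing.
W_true = rng.normal(size=(n_sensors, n_acoustic))
X = rng.normal(size=(n_frames, n_sensors))            # sensor frames
Y = X @ W_true + 0.01 * rng.normal(size=(n_frames, n_acoustic))

# Fit the simplest articulatory-to-acoustic mapping: frame-wise linear
# regression by least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict acoustic features for unseen sensor frames.
X_test = rng.normal(size=(10, n_sensors))
Y_pred = X_test @ W_hat
print(Y_pred.shape)  # (10, 25)
```

In practice the predicted acoustic features would be passed to a vocoder to synthesise audible speech, and the linear model would be replaced by the deep networks the paper evaluates.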

