Jun-Jin Huang

Conventional spoken sentence retrieval (SSR) relies on a large-vocabulary continuous-speech recognition (LVCSR) system. This investigation proposes a feature-based, speaker-dependent SSR algorithm using two-level matching. Users speak keywords as query inputs and receive similarity-ranked results from a spoken-sentence database. For instance, if a user is…
In this paper, we propose a spoken sentence retrieval system based on MPEG-7 audio low-level descriptors (LLDs). Our system retrieves spoken sentences with a two-step sentence matching method. First, we locate segments in the spoken documents that are similar to the user's query and keep the top N of these candidate segments. Second, we…
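The two-step matching idea described above can be sketched as a coarse candidate search followed by a re-ranking pass. The sketch below is a minimal illustration, not the paper's method: the segment records, the `feat` vectors (standing in for frame-averaged MPEG-7 LLD features), and the scoring by cosine similarity are all assumptions made for the example; the actual system would use a more expensive sentence-level comparison in the second step.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def two_step_retrieve(query_feat, segments, n=2):
    """Hypothetical two-step retrieval sketch.

    Step 1 (coarse): rank every candidate segment by feature
    similarity to the query and keep only the top n.
    Step 2 (fine): re-rank the survivors; a real system would apply
    a costlier sentence-level match (e.g. an alignment) here, but
    this sketch reuses the same cosine score.
    """
    coarse = sorted(segments,
                    key=lambda s: cosine(query_feat, s["feat"]),
                    reverse=True)[:n]
    return sorted(coarse,
                  key=lambda s: cosine(query_feat, s["feat"]),
                  reverse=True)

# Toy usage with made-up 2-D "feature" vectors:
segments = [
    {"id": 1, "feat": [1.0, 0.0]},
    {"id": 2, "feat": [0.0, 1.0]},
    {"id": 3, "feat": [1.0, 1.0]},
]
ranked = two_step_retrieve([1.0, 0.1], segments, n=2)
```

The design point the abstract makes is that the cheap first pass prunes the search space so the expensive second pass only touches N candidates.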