Publications
Enhanced LSTM for Natural Language Inference
TLDR
A new state-of-the-art result is presented, achieving an accuracy of 88.6% on the Stanford Natural Language Inference dataset, and it is demonstrated that carefully designed sequential inference models based on chain LSTMs can outperform all previous models.
Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference
TLDR
This paper presents a new state-of-the-art result, achieving an accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset, with an enhanced sequential encoding model that outperforms the previous best model, which employs more complicated network architectures.
Neural Natural Language Inference Models Enhanced with External Knowledge
TLDR
This paper enriches state-of-the-art neural natural language inference models with external knowledge and demonstrates that the proposed models achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference
TLDR
This paper describes a model (alpha) that is ranked among the top in the Shared Task, on both the in-domain and the cross-domain test sets, demonstrating that the model generalizes well to cross-domain data.
Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
TLDR
A new model, Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker-change information, an important and intrinsic property of multi-turn dialogues; in addition, a speaker-aware disentanglement strategy is proposed to handle entangled dialogues.
WaveNet Vocoder with Limited Training Data for Voice Conversion
TLDR
Experimental results show that the WaveNet vocoders built with the proposed method outperform the conventional STRAIGHT vocoder, and that the system achieves an average naturalness MOS of 4.13 in VCC 2018, the highest among all submitted systems.
The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods
TLDR
A brief summary of the state-of-the-art techniques for VC is presented, followed by a detailed explanation of the challenge tasks and the results that were obtained.
Learning Latent Representations for Style Control and Transfer in End-to-end Speech Synthesis
TLDR
The Variational Autoencoder (VAE) is introduced into an end-to-end speech synthesis model to learn latent representations of speaking styles in an unsupervised manner; the learned representations show good properties such as disentangling, scaling, and combination.
Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints
TLDR
Under this framework, semantic knowledge is represented as a set of ordinal ranking inequalities, and the learning of semantic word embeddings (SWE) is formulated as a constrained optimization problem in which the data-derived objective function is optimized subject to all ordinal knowledge inequality constraints extracted from available knowledge resources.
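As a rough illustration of that formulation (the notation here is an assumption, not taken from the paper), the training problem can be written as a corpus-driven objective constrained by knowledge-derived ranking inequalities:

```latex
\min_{W} \; J_{\text{data}}(W)
\qquad \text{s.t.} \qquad
\operatorname{sim}(w_i, w_j) > \operatorname{sim}(w_i, w_k)
\quad \text{for every ordinal constraint } (i, j, k)
```

Here W is the embedding matrix, J_data is the usual data-derived objective, and each inequality encodes a knowledge statement such as "word i is more similar to j than to k"; in practice, hard constraints of this kind are often relaxed into hinge penalties of the form max(0, sim(w_i, w_k) - sim(w_i, w_j) + delta) added to the loss.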
Distraction-based neural networks for modeling documents
TLDR
This paper proposes neural models that train computers not only to pay attention to specific regions and content of input documents with attention models, but also to be distracted so that they traverse between different parts of a document and better grasp its overall meaning for summarization.
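A minimal sketch of one way such a distraction mechanism can work (an illustrative assumption, not necessarily the paper's exact model): at each decoding step the attention weights are penalized by the attention mass accumulated at previous steps, pushing the model toward content it has not yet covered.

```python
import numpy as np

def distraction_attention(scores_t, history):
    """One attention step with a distraction penalty (illustrative sketch).

    scores_t: raw attention scores over source positions at step t
    history:  sum of attention weights assigned at previous steps
    """
    adjusted = scores_t - history            # down-weight already-attended content
    exp = np.exp(adjusted - adjusted.max())  # numerically stable softmax
    weights = exp / exp.sum()
    return weights, history + weights        # updated coverage history

# Usage: carry `history` (initialized to zeros) across decoding steps so the
# summarizer traverses different parts of the document.
```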