Enhanced LSTM for Natural Language Inference
- Qian Chen, Xiao-Dan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, D. Inkpen
- Computer Science, Annual Meeting of the Association for…
- 20 September 2016
This paper presents a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford Natural Language Inference dataset, and demonstrates that carefully designed sequential inference models based on chain LSTMs can outperform all previous models.
Neural Natural Language Inference Models Enhanced with External Knowledge
- Qian Chen, Xiao-Dan Zhu, Zhenhua Ling, D. Inkpen, Si Wei
- Computer Science, Annual Meeting of the Association for…
- 12 November 2017
This paper enriches state-of-the-art neural natural language inference models with external knowledge and demonstrates that the proposed models achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference
- Qian Chen, Xiao-Dan Zhu, Zhenhua Ling, Si Wei, Hui Jiang
- Computer Science, arXiv
- 20 September 2016
This paper presents a new state-of-the-art result, achieving an accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset, through an enhanced sequential encoding model that outperforms the previous best model, which employed more complicated network architectures.
Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
- Jia-Chen Gu, Tianda Li, Si Wei
- Computer Science, International Conference on Information and…
- 7 April 2020
A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker-change information, an important and intrinsic property of multi-turn dialogues, and a speaker-aware disentanglement strategy is proposed to handle entangled dialogues.
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference
- Qian Chen, Xiao-Dan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, D. Inkpen
- Computer Science, RepEval@EMNLP
- 4 August 2017
This paper describes a model (alpha) that ranked among the top entries in the shared task, on both the in-domain and cross-domain test sets, demonstrating that the model generalizes well to cross-domain data.
Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints
- Quan Liu, Hui Jiang, Si Wei, Zhenhua Ling, Yu Hu
- Computer Science, Annual Meeting of the Association for…
- 1 July 2015
Under this framework, semantic knowledge is represented as many ordinal ranking inequalities and the learning of semantic word embeddings (SWE) is formulated as a constrained optimization problem, where the data-derived objective function is optimized subject to all ordinal knowledge inequality constraints extracted from available knowledge resources.
Learning Latent Representations for Style Control and Transfer in End-to-end Speech Synthesis
- Ya-Jie Zhang, Shifeng Pan, Lei He, Zhenhua Ling
- Computer Science, IEEE International Conference on Acoustics…
- 11 December 2018
The Variational Autoencoder (VAE) is introduced into an end-to-end speech synthesis model to learn latent representations of speaking styles in an unsupervised manner; the learned representations show good properties such as disentangling, scaling, and combination.
ASVspoof 2019: A large-scale public database of synthesized, converted and replayed speech
- Xin Wang, J. Yamagishi, Zhenhua Ling
- Computer Science, Computer Speech and Language
- 5 November 2019
WaveNet Vocoder with Limited Training Data for Voice Conversion
- Li-Juan Liu, Zhenhua Ling, Yuan Jiang, M. Zhou, Lirong Dai
- Computer Science, Interspeech
- 2 September 2018
Experimental results show that WaveNet vocoders built using the proposed method outperform the conventional STRAIGHT vocoder, and the system achieves an average naturalness MOS of 4.13 in VCC 2018, the highest among all submitted systems.
Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions
- Zhiquan Ye, Zhenhua Ling
- Computer Science, North American Chapter of the Association for…
- 30 March 2019
This paper presents a neural relation extraction method that deals with the noisy training data generated by distant supervision, achieving better relation extraction accuracy than state-of-the-art methods on this dataset.
...