Learning Entity and Relation Embeddings for Knowledge Graph Completion
TLDR
TransR is proposed to build entity and relation embeddings in separate entity and relation spaces, modeling relations as translations between projected entities; the models are evaluated on three tasks: link prediction, triple classification, and relational fact extraction.
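The TransR scoring idea described above can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the function name, the toy dimensions, and the use of the L2 norm (the paper also supports L1) are assumptions.

```python
import numpy as np

def transr_score(h, t, r, M_r):
    """TransR-style plausibility score for a triple (h, r, t).

    h, t : entity embeddings in entity space, shape (k,)
    r    : relation embedding in relation space, shape (d,)
    M_r  : relation-specific projection matrix, shape (k, d)

    Lower scores indicate more plausible triples.
    """
    h_r = h @ M_r  # project head entity into the relation space
    t_r = t @ M_r  # project tail entity into the relation space
    # relation as a translation between projected entities: ||h_r + r - t_r||
    return np.linalg.norm(h_r + r - t_r)

# toy usage with illustrative dimensions k=4, d=3
rng = np.random.default_rng(0)
h, t = rng.normal(size=4), rng.normal(size=4)
r = rng.normal(size=3)
M_r = rng.normal(size=(4, 3))
score = transr_score(h, t, r, M_r)  # lower = more plausible
```

The key design point is that each relation carries its own projection matrix, so two entities can be close in one relation's space and far apart in another's.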
Network Representation Learning with Rich Text Information
TLDR
By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), this work proposes text-associated DeepWalk (TADW), which incorporates text features of vertices into network representation learning under the framework of matrix factorization.
Neural Relation Extraction with Selective Attention over Instances
TLDR
A sentence-level attention-based model for relation extraction is proposed that employs convolutional neural networks to embed the semantics of sentences and dynamically reduces the weights of noisy instances.
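The selective-attention step can be sketched roughly as below. This is an illustrative simplification: the dot-product scoring used here is an assumption (the paper uses a learned query-based form), and the sentence embeddings are assumed to come from a CNN encoder not shown.

```python
import numpy as np

def selective_attention(sentence_reprs, relation_query):
    """Bag-level representation via selective attention over instances.

    sentence_reprs : (n_sentences, d) sentence embeddings (e.g. CNN outputs)
    relation_query : (d,) query vector for the target relation

    Sentences that match the relation poorly get low attention weight,
    so noisy instances contribute little to the bag representation.
    """
    scores = sentence_reprs @ relation_query       # relevance of each sentence
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over instances
    return weights @ sentence_reprs                # attention-weighted average
```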
End-to-End Neural Ad-hoc Ranking with Kernel Pooling
TLDR
K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features, and a learning-to-rank layer that combines those features into the final ranking score.
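The kernel-pooling step can be sketched as follows. This is a hedged sketch, not the authors' code: the kernel centers, the shared `sigma`, and the small epsilon inside the log are illustrative assumptions.

```python
import numpy as np

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """K-NRM-style soft-TF features from a query-document translation matrix.

    sim_matrix : (n_query_terms, n_doc_terms) cosine similarities
                 between query and document word embeddings
    mus        : RBF kernel centers, e.g. np.linspace(-1, 1, 11)

    Returns one pooled soft-match feature per kernel; a downstream
    learning-to-rank layer would combine these into a ranking score.
    """
    features = []
    for mu in mus:
        # RBF kernel: soft count of document terms whose similarity is near mu
        k = np.exp(-(sim_matrix - mu) ** 2 / (2 * sigma ** 2))
        soft_tf = k.sum(axis=1)                          # pool over doc terms
        features.append(np.log(soft_tf + 1e-10).sum())   # sum log over query terms
    return np.array(features)
```

Each kernel captures matches at a different similarity level, so the feature vector encodes a multi-level soft-match profile rather than a single exact-match count.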
Graph Neural Networks: A Review of Methods and Applications
TLDR
A detailed review of existing graph neural network models is provided, their applications are systematically categorized, and four open problems for future research are proposed.
A C-LSTM Neural Network for Text Classification
TLDR
C-LSTM is a novel and unified model for sentence representation and text classification that outperforms both CNN and LSTM and can achieve excellent performance on these tasks.
Representation Learning of Knowledge Graphs with Entity Descriptions
TLDR
Experimental results on real-world datasets show that the proposed representation learning (RL) method for knowledge graphs outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that the method can build representations for novel entities from their descriptions.
A Unified Model for Word Sense Representation and Disambiguation
TLDR
A unified model for joint word sense representation and disambiguation is proposed that assigns distinct representations to each word sense; it improves contextual word similarity over existing WSR methods, outperforms state-of-the-art supervised methods on domain-specific WSD, and achieves competitive performance on coarse-grained all-words WSD.
Joint Learning of Character and Word Embeddings
TLDR
A character-enhanced word embedding model (CWE) is presented to address the issues of character ambiguity and non-compositional words, and the effectiveness of CWE on word relatedness computation and analogical reasoning is evaluated.
FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation
TLDR
Empirical results show that even the most competitive few-shot learning models struggle on this task, especially compared with humans, indicating that few-shot relation classification remains an open problem and still requires further research.