Relation Classification via Convolutional Deep Neural Network
This paper exploits a convolutional deep neural network (DNN) to extract lexical and sentence-level features, avoiding reliance on the error-prone output of pre-existing natural language processing systems, and significantly outperforms the state-of-the-art methods.
Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks
This paper proposes a novel model, Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning, to address the wrong-label problem that arises when using distant supervision for relation extraction; it adopts a convolutional architecture with piecewise max pooling to automatically learn relevant features.
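The piecewise max pooling the summary mentions can be sketched as follows (a minimal illustration, not the paper's code; function name, shapes, and the toy input are assumptions): the convolution output for a sentence is split into three segments by the two entity positions, and each segment is max-pooled separately, so positional structure survives pooling.

```python
import numpy as np

def piecewise_max_pool(conv_out, e1_pos, e2_pos):
    """Piecewise max pooling over a convolutional feature map.

    conv_out: (seq_len, n_filters) array of convolution outputs.
    e1_pos, e2_pos: token positions of the two entities (e1_pos < e2_pos,
        both strictly before the last position, so no segment is empty).
    Returns a (3 * n_filters,) vector: one max per filter per segment.
    """
    segments = [
        conv_out[: e1_pos + 1],              # up to and including entity 1
        conv_out[e1_pos + 1 : e2_pos + 1],   # between the two entities
        conv_out[e2_pos + 1 :],              # after entity 2
    ]
    pooled = [seg.max(axis=0) for seg in segments]
    return np.concatenate(pooled)

# Toy example: 6 positions, 2 filters, entities at positions 1 and 3.
conv = np.array([[0.1, 0.9],
                 [0.5, 0.2],
                 [0.8, 0.1],
                 [0.3, 0.7],
                 [0.4, 0.6],
                 [0.2, 0.3]])
vec = piecewise_max_pool(conv, 1, 3)  # shape (6,): 3 segments x 2 filters
```

Ordinary max pooling would collapse the whole sequence to one value per filter; pooling per segment keeps coarse information about where features occur relative to the entities.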
Recurrent Convolutional Neural Networks for Text Classification
A recurrent convolutional neural network is introduced for text classification without human-designed features; it captures as much contextual information as possible when learning word representations and may introduce considerably less noise than traditional window-based neural networks.
Knowledge Graph Embedding via Dynamic Mapping Matrix
A more fine-grained model named TransD, an improvement on TransR/CTransR, considers the diversity of both relations and entities, which allows it to be applied to large-scale graphs.
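TransD's dynamic mapping can be sketched in a few lines (an illustrative simplification under the paper's published formulation, where each entity and relation gets a projection vector and the mapping matrix is M = r_p * e_p^T + I; names and dimensions here are assumptions):

```python
import numpy as np

def transd_project(e, e_p, r_p):
    """Project an entity embedding into relation space via a dynamic
    mapping matrix M = outer(r_p, e_p) + I (TransD-style).

    e, e_p: entity embedding and its projection vector, shape (n,).
    r_p:    relation projection vector, shape (m,).
    Returns the projected entity, shape (m,).
    """
    m, n = r_p.shape[0], e.shape[0]
    M = np.outer(r_p, e_p) + np.eye(m, n)  # depends on both entity and relation
    return M @ e

def transd_score(h, h_p, t, t_p, r, r_p):
    """Dissimilarity ||h_perp + r - t_perp||_2; lower means more plausible."""
    h_perp = transd_project(h, h_p, r_p)
    t_perp = transd_project(t, t_p, r_p)
    return np.linalg.norm(h_perp + r - t_perp)
```

Because M is built from two vectors rather than stored as a full per-relation matrix (as in TransR), the parameter count grows linearly in the embedding dimension, which is what makes large-scale graphs tractable.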
Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks
This paper introduces a word-representation model that captures meaningful semantic regularities for words, and a framework based on a dynamic multi-pooling convolutional neural network that captures sentence-level clues.
Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism
This paper proposes an end-to-end model based on sequence-to-sequence learning with a copy mechanism, which can jointly extract relational facts from sentences of any of three classes: Normal, EntityPairOverlap, and SingleEntityOverlap.
Collective entity linking in web text: a graph-based method
Experimental results show that the proposed graph-based collective EL method achieves significant performance improvement over traditional EL methods, owing to the purely collective nature of the inference algorithm, in which evidence for related EL decisions is reinforced into high-probability decisions.
Learning to Represent Knowledge Graphs with Gaussian Embedding
The experimental results demonstrate that the KG2E method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).
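The uncertainty modeling KG2E performs can be sketched with its KL-divergence energy, assuming diagonal covariances (a simplification; the paper also offers an expected-likelihood variant, and all names here are illustrative):

```python
import numpy as np

def kg2e_kl_score(mu_h, var_h, mu_t, var_t, mu_r, var_r):
    """KL-divergence energy for KG2E with diagonal covariances.

    Each entity/relation is a Gaussian N(mu, diag(var)); variances encode
    (un)certainty. A triple (h, r, t) is scored by KL(P_e || P_r), where
    P_e = N(mu_h - mu_t, diag(var_h + var_t)) models h - t.
    Lower scores mean the triple is more plausible.
    """
    mu_e = mu_h - mu_t
    var_e = var_h + var_t
    # Closed-form KL between diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(
        var_e / var_r
        + (mu_r - mu_e) ** 2 / var_r
        - 1.0
        + np.log(var_r / var_e)
    )
```

Point-vector models such as TransH and TransR have no variance term, which is why they cannot distinguish a confidently placed entity from an uncertain one.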
Inner Attention based Recurrent Neural Networks for Answer Selection
This work presents three new RNN models that add attention information before the RNN hidden representation, which show advantages in sentence representation and achieve new state-of-the-art results on the answer selection task.
Knowledge Graph Completion with Adaptive Sparse Transfer Matrix
Experimental results show that TranSparse significantly outperforms TransE, TransH, TransR, and TransD, and achieves state-of-the-art performance on the triplet classification and link prediction tasks.