Corpus ID: 221186762

Constructing a Knowledge Graph from Unstructured Documents without External Alignment

Seunghak Yu, Tianxing He, James R. Glass
Knowledge graphs (KGs) are relevant to many NLP tasks, but building a reliable domain-specific KG is time-consuming and expensive. A number of methods for constructing KGs with minimal human intervention have been proposed, but they still require a process to align the extracted knowledge with a human-annotated knowledge base. To overcome this issue, we propose a novel method to automatically construct a KG from unstructured documents that does not require external alignment, and explore its use to extract desired…


Complementing Language Embeddings with Knowledge Bases for Specific Domains

A combined approach where the embedding is seen as a model of a logical knowledge base improves its satisfaction of the knowledge base, and in turn produces better training examples by labelling previously unseen text.


This disaster knowledge graph can well support applications such as natural disaster visualization and analysis, data recommendation systems, and intelligent Q&A systems. It can further improve the intelligence of natural disaster knowledge services and is expected to promote the sharing and reuse of domain knowledge graphs to a certain extent.

The BLue Amazon Brain (BLAB): A Modular Architecture of Services about the Brazilian Maritime Territory

The current version of BLAB’s architecture is described, along with the challenges faced so far, such as the lack of training data and the scattered state of domain information, which present a considerable challenge in the development of artificial intelligence for technical domains.

Automatic Knowledge Graph Construction: A Report on the 2019 ICDM/ICBK Contest

The participants were expected to build a model to extract knowledge represented as triplets from text data and to develop a web application to visualize the triplets; awards were given to five teams.

Key-Value Memory Networks for Directly Reading Documents

This work introduces a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation.

PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text

PullNet is described, an integrated framework for learning what to retrieve and reasoning with this heterogeneous information to find the best answer in an open-domain question answering setting.

Knowledge vault: a web-scale approach to probabilistic knowledge fusion

The Knowledge Vault is a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories, and computes calibrated probabilities of fact correctness.

Open Knowledge Enrichment for Long-tail Entities

This paper proposes a full-fledged approach to knowledge enrichment, which predicts missing properties and infers true facts of long-tail entities from the open Web; experiments demonstrate the feasibility and superiority of the approach.

Variational Reasoning for Question Answering with Knowledge Graph

This work proposes a novel and unified deep learning architecture and an end-to-end variational learning algorithm, which can handle noise in questions and learn multi-hop reasoning simultaneously.

ERNIE: Enhanced Language Representation with Informative Entities

This paper utilizes both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE) which can take full advantage of lexical, syntactic, and knowledge information simultaneously, and which is comparable with the state-of-the-art model BERT on other common NLP tasks.

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.

Multi-Hop Paragraph Retrieval for Open-Domain Question Answering

A method for retrieving multiple supporting paragraphs, nested amidst a large knowledge base, that contain the necessary evidence to answer a given question, by forming a joint vector representation of both the question and a paragraph.

Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering

A new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions and achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.