Encyclopedic Knowledge Patterns from Wikipedia Links

@inproceedings{Nuzzolese2011EncyclopedicKP,
  title={Encyclopedic Knowledge Patterns from Wikipedia Links},
  author={Andrea Giovanni Nuzzolese and Aldo Gangemi and Valentina Presutti and Paolo Ciancarini},
  booktitle={International Workshop on the Semantic Web},
  year={2011}
}
What is the most intuitive way of organizing concepts for describing things? What are the most relevant types of things that people use for describing other things? Wikipedia and Linked Data offer knowledge engineering researchers a chance to empirically identify invariances in the conceptual organization of knowledge, i.e., knowledge patterns. In this paper, we present a resource of Encyclopedic Knowledge Patterns that have been discovered by analyzing the Wikipedia page links dataset, describe…
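The core idea behind Encyclopedic Knowledge Patterns can be sketched as follows: for pages of a given type, count the types of the pages they link to, and keep the object types whose link frequency exceeds a threshold. The snippet below is a minimal toy illustration of that counting step, with hypothetical entity data; `ekp`, the type labels, and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

# Hypothetical toy data: each entity has a type and a set of outgoing wikilinks.
entity_types = {
    "Rome": "City", "Italy": "Country", "Tiber": "River",
    "Paris": "City", "France": "Country", "Seine": "River",
    "Julius_Caesar": "Person",
}
wikilinks = {
    "Rome": ["Italy", "Tiber", "Julius_Caesar"],
    "Paris": ["France", "Seine"],
}

def ekp(subject_type, threshold=0.3):
    """Rank the types of pages linked from pages of `subject_type`;
    the object types above the frequency threshold form the pattern."""
    counts = Counter()
    total = 0
    for page, links in wikilinks.items():
        if entity_types.get(page) != subject_type:
            continue
        for target in links:
            t = entity_types.get(target)
            if t:
                counts[t] += 1
                total += 1
    return {t: n / total for t, n in counts.items() if n / total >= threshold}
```

On this toy data, `ekp("City")` keeps Country and River (each 2 of 5 links) and drops Person, mirroring how low-frequency object types fall outside a pattern.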

Uncovering the Semantics of Wikipedia Pagelinks

This paper designs and implements a novel method to uncover the intended semantics of pagelinks and to represent them as semantic relations in DBpedia, and tests the method on a subset of Wikipedia, showing its potential impact for DBpedia enrichment.

Aemoo: Exploratory Search based on Knowledge Patterns over the Semantic Web

Aemoo provides users with an effective summary of knowledge about an entity, including explanations that clarify its relevance, and presents it through a user-friendly interface that supports exploration of further knowledge.

Rule Mining for Semantifying Wikilinks

This paper applies rule mining techniques to already semantified wikilinks in order to propose relations for the unsemantified wikilinks in a subset of DBpedia, mining highly supported and confident logical rules from knowledge bases that can semantify wikilinks with very high precision.
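The support/confidence machinery behind this kind of rule mining can be shown on a toy triple store: for a rule "body predicate holds between x and y ⇒ head predicate holds", support is the number of (x, y) pairs satisfying both, and confidence is support divided by the number of pairs satisfying the body. The KB below and the predicate names are invented for illustration, not taken from the paper.

```python
# Toy knowledge base of (subject, predicate, object) triples.
kb = {
    ("Dante", "linksTo", "Florence"),
    ("Dante", "birthPlace", "Florence"),
    ("Dante", "linksTo", "Inferno"),
    ("Goethe", "linksTo", "Frankfurt"),
    ("Goethe", "birthPlace", "Frankfurt"),
}

def support_confidence(body_pred, head_pred):
    """Support and confidence of the rule body_pred(x, y) => head_pred(x, y)."""
    body = {(s, o) for s, p, o in kb if p == body_pred}
    head = {(s, o) for s, p, o in kb if p == head_pred}
    support = len(body & head)
    confidence = support / len(body) if body else 0.0
    return support, confidence
```

Here `support_confidence("linksTo", "birthPlace")` yields support 2 with confidence 2/3: two of the three wikilink pairs are also birthplace pairs, so a miner would accept or reject the rule depending on its confidence threshold.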

Aemoo: Linked Data exploration based on Knowledge Patterns

A tool named Aemoo is implemented that supports EKP-driven knowledge exploration and integrates data coming from heterogeneous resources, namely static and dynamic knowledge as well as text and Linked Data.

Knowledge Pattern Extraction and Their Usage in Exploratory Search

This work aims to extract KPs by analyzing the structure of Web links from rich resources such as Wikipedia, developing methods for extracting KPs from the Web and applying KPs to exploratory search tasks.

Automatic Typing of DBpedia Entities

We present Tipalo, an algorithm and tool for automatically typing DBpedia entities. Tipalo identifies the most appropriate types for an entity by interpreting its natural language definition, which…

Type inference through the analysis of Wikipedia links

Two techniques that exploit wikilinks are presented, one based on induction from machine learning techniques and the other on abduction, suggesting some new possible directions for entity classification.

Aemoo: exploring knowledge on the web

Aemoo is a Semantic Web application supporting knowledge exploration on the Web. Through a keyword-based search interface, users can gather an effective summary of the knowledge about an entity…

Knowledge acquisition and the web

G. Schreiber. Int. J. Hum. Comput. Stud., 2013.

Statistical Knowledge Patterns: Identifying Synonymous Relations in Large Linked Datasets

Statistical Knowledge Patterns (SKP) encapsulate key information about ontology classes, including synonymous properties in (and across) datasets, and are automatically generated based on statistical data analysis and can be effectively used to automatically normalise data, and hence increase recall in querying.
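One statistical signal for synonymous properties is the overlap of their (subject, object) pairs: two properties that hold between largely the same pairs of entities are likely synonyms. The sketch below computes that overlap on a toy triple list; the property names and the 0.5 cutoff are illustrative assumptions, not the SKP paper's actual procedure.

```python
from itertools import combinations

# Toy triples: dbo:author and dbp:writer connect the same entity pairs.
triples = [
    ("A", "dbo:author", "X"), ("B", "dbo:author", "Y"), ("C", "dbo:author", "Z"),
    ("A", "dbp:writer", "X"), ("B", "dbp:writer", "Y"),
    ("A", "dbo:genre", "G"),
]

def synonym_candidates(min_overlap=0.5):
    """Propose property pairs whose (subject, object) sets overlap heavily."""
    pairs_by_prop = {}
    for s, p, o in triples:
        pairs_by_prop.setdefault(p, set()).add((s, o))
    out = []
    for p, q in combinations(sorted(pairs_by_prop), 2):
        a, b = pairs_by_prop[p], pairs_by_prop[q]
        overlap = len(a & b) / min(len(a), len(b))
        if overlap >= min_overlap:
            out.append((p, q, overlap))
    return out
```

On this data only the `dbo:author`/`dbp:writer` pair survives the cutoff; normalising queries over such candidate pairs is what increases recall.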

WikiNet: A Very Large Scale Multi-Lingual Concept Network

A multi-lingual concept network obtained automatically by mining for concepts and relations and exploiting a variety of sources of knowledge from Wikipedia is described.

Mining Meaning from Wikipedia

Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary

This paper presents two application programming interfaces for Wikipedia and Wiktionary which are especially designed for mining the rich lexical semantic information dispersed in the knowledge bases, and provide efficient and structured access to the available knowledge.

Large-Scale Taxonomy Mapping for Restructuring and Integrating Wikipedia

We present a knowledge-rich methodology for disambiguating Wikipedia categories with WordNet synsets and using this semantic information to restructure a taxonomy automatically generated from the…

Towards a pattern science for the Semantic Web

The knowledge soup problem is about semantic heterogeneity, and can be considered a difficult technical issue, which needs appropriate transformation and inferential pipelines that can help make sense of the different knowledge contexts.

Acquiring Thesauri from Wikis by Exploiting Domain Models and Lexical Substitution

An innovative method for inducing thesauri from Wikipedia is presented that leverages the Wikipedia structure to extract concepts and the terms denoting them, obtaining a thesaurus that can be profitably used in applications.

Wanderlust : Extracting Semantic Relations from Natural Language Text Using Dependency Grammar Patterns

Wanderlust is presented, an algorithm that automatically extracts semantic relations from natural language text that performs in an unsupervised fashion and is not restricted to any specific type of semantic relation.

Gathering lexical linked data and knowledge patterns from FrameNet

This paper presents the experience of converting the XML version of FrameNet 1.5 into RDF datasets published on the Linked Open Data cloud, which are interoperable with WordNet and other resources.

Relation Extraction from Wikipedia Using Subtree Mining

This study addresses the problem of extracting relations among entities from Wikipedia's English articles, which in turn can serve for intelligent systems to satisfy users' information needs.

DBpedia - A crystallization point for the Web of Data