Semantic Wikipedia

@inproceedings{Vlkel2006SemanticW,
  title={Semantic Wikipedia},
  author={Max V{\"o}lkel and Markus Kr{\"o}tzsch and Denny Vrande{\v{c}}i{\'c} and Heiko Haller and Rudi Studer},
  booktitle={WWW '06},
  year={2006}
}
Wikipedia is the world's largest collaboratively edited source of encyclopaedic knowledge. But in spite of its utility, its contents are barely machine-interpretable. Structural knowledge, e.g. about how concepts are interrelated, can neither be formally stated nor automatically processed. Also, the wealth of numerical data is only available as plain text and thus cannot be processed according to its actual meaning. We provide an extension to be integrated in Wikipedia that allows the typing of links…
TLDR
This work provides an extension to be integrated in Wikipedia that allows even casual users the typing of links between articles and the specification of typed data inside the articles, and gives direct access to the formalised knowledge.
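The abstract and TLDR describe two kinds of annotation: typed links between articles and typed data values inside article text. As a rough illustration only, the following Python sketch pulls both out of a snippet of wiki markup; the "relation::target" and "attribute:=value" notation follows the Semantic MediaWiki style associated with this paper, but the exact markup, the sample page text and the helper names are assumptions made for this example, not the paper's reference implementation.

import re

# Illustrative wiki markup with Semantic MediaWiki-style annotations
# (assumed syntax for this sketch): typed links use "::",
# typed data values use ":=".
WIKI_TEXT = """
London is the capital city of [[capital of::England]] and of the
[[United Kingdom]]. Its population is [[population:=8,800,000]].
"""

RELATION_RE = re.compile(r"\[\[([^:\[\]]+)::([^\[\]]+)\]\]")    # [[relation::target]]
ATTRIBUTE_RE = re.compile(r"\[\[([^:\[\]]+):=([^\[\]]+)\]\]")   # [[attribute:=value]]

def extract_annotations(text):
    """Return (typed_links, typed_data) found in a page's wiki source."""
    typed_links = [(r.strip(), t.strip()) for r, t in RELATION_RE.findall(text)]
    typed_data = [(a.strip(), v.strip()) for a, v in ATTRIBUTE_RE.findall(text)]
    return typed_links, typed_data

links, data = extract_annotations(WIKI_TEXT)
print("typed links:", links)   # [('capital of', 'England')]
print("typed data: ", data)    # [('population', '8,800,000')]
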
Making More Wikipedians: Facilitating Semantics Reuse for Wikipedia Authoring
TLDR
An integrated solution based on RDF graph matching is proposed to make Wikipedia authoring easier, with the aim of attracting more Wikipedians and enhancing the current Wikipedia to make it an even better Semantic Web data source.
Semantics in Wiki
  • L. Uden
  • Computer Science
  • Community-Built Databases
  • 2011
TLDR
This chapter describes the use of semantic technologies in wikis, which provide intelligent access to heterogeneous, distributed information, enabling software products (agents) to mediate between user needs and the available information sources.
Wikipedia Link Structure and Text Mining for Semantic Relation Extraction
TLDR
A consistent approach to semantic relation extraction from Wikipedia, consisting of three sub-processes highly optimized for Wikipedia mining: 1) fast preprocessing, 2) POS (part-of-speech) tag tree analysis, and 3) mainstay extraction.
Mining Meaning from Wikipedia
TLDR
This article focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing, using it to facilitate information retrieval and information extraction, and treating it as a resource for ontology building.
Extracting Common Sense Knowledge from Wikipedia
TLDR
The hypothesis is that common sense knowledge is often expressed in the form of generic statements such as "Coffee is a popular beverage", and thus this work has focussed on the challenge of automatically identifying generic statements.
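The challenge named here, automatically spotting generic statements such as "Coffee is a popular beverage", can be approximated with simple syntactic cues. The sketch below is only a rough heuristic built on spaCy (a bare, determiner-less common-noun subject plus a simple-present root verb); it is one plausible reading of the task, not the method actually used in the paper.

import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def looks_generic(sentence):
    """Crude heuristic for generic statements: a bare (determiner-less)
    common-noun subject combined with a simple-present root verb."""
    doc = nlp(sentence)
    sent = next(doc.sents)
    root = sent.root
    if root.tag_ not in ("VBZ", "VBP"):          # simple present tense
        return False
    subjects = [t for t in root.children if t.dep_ in ("nsubj", "nsubjpass")]
    for subj in subjects:
        bare = not any(c.dep_ == "det" for c in subj.children)
        if bare and subj.tag_ in ("NN", "NNS"):  # mass noun or bare plural
            return True
    return False

print(looks_generic("Coffee is a popular beverage."))    # True
print(looks_generic("The coffee on my desk is cold."))   # False
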
What Have Innsbruck and Leipzig in Common? Extracting Semantics from Wiki Content
TLDR
This article presents a method for revealing structured content by extracting information from template instances, and suggests ways to efficiently query the vast amount of extracted information, leading to astonishing query answering possibilities.
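The template-instance idea is essentially infobox harvesting: "field = value" lines inside a template call already carry structured facts about the page. The Python sketch below is a deliberately simplified illustration; the infobox name, the field names and the flat "field = value" layout are assumptions made for this example, and real MediaWiki templates nest and would need a proper parser.

import re

# Simplified infobox markup; real MediaWiki templates can nest arbitrarily,
# so a production extractor needs a real parser rather than a regex.
PAGE = """{{Infobox city
| name        = Innsbruck
| country     = Austria
| population  = 132493
| elevation_m = 574
}}"""

FIELD_RE = re.compile(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$", re.MULTILINE)

def extract_infobox_facts(page_title, wikitext):
    """Turn 'field = value' lines of a template instance into
    (subject, predicate, object) triples."""
    return [(page_title, field, value) for field, value in FIELD_RE.findall(wikitext)]

for triple in extract_infobox_facts("Innsbruck", PAGE):
    print(triple)
# ('Innsbruck', 'name', 'Innsbruck')
# ('Innsbruck', 'country', 'Austria')
# ('Innsbruck', 'population', '132493')
# ('Innsbruck', 'elevation_m', '574')
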
Analyzing and accessing Wikipedia as a lexical semantic resource
TLDR
This work introduces a general-purpose, high-performance Java-based Wikipedia API that overcomes limitations and is available for research purposes at http://www.ukp.tu-darmstadt.
Building a biomedical semantic network in Wikipedia with Semantic Wiki Links
TLDR
A straightforward method is introduced that allows Wikipedia editors to embed computable semantic relations directly in the context of current Wikipedia articles, and two novel applications enabled by the presence of these new relationships are demonstrated.

References

Showing 1-10 of 57 references
Wikipedia and the Semantic Web - The Missing Links?
Wikipedia is the biggest collaboratively created source of encyclopaedic knowledge. Growing beyond the borders of any traditional encyclopaedia, it is facing new problems of knowledge management: The…
Building a Semantic Wiki
  • A. Souzis
  • Computer Science
  • IEEE Intell. Syst.
  • 2005
TLDR
Rhizome is an experimental, open-source content management framework the author has created that can capture and represent informal, human-authored content in a semantically rich manner.
Semantic Wikipedia
TLDR
An extension that enables wiki-users to semantically annotate wiki pages, based on which the wiki contents can be browsed, searched, and reused in novel ways.
What Have Innsbruck and Leipzig in Common? Extracting Semantics from Wiki Content
TLDR
This article presents a method for revealing structured content by extracting information from template instances, and suggests ways to efficiently query the vast amount of extracted information, leading to astonishing query answering possibilities.
Computing Semantic Relatedness Using Wikipedia-based Explicit Semantic Analysis
TLDR
This work proposes Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia, which results in substantial improvements in the correlation of computed relatedness scores with human judgments.
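ESA represents a text as a weighted vector over Wikipedia concepts, with one dimension per article, and scores relatedness as the cosine between such vectors. The sketch below mimics that pipeline with scikit-learn TF-IDF over three stand-in "articles"; a real implementation indexes a full Wikipedia dump and prunes low-weight concepts, so treat the toy corpus here purely as an assumption for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Three stand-in "Wikipedia articles"; each one is a concept dimension.
concept_articles = {
    "Coffee":   "coffee is a brewed beverage prepared from roasted coffee beans",
    "Tea":      "tea is an aromatic beverage made by pouring hot water over tea leaves",
    "Computer": "a computer is a machine that executes programs and processes data",
}

vectorizer = TfidfVectorizer()
concept_matrix = vectorizer.fit_transform(concept_articles.values())

def esa_vector(text):
    """Map a text into concept space: its similarity to every concept article."""
    return cosine_similarity(vectorizer.transform([text]), concept_matrix)

def relatedness(a, b):
    """Semantic relatedness of two texts = cosine of their concept vectors."""
    return float(cosine_similarity(esa_vector(a), esa_vector(b))[0, 0])

print(relatedness("a strong brewed beverage", "tea leaves in hot water"))
# > 0: both texts overlap on the Tea concept dimension
print(relatedness("a strong brewed beverage", "programs that process data"))
# 0.0: no shared concept dimensions in this toy index
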
OWL Web Ontology Language Guide
TLDR
This document demonstrates the use of the OWL language to formalize a domain by defining classes and properties of those classes, defining individuals and asserting properties about them, and reasoning about these classes and individuals to the degree permitted by the formal semantics of the OWL language.
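In RDF terms, the moves the guide walks through (declare classes, declare properties, assert facts about individuals) look roughly like the rdflib sketch below. The wine vocabulary and the example namespace are made up for illustration, and the snippet only builds and serializes the triples; it does not run an OWL reasoner.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

# Hypothetical example vocabulary; any IRI base would do.
EX = Namespace("http://example.org/wine#")
g = Graph()
g.bind("ex", EX)

# Classes and a class hierarchy.
g.add((EX.Wine, RDF.type, OWL.Class))
g.add((EX.RedWine, RDF.type, OWL.Class))
g.add((EX.RedWine, RDFS.subClassOf, EX.Wine))

# An object property and a datatype property with domain/range.
g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
g.add((EX.hasMaker, RDFS.domain, EX.Wine))
g.add((EX.vintageYear, RDF.type, OWL.DatatypeProperty))
g.add((EX.vintageYear, RDFS.range, XSD.integer))

# An individual and assertions about it.
g.add((EX.ChiantiClassico, RDF.type, EX.RedWine))
g.add((EX.ChiantiClassico, EX.vintageYear, Literal(2001, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
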
OntoWiki: A Tool for Social, Semantic Collaboration
TLDR
OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data, and enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents.
Discovering Conceptual Relations from Text
TLDR
A new approach to discovering non-taxonomic conceptual relations from text, building on shallow text processing techniques, is described; it uses a generalized association rule algorithm that not only detects relations between concepts but also determines the appropriate level of abstraction at which to define them.
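The core of that algorithm can be pictured as counting how often pairs of concepts co-occur in the same text unit, keeping pairs with enough support and confidence, and using a concept taxonomy so that a rule can be reported at a more abstract level. The Python sketch below shows only that counting-and-thresholding skeleton with a hand-made taxonomy; the "also count the parent concept" step is a crude stand-in for the generalization machinery of the original algorithm, and all names and thresholds are assumptions.

from collections import Counter
from itertools import combinations

# Concepts found in each text unit (e.g. a sentence), plus a tiny taxonomy.
transactions = [
    {"hotel", "swimming_pool"},
    {"hotel", "restaurant"},
    {"guesthouse", "restaurant"},
    {"hotel", "swimming_pool", "restaurant"},
]
parent = {"hotel": "accommodation", "guesthouse": "accommodation"}

def generalize(items):
    """Add ancestor concepts so rules can surface at a more abstract level."""
    return items | {parent[i] for i in items if i in parent}

pair_counts, item_counts = Counter(), Counter()
for t in transactions:
    t = generalize(t)
    item_counts.update(t)
    pair_counts.update(frozenset(p) for p in combinations(sorted(t), 2))

MIN_SUPPORT, MIN_CONFIDENCE = 2, 0.6
for pair, support in pair_counts.items():
    if support >= MIN_SUPPORT:
        a, b = tuple(pair)
        for head, tail in ((a, b), (b, a)):
            confidence = support / item_counts[head]
            if confidence >= MIN_CONFIDENCE:
                print(f"{head} -> {tail}  support={support}  confidence={confidence:.2f}")
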
Folksonomies-Cooperative Classification and Communication Through Shared Metadata
This paper examines user-generated metadata as implemented and applied in two web services designed to share and organize digital media, in order to better understand grassroots classification. Metadata, data…
Reusing Ontological Background Knowledge in Semantic Wikis
TLDR
This paper introduces an extension of Semantic MediaWiki that incorporates schema information from existing OWL ontologies, offers automatic classification of articles, and aims at supporting the user in editing the wiki knowledge base in a logically consistent manner.