Entity Disambiguation aims to link mentions of ambiguous entities to a knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for this task, based on the assumption that information from the same semantic context tends to belong to the same topic. This paper presents a novel deep semantic relatedness model (DSRM) based on deep neural …
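For illustration only, here is a minimal sketch of the coherence signal such a relatedness model produces: two entities count as related when their knowledge-base facts overlap. The entity names and facts below are invented, and the sparse overlap score merely stands in for the learned deep projection a model like the DSRM would use.

```python
import numpy as np

# Hypothetical bags of knowledge-base facts for a few entities (facts are invented).
ENTITY_FACTS = {
    "Michael_Jordan_(basketball)": {"athlete", "nba", "chicago_bulls", "basketball"},
    "Michael_I._Jordan_(scientist)": {"professor", "machine_learning", "statistics"},
    "Chicago_Bulls": {"nba", "team", "chicago", "basketball"},
}

VOCAB = sorted(set.union(*ENTITY_FACTS.values()))

def fact_vector(entity):
    """Sparse one-hot vector over KB facts; a deep relatedness model replaces this
    with a learned dense projection, but the coherence signal it feeds is the same."""
    return np.array([1.0 if f in ENTITY_FACTS[entity] else 0.0 for f in VOCAB])

def relatedness(a, b):
    """Cosine similarity between the two entities' fact vectors."""
    va, vb = fact_vector(a), fact_vector(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# A document about basketball should prefer the athlete reading of "Jordan".
print(relatedness("Michael_Jordan_(basketball)", "Chicago_Bulls"))   # higher
print(relatedness("Michael_I._Jordan_(scientist)", "Chicago_Bulls"))  # ~0
```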
Disambiguation to Wikipedia (D2W) is the task of linking mentions of concepts in text to their corresponding Wikipedia entries. Most previous work has focused on linking terms in formal texts (e.g., newswire) to Wikipedia. Linking terms in short informal texts (e.g., tweets) is difficult for systems and humans alike, as they lack a rich disambiguation context. …
Wikification for tweets aims to automatically identify each concept mention in a tweet and link it to a concept referent in a knowledge base (e.g., Wikipedia). Because tweets are short, a collective inference model incorporating global evidence from multiple mentions and concepts is more appropriate than a non-collective approach that links each …
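As a toy illustration of collective versus non-collective linking (all candidate scores and relatedness values below are invented, not taken from the paper), a joint assignment that rewards coherent concept pairs can overturn the locally best candidate:

```python
from itertools import product

# Toy candidate lists with local (mention-level) scores.
candidates = {
    "bulls":  {"Chicago_Bulls": 0.5, "Bull": 0.5},
    "jordan": {"Michael_Jordan": 0.4, "Jordan_(country)": 0.6},
}

# Toy pairwise concept relatedness (e.g., derived from Wikipedia link overlap).
related = {("Chicago_Bulls", "Michael_Jordan"): 0.9}

def rel(a, b):
    return related.get((a, b)) or related.get((b, a)) or 0.0

def collective_link(candidates, weight=0.5):
    """Pick the joint assignment maximizing local scores plus pairwise coherence."""
    mentions = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[m] for m in mentions)):
        local = sum(candidates[m][c] for m, c in zip(mentions, combo))
        coherence = sum(rel(a, b) for i, a in enumerate(combo) for b in combo[i + 1:])
        score = local + weight * coherence
        if score > best_score:
            best, best_score = dict(zip(mentions, combo)), score
    return best

print(collective_link(candidates))
# {'bulls': 'Chicago_Bulls', 'jordan': 'Michael_Jordan'}
```

A non-collective linker would pick Jordan_(country) for "jordan" because its local score is higher; the pairwise coherence term flips the decision to Michael_Jordan.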
Information Extraction using multiple information sources and systems is beneficial due to multi-source/system consolidation and challenging due to the resulting inconsistency and redundancy. We integrate IE and truth-finding research and present a novel unsupervised multi-dimensional truth-finding framework which incorporates signals from multiple sources, …
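A rough sketch of the loop underlying truth-finding methods in general (this is generic iterative credibility propagation, not the paper's multi-dimensional framework; the source names and claims are made up): claim confidence is derived from the trustworthiness of the sources asserting it, and source trust is re-estimated from the confidence of the claims it made.

```python
# Toy source -> claim observations (invented).
claims_by_source = {
    "system_A": {"Obama_birthplace": "Honolulu", "Obama_spouse": "Michelle"},
    "system_B": {"Obama_birthplace": "Honolulu"},
    "system_C": {"Obama_birthplace": "Kenya", "Obama_spouse": "Michelle"},
}

def truth_find(claims_by_source, iters=10):
    """Alternate between estimating claim confidence and source trustworthiness."""
    trust = {s: 0.5 for s in claims_by_source}
    for _ in range(iters):
        # Claim confidence: normalized sum of the trust of sources asserting it.
        conf = {}
        for s, claims in claims_by_source.items():
            for slot, value in claims.items():
                conf[(slot, value)] = conf.get((slot, value), 0.0) + trust[s]
        for slot in {k[0] for k in conf}:
            z = sum(v for (sl, _), v in conf.items() if sl == slot)
            for key in [k for k in conf if k[0] == slot]:
                conf[key] /= z
        # Source trust: average confidence of the claims that source made.
        trust = {
            s: sum(conf[(slot, value)] for slot, value in claims.items()) / len(claims)
            for s, claims in claims_by_source.items()
        }
    return trust, conf

trust, conf = truth_find(claims_by_source)
print(trust)   # system_C ends up least trusted
print(conf)    # "Honolulu" ends up the most confident birthplace claim
```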
Ranking tweets is a fundamental task to make it easier to distill the vast amounts of information shared by users. In this paper, we explore the novel idea of ranking tweets on a topic using heterogeneous networks. We construct heterogeneous networks by harnessing cross-genre linkages between tweets and semantically related web documents from formal genres, …
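To make the heterogeneous-network idea concrete, here is a minimal power-iteration ranker over an invented tweet/document graph. The real system uses richer cross-genre edges and features, so treat this purely as a sketch of how authority can flow from formal-genre documents back to tweets.

```python
# Toy heterogeneous graph: tweets connect to web documents sharing entities or URLs.
edges = {
    "tweet_1": ["doc_nyt", "doc_cnn"],
    "tweet_2": ["doc_nyt"],
    "tweet_3": [],
    "doc_nyt": ["tweet_1", "tweet_2"],
    "doc_cnn": ["tweet_1"],
}

def rank(edges, damping=0.85, iters=50):
    """PageRank-style power iteration; dangling nodes spread mass uniformly."""
    nodes = list(edges)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, out in edges.items():
            targets = out if out else nodes
            share = score[n] / len(targets)
            for t in targets:
                new[t] += damping * share
        score = new
    return score

print(sorted(rank(edges).items(), key=lambda kv: -kv[1]))
```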
In some societies, internet users have to create information morphs (e.g., "Peace West King" to refer to "Bo Xilai") to avoid active censorship or achieve other communication goals. In this paper we aim to solve a new problem of resolving entity morphs to their real targets. We exploit temporal constraints to collect cross-source comparable corpora …
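One simple temporal signal of the kind such constraints can exploit, sketched with invented daily mention counts: a morph and its real target tend to spike together over time, so candidate targets can be ranked by correlation between their time series. This toy is only an illustration of that intuition, not the paper's full pipeline.

```python
import numpy as np

# Hypothetical daily mention counts over the same date range (values invented):
# the morph appears in censored social media, the candidates in uncensored news.
morph_counts = np.array([0, 2, 9, 30, 12, 3, 1, 0])  # "Peace West King"
candidate_counts = {
    "Bo_Xilai":      np.array([1, 3, 11, 28, 10, 4, 2, 1]),
    "Zhou_Yongkang": np.array([5, 5, 4, 6, 5, 5, 6, 5]),
}

def best_target(morph, candidates):
    """Rank candidate targets by Pearson correlation with the morph's time series."""
    scored = {name: float(np.corrcoef(morph, series)[0, 1])
              for name, series in candidates.items()}
    return max(scored, key=scored.get), scored

print(best_target(morph_counts, candidate_counts))
# ('Bo_Xilai', {...}) -- the target whose burst pattern matches the morph's
```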
This year the RPI-BLENDER team participated in the following four tasks: English Entity Linking, Regular Slot Filling, Temporal Slot Filling, and Slot Filling Validation. The major improvements were made to Regular Slot Filling and Slot Filling Validation; we developed a new system for both tasks. Our approach embraces detailed linguistic analysis and …
This paper presents a novel method to learn neural knowledge graph embeddings. The embeddings are used to compute semantic relatedness in a coherence-based semantic parser. The approach learns embeddings directly from structured knowledge representations. A deep neural network approach known as Deep Structured Semantic Modeling (DSSM) is used to scale the …
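A bare-bones sketch of a DSSM-style forward pass (word hashing into letter trigrams, nonlinear projection layers, cosine similarity between the resulting embeddings). The weights below are random placeholders rather than trained parameters, and the dimensions and inputs are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def letter_trigrams(text, dim=2048):
    """Hash letter trigrams of the text into a fixed-size count vector
    (the word-hashing input layer used by DSSM)."""
    v = np.zeros(dim)
    padded = f"#{text.lower()}#"
    for i in range(len(padded) - 2):
        v[hash(padded[i:i + 3]) % dim] += 1.0
    return v

# Random placeholder weights; in DSSM these are trained so that related
# entities receive higher cosine similarity than sampled negatives.
W1 = rng.normal(scale=0.05, size=(2048, 300))
W2 = rng.normal(scale=0.05, size=(300, 128))

def embed(text):
    h = np.tanh(letter_trigrams(text) @ W1)  # hidden layer
    y = np.tanh(h @ W2)                      # semantic embedding
    return y / np.linalg.norm(y)

def relatedness(a, b):
    """Cosine similarity between the two semantic embeddings."""
    return float(embed(a) @ embed(b))

print(relatedness("Barack Obama", "United States Senate"))
```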
This demonstration illustrates an information aggregation and summarization service for social sensing applications. Social sensing, using mobile phones and other networked devices in the possession of individuals, has gained significant popularity in recent years. In some cases, the information collected is structured, such as numeric data from temperature …
Internet users are keen on creating different kinds of morphs to avoid censorship or to express strong sentiment or humor. For example, in Chinese social media, users often use the entity morph "方便面 (Instant Noodles)" to refer to "周永康 (Zhou Yongkang)" because it shares one character, "康 (Kang)", with the well-known instant-noodle brand "康师傅 (Master Kong)" …