Publications
Sequential Click Prediction for Sponsored Search with Recurrent Neural Networks
TLDR
We introduce a novel framework for sequential click prediction based on Recurrent Neural Networks (RNNs).
RC-NET: A General Framework for Incorporating Knowledge into Word Representations
TLDR
In this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality.
Predicting information seeker satisfaction in community question answering
TLDR
In this paper we introduce the problem of predicting information seeker satisfaction in collaborative question answering communities, where we attempt to predict whether a question author will be satisfied with the answers submitted by the community participants.
Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction
TLDR
We imitate the learning process of human beings facing chaotic online news, driven by three principles: sequential content dependency, diverse influence, and effective and efficient learning.
A Probabilistic Model for Learning Multi-Prototype Word Embeddings
TLDR
We propose a probabilistic model for learning multi-prototype word embeddings.
Finding the right facts in the crowd: factoid question answering over social media
TLDR
We present a general ranking framework for factual information retrieval from social media that combines both relevance and quality.
Exploring social annotations for information retrieval
TLDR
Social annotation has gained increasing popularity in many Web-based applications, leading to an emerging research area in text analysis and information retrieval.
Learning to recognize reliable users and content in social media with coupled mutual reinforcement
TLDR
We develop a semi-supervised coupled mutual reinforcement framework for simultaneously calculating content quality and user reputation; it requires relatively few labeled examples to initialize training and improves the accuracy of search over CQA archives compared with state-of-the-art methods.
Knowledge-Powered Deep Learning for Word Embedding
TLDR
We conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings.
Dual Supervised Learning
TLDR
We propose training the models of two dual tasks simultaneously, and explicitly exploiting the probabilistic correlation between them to regularize the training process.