
- Aditya Krishna Menon, Charles Elkan
- ECML/PKDD
- 2011

We propose to solve the link prediction problem in graphs using a supervised matrix factorization approach. The model learns latent features from the topological structure of a (possibly directed) graph, and is shown to make better predictions than popular unsupervised scores. We show how these latent features may be combined with optional explicit features… (More)
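The latent-feature idea above can be sketched as a generic matrix factorization trained by gradient descent; this is an illustrative toy, not the paper's exact model (its supervised loss, feature combination, and hyperparameters are assumptions here):

```python
import numpy as np

# Sketch: factor an adjacency matrix A ~ U V^T and score candidate edges
# by the inner product U_i . V_j of the learned latent features.
rng = np.random.default_rng(0)

def factorize(A, k=2, lr=0.05, reg=0.01, epochs=200):
    n, m = A.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(epochs):
        E = A - U @ V.T              # residual on all entries
        U += lr * (E @ V - reg * U)  # gradient step on squared loss
        V += lr * (E.T @ U - reg * V)
    return U, V

# toy (possibly directed) graph given as an adjacency matrix
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
U, V = factorize(A)
scores = U @ V.T                     # higher score = more likely edge
```

In the paper's setting these latent features are additionally combined with explicit side features; the sketch keeps only the factorization core.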

In online advertising, response prediction is the problem of estimating the probability that an advertisement is clicked when displayed on a content publisher's webpage. In this paper, we show how response prediction can be viewed as a problem of matrix completion, and propose to solve it using matrix factorization techniques from collaborative filtering… (More)
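The matrix-completion view can be illustrated with a toy factorization trained only on observed (page, ad) cells; the data, update rule, and hyperparameters below are illustrative stand-ins, not the paper's method:

```python
import numpy as np

# Sketch: treat response rates as entries of a (publisher page, ad) matrix
# and complete it from observed entries only, as in collaborative filtering.
rng = np.random.default_rng(1)

def complete(shape, observed, k=2, lr=0.05, reg=0.01, epochs=1000):
    n, m = shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(epochs):
        for i, j, r in observed:              # SGD over observed cells only
            err = r - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# observed (page, ad, response-rate) triples; the (1, 1) cell is missing
obs = [(0, 0, 1.0), (0, 1, 0.0), (1, 0, 1.0)]
U, V = complete((2, 2), obs)
pred = U @ V.T                                # fills in the missing cell too
```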

This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec’s compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the MovieLens and Netflix datasets.
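A minimal AutoRec-flavoured sketch with assumed details (plain NumPy, one sigmoid hidden layer, squared loss restricted to observed ratings; unobserved ratings are simply zeroed in the input):

```python
import numpy as np

# Sketch: an autoencoder maps each item's partially observed rating vector
# through a small hidden layer and reconstructs the observed entries.
rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

R = np.array([[5., 4., 0.],       # rows: items, cols: users; 0 = unobserved
              [4., 5., 1.],
              [1., 0., 5.]])
mask = R > 0
k = 2
W1 = 0.1 * rng.standard_normal((k, R.shape[1]))   # encoder weights
W2 = 0.1 * rng.standard_normal((R.shape[1], k))   # decoder weights
lr = 0.01
for _ in range(5000):
    H = sigmoid(R @ W1.T)              # encode each item vector
    out = H @ W2.T                     # decode back to ratings
    G = (out - R) * mask               # gradient of loss, observed cells only
    W2 -= lr * G.T @ H
    W1 -= lr * ((G @ W2) * H * (1 - H)).T @ R
pred = sigmoid(R @ W1.T) @ W2.T        # reconstruction, incl. missing cells
```

The published model also includes biases and regularization; the sketch keeps only the encode-decode-on-observed-entries core.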

- Aditya Krishna Menon, Charles Elkan
- Data Mining and Knowledge Discovery
- 2010

In dyadic prediction, the input consists of a pair of items (a dyad), and the goal is to predict the value of an observation related to the dyad. Special cases of dyadic prediction include collaborative filtering, where the goal is to predict ratings associated with (user, movie) pairs, and link prediction, where the goal is to predict the presence or… (More)

- Abhishek Kumar, Shankar Vembu, Aditya Krishna Menon, Charles Elkan
- Machine Learning
- 2013

Multilabel learning is a machine learning task that is important for applications, but challenging. A recent method for multilabel learning called probabilistic classifier chains (PCCs) has several appealing properties. However, PCCs suffer from the computational issue that inference (i.e., predicting the label of an example) requires time exponential in… (More)
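The exponential inference cost is easy to see on a toy chain: exact MAP inference under a PCC multiplies per-label conditional probabilities along every one of the 2^L label vectors. The function `cond` below is a hypothetical stand-in for the trained per-label classifiers:

```python
from itertools import product

# cond(i, x, prev) stands in for the i-th classifier's estimate of
# P(y_i = 1 | x, y_1..y_{i-1}); here a toy rule where each label
# prefers to agree with its predecessor.
def cond(i, x, prev):
    return 0.7 if (i == 0 or (prev and prev[-1] == 1)) else 0.3

def exact_map(x, L):
    best, best_p = None, -1.0
    for y in product([0, 1], repeat=L):     # enumerates 2^L candidates
        p = 1.0
        for i in range(L):
            pi = cond(i, x, list(y[:i]))
            p *= pi if y[i] == 1 else 1 - pi
        if p > best_p:
            best, best_p = y, p
    return best, best_p

y, p = exact_map(x=None, L=3)               # 2^3 = 8 vectors scored
```

For L in the hundreds this enumeration is infeasible, which is the issue the beam-search-style inference in these papers addresses.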

- Abhishek Kumar, Shankar Vembu, Aditya Krishna Menon, Charles Elkan
- ECML/PKDD
- 2012

Multilabel learning is an extension of binary classification that is both challenging and practically important. Recently, a method for multilabel learning called probabilistic classifier chains (PCCs) was proposed with numerous appealing properties, such as conceptual simplicity, flexibility, and theoretical justification. However, PCCs suffer from the… (More)

- Aditya Krishna Menon, Charles Elkan
- 2010 IEEE International Conference on Data Mining
- 2010

In dyadic prediction, labels must be predicted for pairs (dyads) whose members possess unique identifiers and, sometimes, additional features called side-information. Special cases of this problem include collaborative filtering and link prediction. We present a new log-linear model for dyadic prediction that is the first to satisfy several important… (More)

Random projections are a powerful method of dimensionality reduction, noted for their simplicity and strong error guarantees. We provide a theoretical result on how projections may be used to solve a general problem, along with theoretical results on the guarantees they provide. In particular, we show how they can be… (More)
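The error guarantee referred to here is of the Johnson-Lindenstrauss flavour: a scaled Gaussian matrix approximately preserves pairwise distances with high probability. A minimal sketch (dimensions and the Gaussian construction are illustrative choices):

```python
import numpy as np

# Project n points from d dimensions down to k with a random Gaussian map;
# the 1/sqrt(k) scaling makes squared norms unbiased under the projection.
rng = np.random.default_rng(3)

d, k, n = 1000, 200, 10
X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P

# a pairwise distance before and after projection should nearly agree
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

Larger k tightens the concentration (distortion shrinks roughly like 1/sqrt(k)).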

- Aditya Krishna Menon, Charles Elkan
- TKDD
- 2011

A low-rank approximation to a matrix *A* is a matrix with significantly smaller rank than *A*, and which is close to *A* according to some norm. Many practical applications involving the use of large matrices focus on low-rank approximations. By reducing the rank or dimensionality of the data, we reduce the complexity of analyzing the data.… (More)
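The classical construction of such an approximation is the truncated SVD, which is optimal in the Frobenius norm by the Eckart-Young theorem. A short sketch on an assumed random matrix:

```python
import numpy as np

# Build a rank-<=4 matrix, then form its best rank-2 approximation by
# keeping only the top 2 singular values of its SVD.
rng = np.random.default_rng(4)

A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# Eckart-Young: the Frobenius error equals the norm of the dropped tail
err = np.linalg.norm(A - A_r)
```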

Many supervised learning problems involve learning from samples whose labels are corrupted in some way. For example, each label may be flipped with some constant probability (learning with label noise), or one may have a pool of unlabelled samples in lieu of negative samples (learning from positive and unlabelled data). This paper uses class-probability… (More)
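The class-probability-estimation angle rests on a standard identity: under known label flip rates rho_p (for positives) and rho_m (for negatives), the noisy posterior is an affine function of the clean one, so it can be inverted exactly. A sketch of that correction (flip rates are taken as known here; the paper treats the broader learning problem):

```python
import numpy as np

# If labels flip with rate rho_p when y=+1 and rho_m when y=-1, then
#   eta_noisy(x) = eta(x) * (1 - rho_p) + (1 - eta(x)) * rho_m,
# which is affine in the clean posterior eta(x) and invertible.

def noisy_posterior(eta, rho_p, rho_m):
    return eta * (1 - rho_p) + (1 - eta) * rho_m

def corrected(eta_noisy, rho_p, rho_m):
    return (eta_noisy - rho_m) / (1 - rho_p - rho_m)

eta = np.array([0.1, 0.5, 0.9])          # clean P(y=1|x) on three points
tilde = noisy_posterior(eta, 0.2, 0.1)   # what a model fit on noisy labels sees
recovered = corrected(tilde, 0.2, 0.1)   # exact recovery of the clean posterior
```

Learning from positive and unlabelled data fits the same template with rho_p = 0 and an appropriate rho_m.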