Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
TLDR
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise. The paper further shows how to estimate the label-flipping probabilities, adapting a recent noise-estimation technique to the multi-class setting, which yields an end-to-end framework.
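The "forward" variant of the loss correction can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a known row-stochastic noise transition matrix `T`; the function name and toy values below are illustrative, not from the paper:

```python
import numpy as np

def forward_corrected_nll(probs, labels, T):
    """Forward loss correction: push the model's clean-class probabilities
    through the noise transition matrix T, where T[i, j] is the probability
    that a clean label i is observed as noisy label j, then take the
    negative log-likelihood against the observed noisy labels."""
    corrected = probs @ T  # mix clean-class probabilities into noisy-label ones
    picked = corrected[np.arange(len(labels)), labels]
    return -np.mean(np.log(picked))

# Toy example: 3 samples, 2 classes, 20% symmetric label noise.
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
probs = np.array([[0.9, 0.1],   # model's softmax outputs (clean classes)
                  [0.2, 0.8],
                  [0.5, 0.5]])
labels = np.array([0, 1, 0])    # observed (possibly noisy) labels
loss = forward_corrected_nll(probs, labels, T)
```

With `T` equal to the identity (no noise), the corrected loss reduces to the ordinary negative log-likelihood.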
AutoRec: Autoencoders Meet Collaborative Filtering
TLDR
Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.
Link Prediction via Matrix Factorization
TLDR
The model learns latent features from the topological structure of a (possibly directed) graph, and is shown to make better predictions than popular unsupervised scores, and may be combined with optional explicit features for nodes or edges, which yields better performance.
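The latent-feature idea can be sketched as a plain matrix factorization of the adjacency matrix, fit by gradient descent on a regularised squared loss. This is a simplified NumPy illustration, not the paper's exact model (toy graph, hyperparameters, and variable names are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: adjacency matrix A, with 1 marking an observed edge.
A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])

k, lam, lr = 2, 0.01, 0.05          # latent dimension, L2 weight, step size
U = 0.1 * rng.standard_normal((3, k))  # latent features of source nodes
V = 0.1 * rng.standard_normal((3, k))  # latent features of target nodes

err_init = np.linalg.norm(U @ V.T - A)
for _ in range(2000):               # full-gradient descent on ||UV^T - A||^2 + reg
    E = U @ V.T - A                 # residual
    U, V = U - lr * (E @ V + lam * U), V - lr * (E.T @ U + lam * V)
err_final = np.linalg.norm(U @ V.T - A)
```

The score `U[i] @ V[j]` then ranks candidate edges (i, j); using separate source and target factors is what accommodates directed graphs.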
Long-tail learning via logit adjustment
TLDR
These techniques revisit the classic idea of logit adjustment based on the label frequencies, either applied post-hoc to a trained model, or enforced in the loss during training, to encourage a large relative margin between logits of rare versus dominant labels.
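The post-hoc variant amounts to one line of arithmetic on the logits. Below is a minimal NumPy sketch (the helper name, toy priors, and scores are illustrative, not from the paper):

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_priors, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) from each class's
    logit, shifting the decision towards rare classes."""
    return logits - tau * np.log(class_priors)

# Toy example: class 0 is dominant (90% of labels), class 1 is rare (10%).
priors = np.array([0.9, 0.1])
logits = np.array([1.0, 0.0])   # raw model scores favour the dominant class
adjusted = posthoc_logit_adjustment(logits, priors)
```

Here the raw argmax picks class 0, while the adjusted scores flip the prediction to the rare class 1, because the rare class's small prior yields a large positive offset.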
The cost of fairness in binary classification
TLDR
This work relates two existing fairness measures to cost-sensitive risks, and shows that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function.
Learning from Corrupted Binary Labels via Class-Probability Estimation
TLDR
This paper uses class-probability estimation to study corruption processes in the mutually contaminated distributions framework, showing that one can optimise balanced error and AUC without knowledge of the corruption parameters, and can minimise a range of classification risks.
Anomaly Detection using One-Class Neural Networks
TLDR
A comprehensive set of experiments demonstrate that on complex data sets (like CIFAR and PFAM), OC-NN significantly outperforms existing state-of-the-art anomaly detection methods.
Learning with Symmetric Label Noise: The Importance of Being Unhinged
TLDR
It is shown that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong l2 regularisation makes most standard learners SLN-robust.
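The unhinged loss itself is just the hinge loss without the clamp at zero. A minimal sketch (variable names are illustrative) shows the symmetry property behind its robustness to symmetric label noise:

```python
def unhinged_loss(y, score):
    """Unhinged loss l(y, v) = 1 - y*v for labels y in {-1, +1}.
    Unlike the hinge loss max(0, 1 - y*v), it is linear in the score:
    there is no clamping at zero."""
    return 1.0 - y * score

# Symmetry: for any score v, the losses on the two labels average to a
# constant, which is exactly what makes the loss SLN-robust.
v = 0.7
sym_mean = 0.5 * (unhinged_loss(1, v) + unhinged_loss(-1, v))  # always 1.0
```

Because flipping a label adds a term that does not depend on the classifier, symmetric label noise cannot change which scoring function minimises the risk.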
Response prediction using collaborative filtering with hierarchies and side-information
TLDR
This paper shows how response prediction can be viewed as a matrix-completion problem and proposes to solve it with matrix factorization techniques from collaborative filtering (CF). The factorization can be seamlessly combined with explicit features or side-information for pages and ads, combining the benefits of both approaches.
Robust, Deep and Inductive Anomaly Detection
TLDR
This paper addresses both issues in a single model, the robust autoencoder, which learns a nonlinear subspace that captures the majority of data points, while allowing for some data to have arbitrary corruption.