
- Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, Andrew Cotter
- Math. Program.
- 2007

We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy ε is Õ(1/ε). In contrast, previous…
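The alternating scheme the abstract describes — a stochastic subgradient step on the regularized hinge loss, then projection onto a ball of radius 1/√λ — can be sketched as follows. This is a minimal illustration of that Pegasos-style iteration, not the paper's implementation; the toy data and parameter values are made up for the example.

```python
import random

def pegasos(data, lam=0.1, T=1000, seed=0):
    """Sketch of the SGD-plus-projection SVM iteration described above.

    Each round samples one training example, takes a subgradient step
    on the regularized hinge loss with step size 1/(lam * t), then
    projects w back onto the ball of radius 1/sqrt(lam).
    """
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    for t in range(1, T + 1):
        x, y = rng.choice(data)
        eta = 1.0 / (lam * t)
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # subgradient step: shrink by the regularizer, add the hinge
        # term only when the margin constraint is violated
        w = [(1 - eta * lam) * wi for wi in w]
        if margin < 1:
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]
        # projection step onto the ball of radius 1/sqrt(lam)
        norm = sum(wi * wi for wi in w) ** 0.5
        bound = 1.0 / lam ** 0.5
        if norm > bound:
            w = [wi * bound / norm for wi in w]
    return w

# toy linearly separable data: the label is the sign of the first coordinate
data = [([1.0, 0.2], 1), ([0.8, -0.1], 1),
        ([-1.0, 0.3], -1), ([-0.9, -0.2], -1)]
w = pegasos(data, lam=0.1, T=2000)
```

On this separable toy set the returned `w` classifies every training point correctly; the Õ(1/ε) rate quoted in the abstract is the paper's guarantee for this kind of iteration, not something the sketch verifies.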

- Nathan Srebro, Jason D. M. Rennie, Tommi S. Jaakkola
- NIPS
- 2004

We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.

- Jason D. M. Rennie, Nathan Srebro
- ICML
- 2005

Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite-dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite program (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices…

- Nathan Srebro, Tommi S. Jaakkola
- ICML
- 2003

We study the common problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximation problems, which, unlike their unweighted version, do not admit a closed-form solution in general. We analyze, in addition, the nature of locally optimal solutions that arise…

- Maria-Florina Balcan, Avrim Blum, Nathan Srebro
- COLT
- 2008

We continue the investigation of natural conditions for a similarity function to allow learning, without requiring the similarity function to be a valid kernel, or referring to an implicit high-dimensional space. We provide a new notion of a "good similarity function" that builds upon the previous definition of Balcan and Blum (2006) but improves on it…

- Moritz Hardt, Eric Price, Nathan Srebro
- NIPS
- 2016

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination…
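The post-processing idea can be illustrated in its simplest one-dimensional special case (equalizing only true-positive rates, the "equal opportunity" relaxation discussed alongside the full criterion): mix the advantaged group's predictions with a constant predictor so the expected rates match. The data below is entirely hypothetical, and the full method in the paper equalizes both TPR and FPR with a randomized derived predictor found by linear programming — this sketch shows only the mixing mechanism.

```python
def rates(preds, labels):
    """False-positive and true-positive rate of binary predictions."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return fp / neg, tp / pos

# hypothetical base-predictor outputs and true labels for two groups
preds_a  = [1, 1, 1, 0, 1, 0, 0, 0]
labels_a = [1, 1, 1, 1, 0, 0, 0, 0]
preds_b  = [1, 1, 0, 0, 1, 1, 0, 0]
labels_b = [1, 1, 1, 1, 0, 0, 0, 0]

fpr_a, tpr_a = rates(preds_a, labels_a)   # group A: higher TPR
fpr_b, tpr_b = rates(preds_b, labels_b)

# Derived predictor for the advantaged group: with probability `keep`
# follow the base prediction, otherwise predict 0.  In expectation its
# TPR is keep * tpr_a, so keep = tpr_b / tpr_a equalizes the groups'
# true-positive rates.
keep = tpr_b / tpr_a
tpr_a_adj = keep * tpr_a
fpr_a_adj = keep * fpr_a
```

Note the adjustment only needs the predictor's outputs, labels, and group membership — the "optimally adjust any learned predictor" property the abstract highlights — at the cost of also shrinking the advantaged group's FPR.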

- Yonatan Amit, Michael Fink, Nathan Srebro, Shimon Ullman
- ICML
- 2007

This paper suggests a method for multiclass learning with many classes by simultaneously learning shared characteristics common to the classes, and predictors for the classes in terms of these characteristics. We cast this as a convex optimization problem, using *trace-norm* regularization and study gradient-based optimization both for the linear case…

- Andreas Argyriou, Rina Foygel, Nathan Srebro
- NIPS
- 2012

We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an ℓ2 penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and can thus be advantageous in sparse prediction problems. We also bound the looseness of the elastic net, thus shedding new light on it and providing…

- Nathan Srebro, Adi Shraibman
- COLT
- 2005

We study the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices having low complexity. We present generalization error bounds for predicting unobserved entries that are based on these measures. We also consider the possible relations between these measures. We show gaps between them,…

- Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
- Journal of Machine Learning Research
- 2010

The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is…