Sahand Negahban

High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and …
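One canonical example of such low-dimensional structure is a sparse regression vector. Below is a minimal numpy sketch of the ℓ1-regularized least-squares estimator (the Lasso), solved by proximal gradient descent; the problem sizes, penalty level, and function names are illustrative assumptions, not taken from the paper.

    import numpy as np

    def soft_threshold(x, tau):
        # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        # Proximal gradient (ISTA) for min_b (1/2n)||y - Xb||^2 + lam*||b||_1.
        n, p = X.shape
        step = n / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth part
        beta = np.zeros(p)
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y) / n
            beta = soft_threshold(beta - step * grad, step * lam)
        return beta

    # Toy instance with p > n and a 5-sparse ground truth.
    rng = np.random.default_rng(0)
    n, p = 50, 200
    X = rng.standard_normal((n, p))
    beta_star = np.zeros(p)
    beta_star[:5] = 1.0
    y = X @ beta_star + 0.1 * rng.standard_normal(n)
    beta_hat = lasso_ista(X, y, lam=0.1)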
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, which includes uniform entrywise sampling as a special case. We analyze the associated random observation operator and prove that, with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using …
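For concreteness, here is a minimal sketch of one standard estimator for this problem: nuclear-norm regularized completion solved by a soft-impute-style iteration, run on a toy instance with row-weighted (non-uniform) sampling. The solver, penalty level, and sampling weights are illustrative assumptions, not the paper's exact procedure or analysis.

    import numpy as np

    def svd_soft_threshold(Z, tau):
        # Singular-value soft-thresholding: prox of tau * (nuclear norm).
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

    def complete_matrix(Y, mask, lam, n_iter=200):
        # Soft-impute-style iteration: fill the unobserved entries with the
        # current low-rank fit, then shrink the singular values.
        M = np.zeros_like(Y)
        for _ in range(n_iter):
            Z = mask * Y + (1 - mask) * M   # observed entries fixed, rest imputed
            M = svd_soft_threshold(Z, lam)
        return M

    # Toy instance: a rank-2 matrix observed under row-weighted sampling.
    rng = np.random.default_rng(1)
    d1, d2, r = 60, 60, 2
    L = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
    row_p = np.linspace(0.2, 0.6, d1)[:, None]   # later rows sampled more often
    mask = (rng.random((d1, d2)) < row_p).astype(float)
    M_hat = complete_matrix(mask * L, mask, lam=0.5)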
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low-rank matrix Θ⋆ with a second matrix Γ⋆ endowed with a complementary form of low-dimensional structure; this set-up includes many …
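When X is the identity and Γ⋆ is entrywise sparse, this reduces to the familiar "low rank plus sparse" decomposition. A minimal alternating-proximal sketch of the corresponding convex program follows; the penalty levels and iteration count are illustrative, and each update is the exact prox step with the other block held fixed.

    import numpy as np

    def soft(x, tau):
        # Elementwise soft-thresholding (prox of the l1 norm).
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def svd_soft(Z, tau):
        # Singular-value soft-thresholding (prox of the nuclear norm).
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

    def decompose(Y, lam_lr, lam_sp, n_iter=100):
        # Block minimization of
        #   0.5*||Y - Theta - Gamma||_F^2 + lam_lr*||Theta||_* + lam_sp*||Gamma||_1.
        Theta = np.zeros_like(Y)
        Gamma = np.zeros_like(Y)
        for _ in range(n_iter):
            Theta = svd_soft(Y - Gamma, lam_lr)  # low-rank component update
            Gamma = soft(Y - Theta, lam_sp)      # sparse component update
        return Theta, Gamma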
We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ ∈ R^{d1×d2} on the basis of N noisy observations. The unknown matrix Θ is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the …
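Below is a minimal sketch of one such nuclear-norm regularized M-estimator, under an assumed trace-regression observation model y_i = ⟨X_i, Θ⟩ + noise, solved by proximal gradient with singular-value soft-thresholding. The observation model, dimensions, and function names are illustrative, not the paper's specific setup.

    import numpy as np

    def svd_soft(Z, tau):
        # Prox step for the nuclear norm: shrink singular values by tau.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

    def trace_regression(Xs, y, lam, shape, n_iter=300):
        # Proximal gradient for
        #   min_Theta (1/2N) * sum_i (y_i - <X_i, Theta>)^2 + lam*||Theta||_*.
        N = len(y)
        # Safe step 1/L, where L is the operator norm of the sample
        # second-moment matrix of the vectorized design matrices.
        V = np.stack([Xi.ravel() for Xi in Xs])
        step = N / np.linalg.norm(V, 2) ** 2
        Theta = np.zeros(shape)
        for _ in range(n_iter):
            resid = np.array([np.vdot(Xi, Theta) for Xi in Xs]) - y
            grad = sum(r * Xi for r, Xi in zip(resid, Xs)) / N
            Theta = svd_soft(Theta - step * grad, step * lam)
        return Theta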
Sahand Negahban, Department of EECS, Massachusetts Institute of Technology, Cambridge, MA 02139 (e-mail: sahandn@mit.edu). Pradeep Ravikumar, Department of CS, University of Texas at Austin, Austin, TX 78701 (e-mail: pradeepr@cs.utexas.edu). Martin J. Wainwright, Department of EECS and Statistics, University of California, Berkeley, Berkeley, CA 94720 (e-mail: …
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to …
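As one concrete instance of the projected-gradient side, here is a sketch for least squares constrained to an ℓ1-ball, using the standard sort-and-threshold Euclidean projection; the instance and radius are illustrative, and the paper's analysis covers a far broader family of losses and regularizers.

    import numpy as np

    def project_l1_ball(v, radius):
        # Euclidean projection onto {x : ||x||_1 <= radius} via the
        # standard sort-and-threshold construction.
        if np.abs(v).sum() <= radius:
            return v
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        ks = np.arange(1, len(u) + 1)
        rho = np.nonzero(u * ks > css - radius)[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def projected_gradient(X, y, radius, n_iter=500):
        # Projected gradient for min (1/2n)||y - Xb||^2 s.t. ||b||_1 <= radius.
        n, p = X.shape
        step = n / np.linalg.norm(X, 2) ** 2
        beta = np.zeros(p)
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y) / n
            beta = project_l1_ball(beta - step * grad, radius)
        return beta

The composite-gradient variant replaces the projection with the regularizer's proximal operator, as in the soft-thresholding steps sketched earlier.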
We propose a general framework for increasing the local stability of Artificial Neural Nets (ANNs) using Robust Optimization (RO). We achieve this through an alternating minimization-maximization procedure, in which the loss of the network is minimized over perturbed examples that are generated at each parameter update. We show that adversarial training of ANNs …
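A minimal numpy sketch of this alternating scheme on a logistic-regression "network", where the inner maximization is approximated by a single signed-gradient (FGSM-style) step of size eps: the model, eps, and learning rate are illustrative stand-ins for the paper's setup.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def adversarial_train(X, y, eps=0.1, lr=0.1, n_epochs=100):
        # Alternating min-max: perturb each example by an eps-bounded step that
        # increases the logistic loss, then take a gradient step on the
        # perturbed batch.
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(n_epochs):
            margin = sigmoid(X @ w) - y            # dloss/dz for logistic loss
            grad_x = margin[:, None] * w[None, :]  # gradient of loss w.r.t. inputs
            X_adv = X + eps * np.sign(grad_x)      # worst-case l_inf perturbation
            w -= lr * X_adv.T @ (sigmoid(X_adv @ w) - y) / n
        return w

    # Toy binary classification data.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 10))
    y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(float)
    w = adversarial_train(X, y)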
The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking online gamers (e.g., MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a …
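One way to make this concrete: build a random walk on the comparison graph whose transitions toward item j are proportional to how often j beats its neighbors, and score items by the stationary distribution. A minimal sketch under an assumed Bradley-Terry-style toy model follows; the function names and sizes are illustrative.

    import numpy as np

    def rank_centrality(wins):
        # wins[i, j] = empirical fraction of i-vs-j comparisons won by j.
        # Random walk: from i, move to j with probability wins[i, j] / n;
        # stay put with the leftover mass.  Items are scored by the
        # stationary distribution pi = pi @ P, found by power iteration.
        n = wins.shape[0]
        P = wins / n
        np.fill_diagonal(P, 0.0)
        np.fill_diagonal(P, 1.0 - P.sum(axis=1))
        pi = np.ones(n) / n
        for _ in range(2000):
            pi = pi @ P
            pi /= pi.sum()
        return pi

    # Toy model: item scores 3 > 2 > 1, win rates wins[i, j] = w_j / (w_i + w_j).
    w = np.array([3.0, 2.0, 1.0])
    wins = w[None, :] / (w[None, :] + w[:, None])
    print(np.argsort(-rank_centrality(wins)))  # expected order: 0, 1, 2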