Matrix reconstruction with the local max norm
@inproceedings{Foygel2012MatrixRW, title={Matrix reconstruction with the local max norm}, author={Rina Foygel and Nathan Srebro and Ruslan Salakhutdinov}, booktitle={NIPS}, year={2012} }
We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the…
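To make the interpolation concrete, here is a minimal NumPy sketch written under the assumption that a local-max-style norm can be read as the largest weighted trace norm over a restricted set of row and column weight vectors: a singleton set recovers a (weighted) trace norm, while unrestricted weights behave like the more conservative max norm. The `tau` parameterization of the weight sets and the Monte-Carlo maximization below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def weighted_trace_norm(X, r, c):
    """Trace (nuclear) norm of diag(sqrt(r)) @ X @ diag(sqrt(c)),
    where r and c are row/column weight vectors summing to 1."""
    return np.linalg.norm(np.sqrt(r)[:, None] * X * np.sqrt(c)[None, :], ord="nuc")

def local_max_style_norm(X, tau, n_samples=1000, seed=None):
    """Crude Monte-Carlo estimate of the largest weighted trace norm over
    weights mixed between the uniform distribution (weight 1 - tau) and a
    free distribution (weight tau). tau = 0 gives the uniform trace norm;
    tau = 1 allows arbitrary weights and behaves like the max norm
    (up to constants). Illustrative only, not the paper's definition."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    best = weighted_trace_norm(X, np.full(n, 1 / n), np.full(m, 1 / m))
    for _ in range(n_samples):
        r = (1 - tau) / n + tau * rng.dirichlet(np.ones(n))
        c = (1 - tau) / m + tau * rng.dirichlet(np.ones(m))
        best = max(best, weighted_trace_norm(X, r, c))
    return best

X = np.random.default_rng(0).standard_normal((30, 40))
for tau in (0.0, 0.5, 1.0):
    print(tau, round(local_max_style_norm(X, tau, n_samples=500, seed=1), 3))
```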
20 Citations
Matrix completion with the trace norm: learning, bounding, and transducing
- Computer Science, J. Mach. Learn. Res.
- 2014
This paper argues that previous difficulties stemmed in part from a mismatch between the standard learning-theoretic modeling of matrix completion and its practical application, and provides experimental and theoretical evidence that models addressing this mismatch lead to a modest yet significant improvement.
Stochastic Optimization for Max-Norm Regularizer via Matrix Factorization
- Computer Science
- 2014
An online algorithm for solving max-norm regularized problems that is scalable to large problems; matrix decomposition is treated as the running example, although the analysis can also be applied to other problems such as matrix completion.
Online optimization for max-norm regularization
- Computer Science, Machine Learning
- 2017
This paper proposes an online algorithm that is scalable to large problems and proves that the sequence of solutions produced by the algorithm converges asymptotically to a stationary point of the expected loss function.
Online Optimization for Large-Scale Max-Norm Regularization
- Computer Science
- 2014
An online algorithm for the matrix decomposition problem that is scalable to large settings, with a proof that the sequence of solutions produced by the algorithm converges asymptotically to a stationary point of the expected loss function.
Fine-grained Generalization Analysis of Inductive Matrix Completion
- Computer Science, NeurIPS
- 2021
Introduces the (smoothed) adjusted trace-norm minimization strategy, an inductive analogue of the weighted trace norm, and confirms that it outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool in the arsenal of methods for matrix completion using side information.
Near-optimal sample complexity for convex tensor completion
- Computer Science, Information and Inference: A Journal of the IMA
- 2018
It is proved that solving an M-norm constrained least squares (LS) problem results in nearly optimal sample complexity for low-rank tensor completion (TC), and that the resulting bounds are nearly minimax rate-optimal.
Enhanced Low-Rank Matrix Approximation
- Computer Science, IEEE Signal Processing Letters
- 2016
This letter employs parameterized nonconvex penalty functions to estimate the nonzero singular values of low-rank matrices more accurately than the nuclear norm, by formulating a convex optimization problem with nonconvex regularization.
LLORMA: Local Low-Rank Matrix Approximation
- Computer Science, J. Mach. Learn. Res.
- 2016
This paper proposes, analyzes, and experiments with two procedures, one parallel and the other global, for constructing local matrix approximations, which approximate the observed matrix as a weighted sum of low-rank matrices.
Column generation for atomic norm regularization
- Computer Science
- 2016
We consider optimization problems that consist in minimizing a quadratic function regularized by an atomic norm or an atomic gauge. We propose to solve difficult problems in this family with a column…
Interactions between rank and sparsity in penalized estimation, and detection of structured objects
- Computer Science
- 2014
Following recent successes in learning ad hoc representations for similar problems, deformable part models are integrated with high-dimensional features from convolutional neural networks, which is shown to significantly decrease the error rates of existing part-based models.
References
Practical Large-Scale Optimization for Max-norm Regularization
- Computer Science, NIPS
- 2010
This work uses a factorization technique of Burer and Monteiro to devise scalable first-order algorithms for convex programs involving the max-norm, and these algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems.
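As a rough sketch of the factorization idea (not the algorithm from this reference), the routine below runs projected gradient descent on a factorization X = U Vᵀ for matrix completion, keeping every factor row inside a ball of radius √B; since the max norm of X is at most the product of the largest row norms of U and V, this enforces a max-norm bound of B. The function name, step size, and rank are placeholder choices.

```python
import numpy as np

def maxnorm_completion(M, mask, rank=10, B=2.0, lr=0.05, iters=500, seed=0):
    """Projected gradient sketch for max-norm-bounded matrix completion:
    X = U @ V.T with every row of U and V kept inside a sqrt(B)-ball,
    which guarantees ||X||_max <= B. Illustrative only."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M)            # residual on observed entries
        gU, gV = R @ V, R.T @ U             # gradients of 0.5 * ||R||_F^2
        U -= lr * gU
        V -= lr * gV
        for W in (U, V):                    # project rows back onto the ball
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            W *= np.minimum(1.0, np.sqrt(B) / np.maximum(norms, 1e-12))
    return U @ V.T
```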
Learning with the weighted trace-norm under arbitrary sampling distributions
- Computer Science, NIPS
- 2011
The standard weighted trace norm might fail when the sampling distribution is not a product distribution; a corrected variant is presented for which strong learning guarantees are established, and it is suggested that, even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.
Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm
- Computer Science, NIPS
- 2010
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm…
Concentration-Based Guarantees for Low-Rank Matrix Reconstruction
- Computer Science, COLT
- 2011
This work investigates the problem of approximately reconstructing a partially observed, approximately low-rank matrix using both the trace-norm and the less-studied max-norm, and presents reconstruction guarantees based on existing analyses of the Rademacher complexity of the unit balls of these norms.
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2012
The matrix completion problem is considered under a form of row/column-weighted entrywise sampling, which includes uniform entrywise sampling as a special case, and it is proved that, with high probability, the sampling operator satisfies a form of restricted strong convexity with respect to a weighted Frobenius norm.
Rank, Trace-Norm and Max-Norm
- Computer Science, Mathematics, COLT
- 2005
We study the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices having low complexity. We present generalization error bounds…
A rank minimization heuristic with application to minimum order system approximation
- Computer Science, Mathematics, Proceedings of the 2001 American Control Conference (Cat. No.01CH37148)
- 2001
It is shown that the heuristic of replacing the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm, can be reduced to a semidefinite program and hence solved efficiently.
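A toy instance of this heuristic, sketched with the cvxpy modeling library (assumed available): among all matrices agreeing with the observed entries, minimize the nuclear norm, i.e. the sum of singular values, as a convex surrogate for rank.

```python
import cvxpy as cp
import numpy as np

# Toy data: observe roughly half the entries of a rank-2 matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
W = (rng.random(M.shape) < 0.5).astype(float)   # observation mask

# Nuclear-norm heuristic: minimize the sum of singular values subject to
# matching the observed entries; cvxpy handles the semidefinite reformulation.
X = cp.Variable(M.shape)
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [cp.multiply(W, X - M) == 0])
prob.solve()

# The recovered matrix should have only a couple of non-negligible singular values.
print(np.linalg.svd(X.value, compute_uv=False)[:5].round(3))
```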
Matrix Completion from Noisy Entries
- Computer Science, J. Mach. Learn. Res.
- 2010
This work studies a low-complexity algorithm, introduced in [1] and called OptSpace here, based on a combination of spectral techniques and manifold optimization, and proves performance guarantees that are order-optimal in a number of circumstances.
Fast maximum margin matrix factorization for collaborative prediction
- Computer Science, ICML
- 2005
This work investigates a direct gradient-based optimization method for MMMF, finds that MMMF substantially outperforms all nine methods tested in an earlier comparative study, and demonstrates it on large collaborative prediction problems.
Probabilistic Matrix Factorization
- Computer Science, NIPS
- 2007
The Probabilistic Matrix Factorization (PMF) model is presented, which scales linearly with the number of observations and performs well on the large, sparse, and very imbalanced Netflix dataset; the model is further extended to include an adaptive prior on the model parameters.
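A minimal sketch of the basic (non-adaptive) PMF objective, under the standard reading that MAP estimation with Gaussian priors reduces to squared error on the observed ratings plus L2 penalties on the user and item factors; the SGD updates and hyperparameters below are illustrative choices, not the paper's exact training procedure.

```python
import numpy as np

def pmf_sgd(ratings, n_users, n_items, rank=10, lam=0.05, lr=0.01, epochs=30, seed=0):
    """Fit user/item factors by SGD on a PMF-style objective:
    sum over observed (u, i, r) of (U[u] @ V[i] - r)^2 plus L2 penalties.
    `ratings` is a list of (user, item, value) triples. Illustrative only."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for idx in rng.permutation(len(ratings)):
            u, i, r = ratings[idx]
            err = U[u] @ V[i] - r
            gu = err * V[i] + lam * U[u]    # gradient w.r.t. the user factor
            gv = err * U[u] + lam * V[i]    # gradient w.r.t. the item factor
            U[u] -= lr * gu
            V[i] -= lr * gv
    return U, V

# Tiny usage example with synthetic ratings.
ratings = [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 5.0), (2, 1, 1.0)]
U, V = pmf_sgd(ratings, n_users=3, n_items=2, epochs=50)
print((U @ V.T).round(2))
```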