Corpus ID: 2847351

Matrix reconstruction with the local max norm

Rina Foygel, Nathan Srebro, Ruslan Salakhutdinov
We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the… 
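As a concrete anchor for the norms the abstract interpolates between, here is a minimal NumPy sketch (function names are ours, not the paper's) of the trace norm and the weighted trace norm. The max norm and the local max norm themselves require solving an optimization problem and are not shown.

```python
import numpy as np

def trace_norm(X):
    # Trace (nuclear) norm: the sum of the singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

def weighted_trace_norm(X, p, q):
    # Weighted trace norm with row marginals p and column marginals q:
    # ||diag(sqrt(p)) X diag(sqrt(q))||_*. With uniform p and q this is
    # a scaled version of the unweighted trace norm.
    return trace_norm(np.sqrt(p)[:, None] * X * np.sqrt(q)[None, :])
```

With uniform marginals p = q = 1/n, the weighted norm of the n-by-n identity is 1 rather than n, illustrating the rescaling.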


Matrix completion with the trace norm: learning, bounding, and transducing

This paper argues that previous difficulties stemmed in part from a mismatch between the standard learning-theoretic modeling of matrix completion and its practical application, and provides experimental and theoretical evidence that better-matched models lead to a modest yet significant improvement.

Stochastic Optimization for Max-Norm Regularizer via Matrix Factorization

Proposes an online algorithm for solving max-norm regularized problems that scales to large instances; matrix decomposition is treated as the running example, although the analysis also applies to other problems such as matrix completion.

Online optimization for max-norm regularization

This paper proposes an online algorithm that is scalable to large problems and proves that the sequence of solutions produced by the algorithm converges asymptotically to a stationary point of the expected loss function.

Online Optimization for Large-Scale Max-Norm Regularization

An online algorithm for the matrix decomposition problem that is scalable to large-scale settings; the sequence of solutions it produces is proven to converge asymptotically to a stationary point of the expected loss function.

Fine-grained Generalization Analysis of Inductive Matrix Completion

Introduces the (smoothed) adjusted trace-norm minimization strategy, an inductive analogue of the weighted trace norm, and confirms that it outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool among methods for matrix completion with side information.

Near-optimal sample complexity for convex tensor completion

It is proved that solving an M-norm constrained least squares (LS) problem yields nearly optimal sample complexity for low-rank tensor completion (TC), and the resulting bounds are nearly minimax rate-optimal.

Enhanced Low-Rank Matrix Approximation

This letter employs parameterized nonconvex penalty functions to estimate the nonzero singular values of low-rank matrices more accurately than the nuclear norm, formulating a convex optimization problem with nonconvex regularization.

LLORMA: Local Low-Rank Matrix Approximation

This paper proposes, analyzes, and experiments with two procedures, one parallel and the other global, for constructing local matrix approximations, which approximate the observed matrix as a weighted sum of low-rank matrices.

Column generation for atomic norm regularization

We consider optimization problems that consist in minimizing a quadratic function regularized by an atomic norm or an atomic gauge. We propose to solve difficult problems in this family with a column generation algorithm.

Interactions between rank and sparsity in penalized estimation, and detection of structured objects

Following recent successes in learning ad hoc representations for similar problems, deformable part models are integrated with high-dimensional features from convolutional neural networks, which significantly decreases the error rates of existing part-based models.

Practical Large-Scale Optimization for Max-norm Regularization

This work uses a factorization technique of Burer and Monteiro to devise scalable first-order algorithms for convex programs involving the max-norm, and applies these algorithms to huge collaborative filtering, graph cut, and clustering problems.
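A sketch, under our own naming, of the two ingredients such factored first-order methods rest on: the factorization characterization of the max norm, ||X||_max = min over X = UVᵀ of (max row norm of U)·(max row norm of V), and the row-norm projection that enforces a max-norm bound on the factors.

```python
import numpy as np

def max_norm_upper_bound(U, V):
    # Any factorization X = U @ V.T certifies
    # ||X||_max <= (max row norm of U) * (max row norm of V).
    return np.linalg.norm(U, axis=1).max() * np.linalg.norm(V, axis=1).max()

def clip_row_norms(U, b):
    # Project each row of U onto the Euclidean ball of radius sqrt(b).
    # Applying this to both factors keeps the bound above at most b.
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return U * np.minimum(1.0, np.sqrt(b) / np.maximum(norms, 1e-12))
```

A projected-gradient loop alternates a gradient step on the factors with this clipping; the details (step sizes, losses) vary by paper.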

Learning with the weighted trace-norm under arbitrary sampling distributions

The standard weighted trace norm might fail when the sampling distribution is not a product distribution; a corrected variant is presented for which strong learning guarantees are established, and it is suggested that even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.

Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm

We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm regularizer remains effective under non-uniform sampling.

Concentration-Based Guarantees for Low-Rank Matrix Reconstruction

This work investigates the problem of approximately reconstructing a partially observed, approximately low-rank matrix using both the trace norm and the less-studied max norm, and presents reconstruction guarantees based on existing analyses of the Rademacher complexity of the unit balls of these norms.

Restricted strong convexity and weighted matrix completion: Optimal bounds with noise

The matrix completion problem under a form of row/column-weighted entrywise sampling is considered, including uniform entrywise sampling as a special case, and it is proved that, with high probability, the sampling operator satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm.

Rank, Trace-Norm and Max-Norm

We study the rank, trace-norm and max-norm as complexity measures of matrices, focusing on the problem of fitting a matrix with matrices having low complexity. We present generalization error bounds for these measures.

A rank minimization heuristic with application to minimum order system approximation

It is shown that the heuristic of replacing the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm, can be reduced to a semidefinite program and hence solved efficiently.
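Part of what makes this surrogate practical is that the nuclear norm has a closed-form proximal operator: singular-value soft-thresholding. A minimal sketch of that standard identity (our code, not the paper's):

```python
import numpy as np

def svt(X, tau):
    # Proximal operator of tau * ||.||_*: soft-threshold the singular
    # values of X, zeroing out any that fall below tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because small singular values are set exactly to zero, iterating this operator inside a proximal-gradient scheme drives solutions toward low rank.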

Matrix Completion from Noisy Entries

This work studies a low-complexity algorithm, introduced in [1], based on a combination of spectral techniques and manifold optimization, here called OPTSPACE, and proves performance guarantees that are order-optimal in a number of circumstances.

Fast maximum margin matrix factorization for collaborative prediction

This work investigates a direct gradient-based optimization method for MMMF, finds that MMMF substantially outperforms all nine methods tested, and demonstrates it on large collaborative prediction problems.
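The gradient-based approach can be sketched on the factored objective that underlies MMMF, using the identity ||X||_* = min over X = UVᵀ of (||U||_F² + ||V||_F²)/2. This is a squared-loss stand-in, not the paper's smooth-hinge objective, and all names are illustrative:

```python
import numpy as np

def mmmf_gd(Y, mask, rank=5, lam=0.1, lr=0.01, iters=500, seed=0):
    # Gradient descent on the factored objective
    #   0.5 * ||mask * (U V^T - Y)||_F^2 + (lam/2) * (||U||_F^2 + ||V||_F^2),
    # whose regularizer upper-bounds lam * ||U V^T||_*.
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - Y)        # residual on observed entries only
        U -= lr * (R @ V + lam * U)
        V -= lr * (R.T @ U + lam * V)
    return U, V
```

The factored problem is nonconvex, but optimizing the factors directly is what makes the method fast at scale.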

Probabilistic Matrix Factorization

The Probabilistic Matrix Factorization (PMF) model is presented, which scales linearly with the number of observations and performs well on the large, sparse, and very imbalanced Netflix dataset and is extended to include an adaptive prior on the model parameters.
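The linear scaling in the number of observations comes from updating only the rows touched by each observed rating. A hedged SGD sketch of that MAP-style objective (Gaussian likelihood with Gaussian priors reduces to L2-regularized factorization; names are ours):

```python
import numpy as np

def pmf_sgd(triples, n, m, rank=5, lam=0.05, lr=0.05, epochs=20, seed=0):
    # Each epoch visits only the observed (i, j, rating) triples, so the
    # cost per epoch grows linearly with the number of observations.
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        for i, j, r in triples:
            e = U[i] @ V[j] - r   # prediction error on this rating
            U[i], V[j] = (U[i] - lr * (e * V[j] + lam * U[i]),
                          V[j] - lr * (e * U[i] + lam * V[j]))
    return U, V
```

The simultaneous tuple assignment uses the old U[i] when updating V[j], matching the usual SGD derivation.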