Sharp Time–Data Tradeoffs for Linear Inverse Problems

@article{Oymak2018SharpTT,
  title={Sharp Time–Data Tradeoffs for Linear Inverse Problems},
  author={Samet Oymak and Benjamin Recht and Mahdi Soltanolkotabi},
  journal={IEEE Transactions on Information Theory},
  year={2018},
  volume={64},
  pages={4129-4158}
}
In this paper, we characterize sharp time–data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of… 
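As a purely illustrative sketch of the problem class described in the abstract, the snippet below runs projected gradient descent on a least-squares objective over an ℓ1-ball, i.e., a sub-level set of the ℓ1 penalty. The ℓ1 constraint, the sort-based projection routine, the fixed step size, and the iteration count are assumptions made for illustration only; the paper's analysis covers general penalty functions and measurement ensembles.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} (sort-based routine)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - radius))[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, y, radius, n_iters=300):
    """Minimize 0.5*||Ax - y||^2 subject to ||x||_1 <= radius."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                # gradient of the least-squares loss
        x = project_l1_ball(x - step * grad, radius)   # project back onto the constraint set
    return x

# Toy usage: recover a sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
n, d, s = 100, 400, 5
x0 = np.zeros(d)
x0[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_hat = projected_gradient(A, A @ x0, radius=np.abs(x0).sum())
print(np.linalg.norm(x_hat - x0))
```

The toy example uses the exact ℓ1-norm of the true signal as the constraint radius; in practice this side information would have to be estimated.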


Gradient descent with nonconvex constraints: local concavity determines convergence
TLDR
This paper develops the notion of local concavity coefficients of the constraint set, measuring the extent to which convexity is violated, which govern the behavior of projected gradient descent over this set and provides a convergence analysis when projections are calculated only approximately.
Time-Data Tradeoffs in Structured Signals Recovery via Proximal-Gradient Homotopy Method
TLDR
It is demonstrated that, in the absence of the strong convexity assumption, the proximal-gradient homotopy update can achieve a linear rate of convergence when the number of measurements is sufficiently large.
Structure-Adaptive, Variance-Reduced, and Accelerated Stochastic Optimization
TLDR
This work proposes an adaptive variant of the two-stage APCG method which does not need to know the restricted strong convexity parameter beforehand but instead estimates it on the fly, and enjoys a local accelerated linear convergence rate with respect to the low-dimensional structure of the solution.
Tradeoffs Between Convergence Speed and Reconstruction Accuracy in Inverse Problems
TLDR
It is shown that using a coarse estimate of this set may lead to faster convergence at the cost of an additional reconstruction error related to the accuracy of the set approximation, which may provide a possible explanation for the successful approximation of the $\ell_1$-minimization solution by neural networks with layers representing iterations.
Alternating minimization and alternating descent over nonconvex sets
TLDR
This work analyzes the performance of alternating minimization for loss functions optimized over two variables, where each variable may be restricted to lie in some potentially nonconvex constraint set, and relies on the notion of local concavity coefficients, proposed by Barber and Ha to measure and quantify the concavity of a general nonconvex set.
Inexact Gradient Projection and Fast Data Driven Compressed Sensing
TLDR
This work considers different notions of approximation and shows that the progressive fixed-precision and the $(1+\varepsilon)$-optimal oracles can achieve the same accuracy as the exact IPG algorithm under the same embedding assumption.
Generalized Line Spectral Estimation via Convex Optimization
TLDR
It is proved that the frequencies and amplitudes of the components of the mixture can be recovered perfectly from a near-minimal number of observations via this convex program, provided the frequencies are sufficiently separated, and the linear measurements obey natural conditions that are satisfied in a variety of applications.
Fast and Reliable Parameter Estimation from Nonlinear Observations
TLDR
A framework for characterizing time–data tradeoffs for a variety of parameter estimation algorithms when the nonlinear function f is unknown is developed, and it is shown that a projected gradient descent scheme converges at a linear rate to a reliable solution with a near-minimal number of samples.
Constrained Optimization Involving Nonconvex 𝓁p Norms: Optimality Conditions, Algorithm and Convergence
TLDR
The optimality conditions for characterizing the local minimizers of the constrained optimization problems involving an ℓp norm (0 < p < 1) of the variables, which may appear in either the objective or the constraint, are investigated.
A simple homotopy proximal mapping algorithm for compressive sensing
TLDR
It is proved that when the measurement matrix satisfies the restricted isometry property (RIP), one of the proposed algorithms with an appropriate setting of a parameter based on the RIP constants converges linearly to the optimal solution up to the noise level.
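Several of the entries above (the proximal-gradient homotopy and homotopy proximal mapping papers) revolve around homotopy continuation on the regularization parameter. The sketch below is a minimal, assumption-laden illustration of that generic idea, ISTA warm-started along a geometrically decreasing sequence of ℓ1 penalties; the decay factor, inner iteration count, and starting penalty are arbitrary choices, and this is not the specific algorithm of any paper listed here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_homotopy(A, y, lam_target, decay=0.7, inner_iters=50):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 for a geometrically
    decreasing sequence of lam values, warm-starting each stage at the
    previous stage's solution."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the smooth part
    lam = np.abs(A.T @ y).max()                # above this penalty the solution is 0
    x = np.zeros(A.shape[1])
    while True:
        for _ in range(inner_iters):           # ISTA steps at the current penalty level
            x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
        if lam <= lam_target:
            return x
        lam = max(decay * lam, lam_target)     # continuation: shrink the penalty
```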

References

SHOWING 1-10 OF 120 REFERENCES
Fast global convergence rates of gradient methods for high-dimensional statistical recovery
TLDR
The theory guarantees that Nesterov's first-order method has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical Euclidean distance between the true unknown parameter θ* and the optimal solution θ̂.
Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information
TLDR
It is shown that, if precise information about the value f(x_0) or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation, and the reconstruction error becomes proportional to the “sparsity” of the signal rather than to the ambient dimension of the noise vector.
Null space conditions and thresholds for rank minimization
TLDR
This paper characterizes properties of the null space of the linear operator defining the constraint set that are necessary and sufficient for the heuristic to succeed, and obtains dimension-free bounds under which these null space properties hold almost surely as the matrix dimensions tend to infinity.
A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
TLDR
This paper shows that under suitable assumptions for sparse recovery, the proposed homotopy strategy ensures that all iterates along the homotopy sol...
The Convex Geometry of Linear Inverse Problems
TLDR
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
Decomposable norm minimization with proximal-gradient homotopy algorithm
TLDR
It is shown that if the linear sampling matrix satisfies certain assumptions and the regularizing norm is decomposable, the proximal-gradient homotopy algorithm converges at a linear rate even though the objective function is not strongly convex.
Gradient methods for minimizing composite objective function
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and …
Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements
TLDR
The first guarantees for universal measurements (i.e. which work for all sparse functions) with reasonable constants are proved, based on the technique of geometric functional analysis and probability in Banach spaces.
Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems
TLDR
This paper proposes gradient projection algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems and tests variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai–Borwein method.
Simple bounds for recovering low-complexity models
TLDR
A unified analysis of the recovery of simple objects from random linear measurements shows that an s-sparse vector in $\mathbb{R}^n$ can be efficiently recovered from 2s log n measurements with high probability, and a rank-r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability.