Interpolation via weighted $l_1$ minimization

@article{Rauhut2013InterpolationVW,
  title={Interpolation via weighted $l_1$ minimization},
  author={Holger Rauhut and Rachel A. Ward},
  journal={arXiv: Functional Analysis},
  year={2013}
}
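
The convex program the paper analyzes recovers a sparse coefficient vector from sample values by minimizing a weighted ℓ1 norm subject to interpolation constraints. A minimal sketch of that program (cvxpy is an assumed dependency; the matrix, weights, and sizes are illustrative placeholders, not the paper's setup):

```python
# Sketch of weighted l1 minimization:
#   minimize  sum_j w_j |x_j|   subject to  A x = y
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 30, 100, 5

A = rng.standard_normal((m, n)) / np.sqrt(m)      # sampling matrix (placeholder)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                    # noiseless samples

w = np.sqrt(np.arange(1, n + 1))                  # weights growing with index (illustrative)
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y])
problem.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

Weights that grow with the index penalize high-order coefficients more heavily, which is what steers the solution toward smooth interpolants.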

Citations

Infinite-Dimensional Compressed Sensing and Function Interpolation
  • B. Adcock
  • Computer Science
  • Found. Comput. Math.
  • 2018
TLDR
A framework for function interpolation using compressed sensing is introduced that requires no a priori bounds on the expansion tail, in either its implementation or its theoretical guarantees, and that in the absence of noise leads to genuinely interpolatory approximations.
Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements
  • B. Adcock, Yi Sui
  • Computer Science, Mathematics
  • Constructive Approximation
  • 2019
TLDR
This work considers the sparse polynomial approximation of a multivariate function on a tensor-product domain from samples of both the function and its gradient, and shows that, for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger Sobolev norm, as opposed to the $L^2$-norm in the unaugmented case.
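
As a rough illustration of gradient-augmented sampling, the sketch below stacks values and first derivatives of a one-dimensional Legendre basis at random points into a single measurement matrix; the cited work's construction is multivariate and more refined, so all names and sizes here are illustrative:

```python
# Sketch: gradient-augmented sampling of a 1-D Legendre basis.
# The stacked matrix holds basis values and basis derivatives at the
# sample points, doubling the rows obtained per sample point.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
m, n = 12, 40                               # sample points, basis size
pts = rng.uniform(-1.0, 1.0, m)

Phi = L.legvander(pts, n - 1)               # Phi[i, k] = P_k(pts[i])
dPhi = np.column_stack([
    L.legval(pts, L.legder(np.eye(n)[k]))   # P_k'(pts[i])
    for k in range(n)
])
A = np.vstack([Phi, dPhi])                  # augmented measurement matrix
```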
Recovery Analysis for Weighted ℓ1-Minimization Using a Null Space Property
A Weighted 𝓁1-Minimization Approach For Wavelet Reconstruction of Signals and Images
TLDR
This work recovers the wavelet coefficients associated with the functional representation of the object of interest by solving the proposed optimization problem, gives a specific choice of weights, and shows numerically that the chosen weights admit efficient recovery of objects of interest from either a set of sub-samples or a noisy version.
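
One plausible instance of such a weight choice is to let the weights grow with the wavelet level, so that fine-scale coefficients are penalized more; a sketch assuming pywt, with a growth rate that is illustrative rather than the paper's exact prescription:

```python
# Sketch: scale-dependent weights for wavelet coefficients.
# Coarse levels get small weights, fine scales larger ones.
import numpy as np
import pywt

signal = np.random.default_rng(2).standard_normal(256)
coeffs = pywt.wavedec(signal, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]

weights = np.concatenate([
    np.full(len(c), 2.0 ** (j / 2.0))           # weight for level block j (illustrative rate)
    for j, c in enumerate(coeffs)
])
```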
New Conditions on Stable Recovery of Weighted Sparse Signals via Weighted $l_1$ Minimization
TLDR
It is demonstrated that the weighted RIP with small $\delta_{\mathbf{w},2s}$ implies the weighted $l_1$-robust NSP of order $s$.
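
For orientation, a sketch of the objects involved, following the weighted-sparsity framework of Rauhut and Ward; the robust NSP below is one common formulation, with constants $\rho \in (0,1)$ and $\tau > 0$:

```latex
% weighted sparsity and weighted l1 norm, for weights w_j >= 1
\|x\|_{w,0} = \sum_{j:\,x_j \neq 0} w_j^2, \qquad
\|x\|_{w,1} = \sum_j w_j |x_j|

% weighted RIP constant \delta_{w,s}: the smallest \delta with
(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2
\quad \text{whenever } \|x\|_{w,0} \le s

% weighted l1-robust NSP of order s: for all v and all S with
% \sum_{j \in S} w_j^2 \le s,
\|v_S\|_2 \le \frac{\rho}{\sqrt{s}}\,\|v_{S^c}\|_{w,1} + \tau\,\|Av\|_2
```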
Convergence bounds for nonlinear least squares and applications to tensor recovery
TLDR
This work considers the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed, and derives a new bound that is able to utilize the regularity of the sought function.
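
A tiny sketch of the weighted Monte Carlo estimate such analyses start from: draw samples from a proposal density and reweight toward the target measure (function and density names are placeholders):

```python
# Sketch: weighted Monte Carlo estimate of ||f||_{L2(rho)},
# sampling from a proposal density nu and reweighting by rho/nu.
import numpy as np

def weighted_l2_estimate(f, sample_nu, rho, nu, n=10_000, seed=3):
    pts = sample_nu(np.random.default_rng(seed), n)   # draws from nu
    w = rho(pts) / nu(pts)                            # importance weights
    return np.sqrt(np.mean(w * f(pts) ** 2))
```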
Infinite-Dimensional $\ell^1$ Minimization and Function Approximation from Pointwise Data
TLDR
It is shown that weighted $\ell^1$ minimization with Jacobi polynomials leads to an optimal method for approximating smooth, one-dimensional functions from scattered data.
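
The weight choice recurring in this line of work is $w_j \ge \|\psi_j\|_\infty$; for orthonormal Legendre polynomials this gives $w_k = \sqrt{2k+1}$, which plugs directly into the weighted $\ell^1$ sketch above:

```python
# Sketch: Rauhut-Ward-style weights for an orthonormal Legendre basis,
# w_k = ||L_k||_inf = sqrt(2k + 1).
import numpy as np

n = 40
weights = np.sqrt(2 * np.arange(n) + 1)
```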
A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs
TLDR
A novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs is presented, and it is proved that, with minimal sample complexity, error estimates comparable to the best $s$-term and quasi-optimal approximations are achievable.
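
A simplified sketch of the joint-sparsity mechanism behind simultaneous approximation: couple the coefficient vectors of several parameter samples through an $\ell_{2,1}$ norm on shared rows. cvxpy is assumed, sizes are illustrative, and the cited paper's actual regularizer mixes $\ell_1$ and $\ell_{2,1}$ terms within a PDE discretization:

```python
# Sketch: simultaneous sparse recovery via a mixed l2,1 norm,
#   minimize  sum_j ||X[j, :]||_2   subject to  A X = Y,
# so that the K columns of X share a common row support.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
m, n, K = 30, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, K))
rows = rng.choice(n, 5, replace=False)            # shared support
X_true[rows] = rng.standard_normal((5, K))
Y = A @ X_true

X = cp.Variable((n, K))
cp.Problem(cp.Minimize(cp.sum(cp.norm(X, 2, axis=1))), [A @ X == Y]).solve()
```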
Improved bounds for sparse recovery from subsampled random convolutions
TLDR
It is shown that, when the sparsity $s$ is small enough, $m$ measurements of a subsampled random convolution with a subgaussian generator with independent entries suffice to recover $s$-sparse vectors in dimension $n$ with high probability, matching the well-known condition for recovery from standard Gaussian measurements.
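
A subsampled random convolution measurement, sketched with a Rademacher generator (one subgaussian choice) and FFT-based circular convolution; sizes are illustrative:

```python
# Sketch: m measurements as randomly chosen coordinates of the
# circular convolution of a random sign generator g with x.
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 256, 60, 6
g = rng.choice([-1.0, 1.0], n)                    # Rademacher generator
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real   # circular g * x
S = rng.choice(n, m, replace=False)               # random sampling set
y = conv[S] / np.sqrt(m)                          # m measurements
```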
Correcting for unknown errors in sparse high-dimensional function approximation
TLDR
The lesser-known square-root LASSO is better suited for high-dimensional approximation than the other procedures in the case of bounded noise, since it avoids (both theoretically and numerically) the need for parameter tuning.
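
A minimal sketch of the square-root LASSO itself: the data-fit term enters without a square, which is what decouples the optimal regularization parameter from the noise level (cvxpy assumed; `lam` is an illustrative input):

```python
# Sketch: square-root LASSO,
#   minimize  ||A x - y||_2 + lam * ||x||_1.
import cvxpy as cp

def sqrt_lasso(A, y, lam):
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(A @ x - y, 2) + lam * cp.norm1(x))).solve()
    return x.value
```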
...

References

SHOWING 1-10 OF 49 REFERENCES
Analyzing Weighted $\ell_1$ Minimization for Sparse Recovery With Nonuniform Sparse Models
TLDR
It is demonstrated through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted ℓ1 minimization outperforms the regular ℓ1 minimization substantially.
On sparse reconstruction from Fourier and Gaussian measurements
TLDR
This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements by showing that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation.
Compressive sensing for sparse approximations: constructions, algorithms, and analysis
TLDR
This thesis proposes and discusses a compressive sensing scheme with deterministic performance guarantees using deterministic, explicitly constructible expander-graph-based measurement matrices, shows that sparse signal recovery can be achieved with linear complexity, and introduces a unified null-space Grassmann-angle-based analytical framework.
Sparse Legendre expansions via ℓ1-minimization
Recovery of Functions of Many Variables via Compressive Sensing
TLDR
This work builds on ideas from compressive sensing and introduces a function model that involves “sparsity with respect to dimensions” in the Fourier domain, and shows that the number of required samples scales only logarithmically in the spatial dimension provided the function to be recovered follows the newly introduced high-dimensional function model.
Weighted ℓ1 minimization for sparse recovery with prior information
TLDR
This paper proposes a weighted ℓ1 minimization recovery algorithm and analyzes its performance using a Grassmann angle approach on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero.
Model-Based Compressive Sensing
TLDR
A model-based CS theory is introduced that parallels the conventional theory, provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees, and introduces a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that is the natural counterpart to the restricted isometry property of conventional CS.
A short note on compressed sensing with partially known signal support
On the Stability and Accuracy of Least Squares Approximations
TLDR
This work provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$.
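
The stability mechanism behind such criteria can be probed numerically: with an orthonormal basis and i.i.d. samples, the empirical Gram matrix concentrates near the identity once $m$ is large enough relative to the basis size. A sketch under those assumptions, with $\|G - I\| \le 1/2$ as the conventional proxy rather than the paper's exact statement:

```python
# Sketch: empirical Gram matrix of an orthonormal Legendre basis
# (w.r.t. the uniform measure on [-1, 1]) at i.i.d. uniform samples;
# ||G - I|| small indicates a stable least squares fit.
import numpy as np

rng = np.random.default_rng(6)
m, n = 400, 10
pts = rng.uniform(-1.0, 1.0, m)

V = np.column_stack([
    np.sqrt(2 * k + 1) * np.polynomial.legendre.Legendre.basis(k)(pts)
    for k in range(n)
])
G = V.T @ V / m
print("||G - I|| =", np.linalg.norm(G - np.eye(n), 2))
```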
Modified-CS: Modifying Compressive Sensing for Problems With Partially Known Support
TLDR
The idea of the proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T, and obtain sufficient conditions for exact reconstruction using modified-CS.
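
A compact sketch of that relaxation, with `T` standing for the known part of the support (cvxpy assumed):

```python
# Sketch of the modified-CS program:
#   minimize  ||x_{T^c}||_1   subject to  A x = y.
import cvxpy as cp
import numpy as np

def modified_cs(A, y, T):
    n = A.shape[1]
    Tc = np.setdiff1d(np.arange(n), T)   # indices outside the known support
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y]).solve()
    return x.value
```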
...