## 131 Citations

Infinite-Dimensional Compressed Sensing and Function Interpolation

- Computer ScienceFound. Comput. Math.
- 2018

A framework for function interpolation using compressed sensing is introduced that does not require a priori bounds on the expansion tail, in either its implementation or its theoretical guarantees, and that in the absence of noise leads to genuinely interpolatory approximations.

Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements

- Computer Science, MathematicsConstructive Approximation
- 2019

This work considers the sparse polynomial approximation of a multivariate function on a tensor-product domain from samples of both the function and its gradient, and shows that, for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger Sobolev norm, as opposed to the $L^2$-norm in the unaugmented case.

Recovery Analysis for Weighted ℓ1-Minimization Using a Null Space Property

- Computer ScienceArXiv
- 2014

A Weighted ℓ1-Minimization Approach For Wavelet Reconstruction of Signals and Images

- Computer ScienceArXiv
- 2019

This work recovers the wavelet coefficients associated with the functional representation of the object of interest by solving the proposed optimization problem, gives a specific choice of weights, and shows numerically that the chosen weights admit efficient recovery of objects of interest from either a set of subsamples or a noisy version of the data.

New Conditions on Stable Recovery of Weighted Sparse Signals via Weighted ℓ1 Minimization

- Computer ScienceCircuits Syst. Signal Process.
- 2018

It is demonstrated that the weighted RIP with small $\delta_{\mathbf{w},2s}$ implies the weighted ℓ1-robust NSP of order s.

Convergence bounds for nonlinear least squares and applications to tensor recovery

- Computer Science, MathematicsArXiv
- 2021

This work considers the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed, and derives a new bound that is able to utilize the regularity of the sought function.

Infinite-Dimensional ℓ1 Minimization and Function Approximation from Pointwise Data

- Computer Science, Mathematics
- 2015

It is shown that weighted ℓ1 minimization with Jacobi polynomials leads to an optimal method for approximating smooth, one-dimensional functions from scattered data.

A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs

- Computer Science, MathematicsESAIM: Mathematical Modelling and Numerical Analysis
- 2019

A novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs is presented, and it is proved that, with minimal sample complexity, error estimates comparable to the best s-term and quasi-optimal approximations are achievable.

Improved bounds for sparse recovery from subsampled random convolutions

- Computer Science, MathematicsThe Annals of Applied Probability
- 2018

It is shown that, provided the sparsity is small enough, m measurements from a random convolution with a subgaussian generator with independent entries suffice to recover s-sparse vectors in dimension n with high probability, matching the well-known condition for recovery from standard Gaussian measurements.

Correcting for unknown errors in sparse high-dimensional function approximation

- Computer ScienceNumerische Mathematik
- 2019

The lesser-known square-root LASSO is shown to be better suited for high-dimensional approximation than the other procedures in the case of bounded noise, since it avoids (both theoretically and numerically) the need for parameter tuning.

## References

Showing 1–10 of 49 references

Analyzing Weighted ℓ1 Minimization for Sparse Recovery With Nonuniform Sparse Models

- Computer ScienceIEEE Transactions on Signal Processing
- 2011

It is demonstrated through rigorous analysis and simulations that, for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted ℓ1 minimization outperforms the regular ℓ1 minimization substantially.

On sparse reconstruction from Fourier and Gaussian measurements

- Computer Science, Mathematics
- 2008

This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements by showing that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation.

Compressive sensing for sparse approximations: constructions, algorithms, and analysis

- Computer Science
- 2010

This thesis proposes and discusses a compressive sensing scheme with deterministic performance guarantees using deterministic, explicitly constructible expander-graph-based measurement matrices, shows that sparse signal recovery can be achieved with linear complexity, and introduces a unified null-space Grassmann angle-based analytical framework.

Sparse Legendre expansions via ℓ1-minimization

- Mathematics, Computer ScienceJ. Approx. Theory
- 2012

RECOVERY OF FUNCTIONS OF MANY VARIABLES VIA COMPRESSIVE SENSING

- Mathematics, Computer Science
- 2011

This work builds on ideas from compressive sensing, introduces a function model that involves "sparsity with respect to dimensions" in the Fourier domain, and shows that the number of required samples scales only logarithmically in the spatial dimension provided the function to be recovered follows the newly introduced high-dimensional function model.

Weighted ℓ1 minimization for sparse recovery with prior information

- Computer Science2009 IEEE International Symposium on Information Theory
- 2009

This paper proposes a weighted ℓ1 minimization recovery algorithm and analyzes its performance using a Grassmann angle approach on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero.

Model-Based Compressive Sensing

- Computer ScienceIEEE Transactions on Information Theory
- 2010

A model-based CS theory is introduced that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees, together with a new class of structured compressible signals and a new sufficient condition for robust structured compressible signal recovery that is the natural counterpart to the restricted isometry property of conventional CS.

A short note on compressed sensing with partially known signal support

- Computer ScienceSignal Process.
- 2010

On the Stability and Accuracy of Least Squares Approximations

- Mathematics, Computer ScienceFound. Comput. Math.
- 2013

This work provides a criterion on m that describes the amount of regularization needed to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\rho_X)$, is comparable to the best approximation error of f by elements from $V_m$.

Modified-CS: Modifying Compressive Sensing for Problems With Partially Known Support

- MathematicsIEEE Transactions on Signal Processing
- 2010

The idea of the proposed solution (Modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of the partially known support T; sufficient conditions for exact reconstruction using Modified-CS are obtained.