Corpus ID: 219966527

The Generalized Lasso with Nonlinear Observations and Generative Priors

@article{Liu2020TheGL,
  title={The Generalized Lasso with Nonlinear Observations and Generative Priors},
  author={Zhaoqiang Liu and J. Scarlett},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.12415}
}
In this paper, we study the problem of signal estimation from noisy non-linear measurements when the unknown $n$-dimensional signal is in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs. We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models. In addition, we consider the impact of adversarial corruptions on these measurements. Our…
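For intuition, here is a minimal numerical sketch of the estimator family the paper studies: a constrained least-squares (generalized Lasso) fit over the range of a generative model, with 1-bit observations as the non-linearity. Everything concrete below (the toy two-layer ReLU network standing in for the generative model, the numerical-gradient solver, the step size, and the latent radius) is an illustrative assumption of ours, not the paper's construction; the paper's guarantees concern the constrained minimizer itself, not any particular solver.

    # Generalized Lasso with a generative prior: minimize ||y - A G(z)||_2
    # over bounded latent codes z, here via projected gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 100, 5, 400                 # ambient dim, latent dim, measurements

    # Toy stand-in for an L-Lipschitz generative model: a fixed random ReLU net.
    W1 = rng.standard_normal((50, k))
    W2 = rng.standard_normal((n, 50))
    def G(z):
        return W2 @ np.maximum(W1 @ z, 0.0)

    z_true = rng.standard_normal(k)
    x_true = G(z_true)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # (sub-)Gaussian measurement matrix
    y = np.sign(A @ x_true)                       # 1-bit non-linear observations

    def objective(z):
        r = y - A @ G(z)
        return 0.5 * float(r @ r)

    def num_grad(z, eps=1e-5):
        # central-difference gradient; fine for k = 5 in a demo
        g = np.zeros(k)
        for i in range(k):
            e = np.zeros(k); e[i] = eps
            g[i] = (objective(z + e) - objective(z - e)) / (2 * eps)
        return g

    z, radius, step = np.zeros(k), 10.0, 0.05
    for _ in range(500):
        z = z - step * num_grad(z)
        nz = np.linalg.norm(z)
        if nz > radius:                    # project back onto the radius-r ball
            z *= radius / nz
    x_hat = G(z)

    # 1-bit measurements lose the scale, so compare directions only.
    u = x_hat / np.linalg.norm(x_hat)
    v = x_true / np.linalg.norm(x_true)
    print(f"direction error: {np.linalg.norm(u - v):.3f}")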

Citations

A Unified Approach to Uniform Signal Recovery From Non-Linear Observations
A unified approach to uniform recovery from non-linear observations under the assumption of i.i.d. sub-Gaussian measurement vectors is developed, showing that a simple least-squares estimator with any convex constraint can serve as a universal recovery strategy that is outlier-robust and does not require explicit knowledge of the underlying non-linearity.
Taking the Edge off Quantization: Projected Back Projection in Dithered Compressive Sensing
It is shown here that a simple uniform scalar quantizer is compatible with a large class of random sensing matrices known to respect, with high probability, the restricted isometry property (RIP), and the predicted error decay as the number of measurements increases is validated numerically.
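For concreteness, the uniform scalar quantizer with dithering that this paper studies takes only a few lines; the sketch below is ours, with the resolution delta chosen arbitrarily and the dither drawn uniformly on [0, delta) per measurement, as is standard in dithered compressive sensing.

    # Dithered uniform scalar quantization with midpoint reconstruction.
    import numpy as np

    rng = np.random.default_rng(1)
    delta = 0.5                                   # quantizer resolution (arbitrary)
    x = rng.standard_normal(8)                    # stand-in for raw measurements
    tau = rng.uniform(0.0, delta, size=x.shape)   # independent dither per entry
    q = delta * np.floor((x + tau) / delta) + delta / 2
    print(np.column_stack([x, q]))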
Sparse Sliced Inverse Regression via Lasso
  • Q. Lin, Z. Zhao, J. Liu
  • Mathematics, Medicine
  • Journal of the American Statistical Association
  • 2019
The resulting algorithm, Lasso-SIR, is shown to be consistent and to achieve the optimal convergence rate under certain sparsity conditions when $p$ is of an appropriate order in the generalized signal-to-noise ratio $\lambda$.
A Simple Bound on the BER of the MAP Decoder for Massive MIMO Systems
This paper proves a non-trivial upper bound on the bit-error rate (BER) of the MAP detector for BPSK signal transmission under an equal-power condition, and shows that the bound is approximately tight at high SNR.
Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
A qualitative thematic analysis of a sample of broader impact statements written for the NeurIPS 2020 conference identifies themes related to how consequences are expressed, the areas of impact expressed, and researchers' recommendations for mitigating negative consequences in the future.
Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors
It is demonstrated that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models, given sufficiently many Gaussian measurements.
On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise
  • Jie Shen
  • Computer Science, Mathematics
  • ICML
  • 2021
Under the agnostic model, where no assumption is made on the noise rate $\nu$, the active learner achieves an error rate of $O(\mathrm{OPT}) + \epsilon$ with the same running time and label and sample complexity, where $\mathrm{OPT}$ is the best possible error rate achievable by any homogeneous halfspace.
Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors
  • Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, J. Scarlett
  • Computer Science, Mathematics
  • ArXiv
  • 2021
This paper provides recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing with randomly signed partial Gaussian circulant matrices and generative models, making use of a practical iterative algorithm.
Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors
This paper provides recovery guarantees with order-optimal sample complexity bounds for phase retrieval with generative priors, proposes a practical spectral initialization method motivated by these findings, and experimentally observes significant performance gains over various existing spectral initialization methods for sparse phase retrieval.
Estimating covariance and precision matrices along subspaces
We study the accuracy of estimating the covariance and the precision matrix of a $D$-variate sub-Gaussian distribution along a prescribed subspace or direction using the finite sample covariance. Our…

References

Showing 1-10 of 49 references
Compressed Sensing using Generative Models
This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee.
Lower Bounds for Compressed Sensing with Generative Models
A ReLU-based neural network is constructed that generalizes sparsity as a representation of structure in generative models, and it is shown that the generative-model generalization holds even for the more relaxed goal of nonuniform recovery.
The Generalized Lasso for Sub-Gaussian Measurements With Dithered Quantization
The theoretical results shed light on the appropriate choice of the range of values of the dithering signal, accurately capture the error dependence on the problem parameters, and show that the G-Lasso with one-bit uniformly dithered measurements incurs only a logarithmic rate loss compared to full-precision measurements.
Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
It is shown that an $s$-sparse signal in $\mathbb{R}^n$ can be accurately estimated from $m = O(s \log(n/s))$ single-bit measurements using a simple convex program, and that the same convex program works for virtually all generalized linear models, in which the link function may be unknown.
LASSO with Non-linear Measurements is Equivalent to One With Linear Measurements
The estimation performance of the Generalized LASSO with non-linear measurements is asymptotically the same as that of one whose measurements are linear, $y_i = \mu a_i^T x_0 + \sigma z_i$, with $\mu = \mathbb{E}[\gamma g(\gamma)]$, $\sigma^2 = \mathbb{E}[(g(\gamma) - \mu\gamma)^2]$, and $\gamma$ standard normal.
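To make this equivalence concrete, here is the standard computation of $\mu$ and $\sigma^2$ for the 1-bit link $g(x) = \operatorname{sign}(x)$ (a worked instance of ours, not quoted from the paper):

\[
\mu = \mathbb{E}[\gamma \, \operatorname{sign}(\gamma)] = \mathbb{E}|\gamma| = \sqrt{2/\pi},
\qquad
\sigma^2 = \mathbb{E}\big[(\operatorname{sign}(\gamma) - \mu \gamma)^2\big]
= 1 - 2\mu\,\mathbb{E}|\gamma| + \mu^2\,\mathbb{E}[\gamma^2] = 1 - \mu^2 = 1 - \tfrac{2}{\pi}.
\]

So, in this asymptotic sense, 1-bit observations behave like linear ones attenuated by $\sqrt{2/\pi}$ with effective noise variance $1 - 2/\pi$.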
On the statistical rate of nonlinear recovery in generative models with heavy-tailed data
This paper considers the scenario where the measurements are non-Gaussian and subject to possibly unknown nonlinear transformations, and the responses are heavy-tailed; it proposes new estimators via score functions based on Stein's identity and proves a sample size bound of $m = O(k \varepsilon^{-2} \log(L/\varepsilon))$ for achieving an $\varepsilon$ error, in the form of exponential concentration inequalities.
On the Power of Compressed Sensing with Generative Models
It is shown that generative models generalize sparsity as a representation of structure, by constructing a ReLU-based neural network with 2 hidden layers and $O(n)$ activations per layer whose range is precisely the set of all $k$-sparse vectors.
The Generalized Lasso With Non-Linear Observations
The first theoretical accuracy guarantee for 1-bit compressed sensing with an unknown covariance matrix of the measurement vectors is given, considering the single-index model of non-linearity and allowing the non-linearity to be discontinuous, not one-to-one, and even unknown.
Fast and Reliable Parameter Estimation from Nonlinear Observations
A framework is developed for characterizing time-data tradeoffs for a variety of parameter estimation algorithms when the nonlinear function $f$ is unknown, and it is shown that a projected gradient descent scheme converges at a linear rate to a reliable solution with a near-minimal number of samples.
Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information
It is shown that, if precise information about the value $f(x_0)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation, and the reconstruction error becomes proportional to the “sparsity” of the signal rather than to the ambient dimension of the noise vector.