Message-Passing Estimation from Quantized Samples

@article{Kamilov2011MessagePassingEF,
  title={Message-Passing Estimation from Quantized Samples},
  author={Ulugbek S. Kamilov and Vivek K. Goyal and Sundeep Rangan},
  journal={ArXiv},
  year={2011},
  volume={abs/1105.6368}
}
Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal—sometimes greatly so. This paper develops generalized approximate message passing (GAMP) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. GAMP is a recently-developed class of…
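The measurement model in the abstract can be illustrated with a minimal NumPy sketch. The dimensions, noise level, and quantizer step size below are illustrative, and only a simple linear (pseudoinverse) baseline is shown — the GAMP algorithm itself is not implemented here; the paper's point is that GAMP improves on such linear estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n-dim signal, m quantized linear measurements.
n, m = 100, 200                        # overcomplete expansion (m > n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n)             # i.i.d. Gaussian signal
w = 0.01 * rng.standard_normal(m)      # pre-quantization noise

# Regular uniform scalar quantizer with step size delta
# (midpoint reconstruction of each cell).
delta = 0.5
y = delta * np.floor((A @ x + w) / delta) + delta / 2

# Simple linear baseline: least-squares / pseudoinverse estimate.
x_lin = np.linalg.pinv(A) @ y
mse = np.mean((x_lin - x) ** 2)
```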


Generalized approximate message passing estimation from quantized samples
  • U. Kamilov, Vivek K Goyal, S. Rangan
  • Computer Science
    2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
  • 2011
TLDR
This paper summarizes the development of generalized approximate message passing (GAMP) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular.
Generalized approximate message passing for estimation with random linear mixing
  • S. Rangan
  • Computer Science
    2011 IEEE International Symposium on Information Theory Proceedings
  • 2011
TLDR
G-AMP incorporates general measurement channels, and the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations, similar to the AWGN output channel case.
Optimal quantization for compressive sensing under message passing reconstruction
TLDR
This work considers the optimal quantization of compressive sensing measurements along with estimation from quantized samples using generalized approximate message passing (GAMP), designs mean-square optimal scalar quantizers for GAMP signal reconstruction, and empirically demonstrates the superior error performance of the resulting quantizers.
Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering
TLDR
This work shows that, for a scalar and uniform quantization, provided that a uniform random vector, or "random dithering", is added to the compressive measurements of a low-complexity signal, a large class of random matrix constructions known to respect the restricted isometry property (RIP) are made "compatible" with this quantizer.
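The "random dithering" described in the TLDR above can be illustrated with a minimal, self-contained sketch. Subtractive dithering is assumed, and the step size and sample count are illustrative; with uniform dither, the quantization error becomes uniform on [-delta/2, delta/2] and independent of the input:

```python
import numpy as np

rng = np.random.default_rng(1)

delta = 0.25                              # quantizer step size (illustrative)
u = rng.uniform(0, delta, size=10_000)    # uniform random dither
s = rng.standard_normal(10_000)           # values to quantize

# Subtractively dithered uniform quantization: add the dither before
# rounding, subtract it again afterwards.
q = delta * np.round((s + u) / delta) - u
err = q - s
```

The empirical error statistics then match the uniform model: zero mean and variance delta**2 / 12.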
Information-Theoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing
TLDR
An approximate message passing (AMP) algorithm is used and a rigorous proof is given that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d̄(p_X).
Efficient reconstruction of sparse vectors from quantized observations
TLDR
This work presents an efficient message-passing-like iterative algorithm for estimating a vector from quantized linear noisy observations that does not require any prior information about the sparse input and can be applied for arbitrary quantizer resolution.
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
TLDR
This paper presents a formula that characterizes the allowed undersampling of generalized sparse objects, and proves that this formula follows from state evolution and present numerical results validating it in a wide range of settings.
One-Bit Measurements With Adaptive Thresholds
We introduce a new method for adaptive one-bit quantization of linear measurements and propose an algorithm for the recovery of signals based on generalized approximate message passing (GAMP). Our…
MMSE Estimation of Sparse Lévy Processes
TLDR
This work investigates a stochastic signal-processing framework for signals with sparse derivatives, where the samples of a Lévy process are corrupted by noise, and proposes a novel non-iterative implementation of the MMSE estimator based on the belief-propagation (BP) algorithm performed in the Fourier domain.
State Evolution for General Approximate Message Passing Algorithms, with Applications to Spatial Coupling
TLDR
This work covers the analysis of generalized AMP, introduced by Rangan, and of AMP reconstruction in compressed sensing with spatially coupled sensing matrices, and the proof technique builds on the one of [BM11], while simplifying and generalizing several steps.
...

References

SHOWING 1-10 OF 69 REFERENCES
Generalized approximate message passing for estimation with random linear mixing
  • S. Rangan
  • Computer Science
    2011 IEEE International Symposium on Information Theory Proceedings
  • 2011
TLDR
G-AMP incorporates general measurement channels, and the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations, similar to the AWGN output channel case.
Asymptotic Mean-Square Optimality of Belief Propagation for Sparse Linear Systems
TLDR
The mean squared error of estimating each symbol of the input vector using BP is proved to be equal to the MMSE of estimating the same symbol through a scalar Gaussian channel with some degradation in the signal-to-noise ratio (SNR).
Random Sparse Linear Systems Observed Via Arbitrary Channels: A Decoupling Principle
TLDR
This paper extends the authors' previous decoupling result for Gaussian channels to arbitrary channels, which was based on an earlier work of Montanari and Tse; a rigorous justification is provided for the generalization of some results obtained via statistical physics methods.
Universal Rate-Efficient Scalar Quantization
  • P. Boufounos
  • Computer Science
    IEEE Transactions on Information Theory
  • 2012
TLDR
It is demonstrated that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals, and establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
Message-passing algorithms for compressed sensing
TLDR
A simple costless modification to iterative thresholding is introduced making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures, inspired by belief propagation in graphical models.
On the whiteness of high-resolution quantization errors
TLDR
It is shown that the quantization errors resulting from independent quantizations of dependent real random variables become asymptotically uncorrelated if the joint Fisher information under translation of the two variables is finite and the quantization cells shrink uniformly as the distortion tends to zero.
Hybrid Approximate Message Passing with Applications to Structured Sparsity
TLDR
A systematic framework for incorporating Gaussian and quadratic approximations of message passing algorithms on graphs in general graphical models is presented, which can yield significantly simpler implementations of sum-product and max-sum loopy belief propagation.
Estimation with random linear mixing, belief propagation and compressed sensing
  • S. Rangan
  • Computer Science
    2010 44th Annual Conference on Information Sciences and Systems (CISS)
  • 2010
TLDR
This paper presents detailed equations for implementing relaxed BP for general channels and shows that relaxed BP has identical asymptotic large-sparse-limit behavior to standard BP, as predicted by Guo and Wang's state evolution (SE) equations.
On the Rate-Distortion Performance of Compressed Sensing
TLDR
It is shown that random measurements induce an additive logarithmic rate penalty, i.e., at high rates the performance with rate R + O(log R) and random measurements is equal to the performance with rate R and deterministic measurements matched to the source.
...