Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that …
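A minimal sketch of the recovery setting described above, assuming a Gaussian measurement matrix and greedy orthogonal matching pursuit as the reconstruction algorithm (the information-theoretic analysis of the excerpt is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse vector x in R^n observed through m < n random Gaussian measurements y = A x.
n, m, k = 128, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x

# Orthogonal matching pursuit: greedily select the column most correlated with the
# residual, then re-fit the coefficients on the selected support by least squares.
def omp(A, y, k):
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```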
Predictive quantization is a simple and effective method for encoding slowly varying signals and is widely used in speech and audio coding. It has been known qualitatively that leaving correlation in the encoded samples can improve estimation at the decoder when encoded samples are subject to erasure. However, performance estimation in this case …
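A minimal sketch of first-order predictive quantization (DPCM) for a correlated Gauss-Markov source; the predictor coefficient, step size, and source model below are illustrative assumptions, not the configuration analyzed in the paper:

```python
import numpy as np

def dpcm_encode(x, step, rho):
    """Quantize the prediction error e[n] = x[n] - rho * xq[n-1] with a uniform quantizer."""
    indices = np.empty(len(x), dtype=int)
    xq_prev = 0.0
    for n, sample in enumerate(x):
        e = sample - rho * xq_prev             # prediction error
        q = int(round(e / step))               # uniform scalar quantizer index
        indices[n] = q
        xq_prev = rho * xq_prev + q * step     # reconstruction the decoder will also form
    return indices

def dpcm_decode(indices, step, rho):
    xq = np.empty(len(indices))
    xq_prev = 0.0
    for n, q in enumerate(indices):
        xq_prev = rho * xq_prev + q * step
        xq[n] = xq_prev
    return xq

# A slowly varying (highly correlated) Gauss-Markov source as a toy input.
rng = np.random.default_rng(1)
rho = 0.95
x = np.zeros(1000)
for n in range(1, len(x)):
    x[n] = rho * x[n - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

idx = dpcm_encode(x, step=0.05, rho=rho)
x_hat = dpcm_decode(idx, step=0.05, rho=rho)
print("MSE:", np.mean((x - x_hat) ** 2))
```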
We consider the problem of estimating the parameters of a signal when the sampling instants are perturbed by signal-independent timing noise. Classical techniques model the timing noise as inducing a signal-independent additive white Gaussian noise term on the sample values. We reject this simplification and give alternative methodologies. …
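A small simulation, assuming a sinusoidal signal and Gaussian timing jitter, showing why the additive-noise simplification is questionable: to first order the sample error is the signal derivative times the jitter, hence signal-dependent rather than white and signal-independent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample x(t) = sin(2*pi*f*t) at nominally uniform instants perturbed by timing noise.
f, T, N = 5.0, 0.01, 200
sigma_jitter = 0.0005
n = np.arange(N)
jitter = sigma_jitter * rng.standard_normal(N)
ideal = np.sin(2 * np.pi * f * n * T)
observed = np.sin(2 * np.pi * f * (n * T + jitter))

# First-order expansion: observed - ideal ~= x'(t_n) * jitter_n.
error = observed - ideal
derivative = 2 * np.pi * f * np.cos(2 * np.pi * f * n * T)
print("corr(error, x'(t)*jitter):", np.corrcoef(error, derivative * jitter)[0, 1])
```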
Jump linear systems are linear state-space systems with random time variations driven by a finite Markov chain. These models are widely used in nonlinear control and, more recently, in the study of communication over lossy channels. This paper considers a general jump linear estimation problem of estimating an unknown signal from an observed signal, where …
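A toy sketch of a scalar jump linear system driven by a two-state Markov chain (a crude "packet received / packet lost" model), estimated with a mode-aware Kalman filter; the system matrices and transition probabilities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar jump linear system: x[t+1] = a[theta] x[t] + w[t],  y[t] = c[theta] x[t] + v[t],
# with theta a two-state Markov chain; mode 1 erases the observation (c = 0).
a = np.array([0.95, 0.95])
c = np.array([1.0, 0.0])
P_markov = np.array([[0.9, 0.1],
                     [0.5, 0.5]])
q_var, r_var, T = 0.01, 0.01, 300

theta = np.zeros(T, dtype=int)
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    theta[t] = rng.choice(2, p=P_markov[theta[t - 1]])
    x[t] = a[theta[t]] * x[t - 1] + np.sqrt(q_var) * rng.standard_normal()
    y[t] = c[theta[t]] * x[t] + np.sqrt(r_var) * rng.standard_normal()

# Time-varying Kalman filter that uses the observed mode sequence.
x_hat, P = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    x_pred = a[theta[t]] * x_hat
    P_pred = a[theta[t]] ** 2 * P + q_var
    if c[theta[t]] != 0.0:
        K = P_pred * c[theta[t]] / (c[theta[t]] ** 2 * P_pred + r_var)
        x_hat = x_pred + K * (y[t] - c[theta[t]] * x_pred)
        P = (1 - K * c[theta[t]]) * P_pred
    else:                       # erased observation: prediction step only
        x_hat, P = x_pred, P_pred
    est[t] = x_hat

print("MSE:", np.mean((x - est) ** 2))
```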
There are several applications in information transfer and storage where the order of source letters is irrelevant at the destination. For such source-destination pairs, it suffices to communicate the multiset of letters rather than perform the more difficult task of communicating the sequence. In this work, we study universal multiset communication. For classes of …
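A small illustration, on a toy string, of the rate that ordering information costs: relative to the multiset, the sequence carries roughly log2 of the multinomial coefficient (the number of distinct orderings) in extra bits:

```python
from collections import Counter
from math import lgamma, log

def log2_factorial(n: int) -> float:
    return lgamma(n + 1) / log(2)

def ordering_bits(seq) -> float:
    """log2 of N! / prod(counts!): bits needed to specify the order given the multiset."""
    counts = Counter(seq)
    return log2_factorial(len(seq)) - sum(log2_factorial(c) for c in counts.values())

seq = list("abracadabra")
print("bits spent on order information:", ordering_bits(seq))
```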
If a signal x is known to have a sparse representation with respect to a frame, the signal can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. This paper analyzes the mean squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal. …
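A sketch of the denoising scheme, assuming a frame built from the identity concatenated with a random orthonormal basis and using greedy OMP as a stand-in for the best k-term sparse approximation analyzed in the excerpt:

```python
import numpy as np

rng = np.random.default_rng(4)

# Frame with 2n unit-norm atoms: the identity basis plus a random orthonormal basis.
n, k, sigma = 64, 3, 0.05
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = np.hstack([np.eye(n), Q])

# A signal that is exactly k-sparse with respect to the frame, observed in white noise.
support = rng.choice(D.shape[1], size=k, replace=False)
x = D[:, support] @ rng.standard_normal(k)
y = x + sigma * rng.standard_normal(n)

# Denoise by computing a k-term sparse approximation of y (greedy selection + least squares).
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(D.T @ residual))))
    coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
    residual = y - D[:, idx] @ coef
x_hat = D[:, idx] @ coef

print("noisy MSE:   ", np.mean((y - x) ** 2))
print("denoised MSE:", np.mean((x_hat - x) ** 2))
print("same sparsity pattern:", set(idx) == set(support.tolist()))
```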
In traditional modes of lossy compression, the implicit aim is to attain low distortion letter-by-letter on a vector of source letters X_1^N = (X_1, X_2, ..., X_N) ∈ ℝ^N. We consider here instead the goal of estimating at the destination a function G(X_1^N) of the source data …
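A toy illustration of the functional viewpoint with G taken to be the maximum (an assumed example function, not one from the paper): when the destination only needs G, the encoder can compute and quantize that single value instead of coding all N letters:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy example with G(X_1^N) = max(X_1, ..., X_N).
N, step = 1000, 0.01
x = rng.standard_normal(N)

g = np.max(x)
g_hat = step * np.round(g / step)      # one quantized value describes G
x_hat = step * np.round(x / step)      # letter-by-letter coding of the whole vector

print("values transmitted:", 1, "vs", N)
print("distortion on G:", (g - g_hat) ** 2, "vs", (g - np.max(x_hat)) ** 2)
```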
It has been proven that if a solution exists to an inhomogeneous biharmonic equation in the plane with the values of the solution, its normal derivative, and its Laplacian prescribed on the boundary, then the domain is a disk. This result has been extended to N dimensions by the Serrin reflection method. Here we …
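A schematic statement of the overdetermined problem, assuming constant boundary data a, b, c and a constant right-hand side as in Serrin-type problems (the exact normalization follows the paper):

```latex
\[
  \Delta^{2} u = 1 \ \text{in } \Omega, \qquad
  u = a, \quad \frac{\partial u}{\partial \nu} = b, \quad \Delta u = c \ \text{on } \partial\Omega
  \;\Longrightarrow\; \Omega \ \text{is a disk (a ball in } \mathbb{R}^{N}\text{)}.
\]
```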
This paper introduces a family of integer-to-integer approximations to the Cartesian-to-polar coordinate transformation and analyzes its application to lossy compression. A high-rate analysis is provided for an encoder that first uniformly scalar quantizes, then transforms to "integer polar coordinates," and finally separately entropy codes angle and …
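A hypothetical integer-to-integer polar map for illustration only (not necessarily the family analyzed in the paper): uniformly scalar quantize, round the radius, and round the angle with a radius-dependent angular step so that angular cells stay roughly square; the two integer indices would then be entropy coded separately:

```python
import numpy as np

def to_integer_polar(x, y, step, angular_density=1.0):
    """Map a point to (integer radius, integer angle index); an assumed construction."""
    i = int(np.round(x / step))
    j = int(np.round(y / step))
    r = int(np.round(np.hypot(i, j)))
    n_angles = max(1, int(np.round(2 * np.pi * r * angular_density)))  # more angle bins at larger radius
    a = int(np.round(np.arctan2(j, i) / (2 * np.pi) * n_angles)) % n_angles
    return r, a

def from_integer_polar(r, a, step, angular_density=1.0):
    n_angles = max(1, int(np.round(2 * np.pi * r * angular_density)))
    theta = 2 * np.pi * a / n_angles
    return r * step * np.cos(theta), r * step * np.sin(theta)

x, y = 1.27, -0.43
r, a = to_integer_polar(x, y, step=0.05)
print((r, a), from_integer_polar(r, a, step=0.05))
```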