# Cluster-based probability model applied to image restoration and compression

@article{Popat1994ClusterbasedPM,
  title={Cluster-based probability model applied to image restoration and compression},
  author={Ashok Popat and Rosalind W. Picard},
  journal={Proceedings of ICASSP '94. IEEE International Conference on Acoustics, Speech and Signal Processing},
  year={1994},
  volume={v},
  pages={V/381-V/384 vol.5}
}

The performance of a statistical signal processing system is determined in large part by the accuracy of the probabilistic model it employs. Accurate modeling often requires working in several dimensions, but doing so can introduce dimensionality-related difficulties. A previously introduced model circumvents some of these difficulties while maintaining accuracy sufficient to account for much of the high-order, nonlinear statistical interdependence of samples. Properties of this model are…
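The cluster-based idea can be illustrated with a small mixture-density sketch. This is not the authors' exact model: the toy AR(1) "pixel" data, the k-means clustering, the diagonal-covariance Gaussian kernels, and K = 8 clusters are all illustrative assumptions. The sketch fits clusters to joint neighbor/sample vectors, forms a mixture density, and slices it to get a conditional distribution for the current sample given its causal neighbor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: current sample depends on its causal neighbor (AR(1)).
x_prev = rng.normal(0.0, 1.0, 5000)
x_curr = 0.8 * x_prev + rng.normal(0.0, 0.3, 5000)
data = np.column_stack([x_prev, x_curr])            # 2-D neighborhood vectors

def fit_clusters(data, K, iters=20):
    """Crude k-means; each cluster keeps a weight, mean, and diagonal variance."""
    centers = data[rng.choice(len(data), K, replace=False)]
    for _ in range(iters):
        labels = ((data[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([data[labels == k].mean(0) if (labels == k).any()
                            else centers[k] for k in range(K)])
    weights = np.array([(labels == k).mean() for k in range(K)])
    variances = np.array([data[labels == k].var(0) + 1e-3 if (labels == k).any()
                          else np.ones(data.shape[1]) for k in range(K)])
    return weights, centers, variances

def mixture_pdf(v, weights, centers, variances):
    """Mixture-of-Gaussians joint density at the 2-D point v."""
    d2 = ((v - centers) ** 2 / variances).sum(1)
    norm = np.sqrt((2 * np.pi) ** 2 * variances.prod(1))
    return float(weights @ (np.exp(-0.5 * d2) / norm))

w, c, s = fit_clusters(data, K=8)

# Conditional p(x_curr | x_prev = 1.0): slice the joint and renormalize.
grid = np.linspace(-4.0, 4.0, 401)
dx = grid[1] - grid[0]
joint = np.array([mixture_pdf(np.array([1.0, g]), w, c, s) for g in grid])
cond = joint / (joint.sum() * dx)
```

Here the cluster mixture stands in for the paper's cluster-based density; the actual model conditions on a larger causal neighborhood, but the slice-and-renormalize step is the same in spirit.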

## 25 Citations

Exaggerated consensus in lossless image compression

- Computer Science · Proceedings of 1st International Conference on Image Processing
- 1994

This work considers a means of adaptively combining several low-order conditional probability distributions into a single higher-order estimate, based on their degree of agreement, in the context of image compression.

Conjoint probabilistic subband modeling

- Computer Science
- 1997

A new approach to high-order-conditional probability density estimation is developed, based on a partitioning of conditioning space via decision trees, which shows that the appropriate tradeoff between spatial and spectral localization in linear preprocessing shifts towards greater spatial localization when subbands are processed in a way that exploits interdependence.

Spectral Classified Vector Quantization (SCVQ) for Multispectral Images

- Computer Science
- 2002

The approach proposed in this paper aims at combining a compression and a classification methodology into a single scheme, in which visual distortion and classification accuracy can be balanced a priori according to the requirements of the target application.

Combining Image Compression and Classification Using Vector Quantization

- Computer Science · IEEE Trans. Pattern Anal. Mach. Intell.
- 1995

A variety of examples demonstrate that the proposed method can provide classification ability close to or superior to learning VQ while simultaneously providing superior compression performance.

Lossy Compression, Classification, and Regression

- Computer Science
- 1999

The traditional goal of data compression is to speed transmission or to minimize storage requirements of a signal while preserving the best possible quality of reproduction. This is usually…

Novel cluster-based probability model for texture synthesis, classification, and compression

- Computer Science · Other Conferences
- 1993

A new probabilistic modeling technique for high-dimensional vector sources is presented, and its application to the problems of texture synthesis, classification, and compression is considered.

Enhancement of lossy compressed images by modeling with Bernstein polynomials

- Computer Science · Proceedings, International Conference on Image Processing
- 2002

A non-iterative post-processing enhancement technique that mitigates quantization noise while preserving strong edges and textures, achieving significant visual improvement at a computational complexity of O(n).

On entropy-constrained vector quantization using gaussian mixture models

- Computer Science · IEEE Transactions on Communications
- 2008

A flexible and low-complexity entropy-constrained vector quantizer (ECVQ) scheme based on Gaussian mixture models, lattice quantization, and arithmetic coding is presented; it achieves comparable performance at rates relevant for speech coding, with lower computational complexity.

Bayes risk weighted vector quantization with posterior estimation for image compression and classification

- Computer Science · IEEE Trans. Image Process.
- 1996

This work investigates several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks and introduces a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design.

## References

Showing 1-10 of 27 references.

High-resolution quantization theory and the vector quantizer advantage

- Computer Science · IEEE Trans. Inf. Theory
- 1989

The authors consider how much performance advantage a fixed-dimensional vector quantizer can gain over a scalar quantizer. They collect several results from high-resolution or asymptotic (in rate)…

Novel cluster-based probability model for texture synthesis, classification, and compression

- Computer Science · Other Conferences
- 1993

A new probabilistic modeling technique for high-dimensional vector sources is presented, and its application to the problems of texture synthesis, classification, and compression is considered.

A practical approach to fractal-based image compression

- Computer Science · [1991] Proceedings, Data Compression Conference
- 1991

A technique for image compression based on a very simple type of iterative fractal is used to decompose an image into bands containing information at different scales, which then serve as the basis for a predictive coder.

Image coding using lattice vector quantization of wavelet coefficients

- Computer Science · [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing
- 1991

The purpose of this work is to propose a new scheme for vector quantization of wavelet coefficients based on lattice vector quantization; the application of the D4, E8, and Barnes-Wall Λ16 lattices is investigated.

Compression of Black-White Images with Arithmetic Coding

- Computer Science
- 1981

A new approach for black and white image compression is described, with which the eight CCITT test documents can be compressed in a lossless manner 20-30 percent better than with the best existing…

Optimal nonlinear interpolative vector quantization

- Computer Science · IEEE Trans. Commun.
- 1990

The range of applicability of nonlinear interpolative vector quantization is illustrated with examples in which optimal nonlinear estimation from quantized data is needed for efficient signal compression.

Lattice vector quantization for image coding

- Computer Science · International Conference on Acoustics, Speech, and Signal Processing
- 1989

The authors investigate the application of several lattice vector quantizers to the quantization of 2-D DCT (discrete cosine transform) coefficients at rates less than 1.0 bit/pixel and find the Z^16 lattice outperforms the simplified LBG vector quantizer in both SNR and reconstructed image quality.

Vector quantization for entropy coding of image subbands

- Computer Science, Engineering · IEEE Trans. Image Process.
- 1992

The authors show that full-search entropy-constrained vector quantization of image subbands results in the best performance, but is computationally expensive.

An Algorithm for Vector Quantizer Design

- Computer Science · IEEE Trans. Commun.
- 1980

An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The basic properties of the…
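The algorithm referred to here is the generalized Lloyd (LBG) design. A minimal sketch of its splitting variant on an assumed toy 2-D Gaussian training set (the function name `lbg` and all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def lbg(training, splits, iters=50, tol=1e-4):
    """Generalized Lloyd (LBG) codebook design by splitting:
    start from the global centroid, double the codebook with a
    small perturbation, then alternate the nearest-neighbor and
    centroid conditions until distortion stops improving."""
    codebook = training.mean(axis=0, keepdims=True)
    delta = 1e-2 * training.std(axis=0)
    distortion = np.inf
    for _ in range(splits):                         # 1 -> 2 -> 4 -> ... codewords
        codebook = np.vstack([codebook + delta, codebook - delta])
        prev = np.inf
        for _ in range(iters):
            d2 = ((training[:, None, :] - codebook) ** 2).sum(axis=-1)
            labels = d2.argmin(axis=1)              # nearest-neighbor condition
            distortion = d2[np.arange(len(training)), labels].mean()
            if prev - distortion <= tol * distortion:
                break
            prev = distortion
            for k in range(len(codebook)):          # centroid (Lloyd) condition
                pts = training[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
    return codebook, distortion

training = rng.normal(size=(4000, 2))               # toy 2-D Gaussian source
codebook, mse = lbg(training, splits=4)             # 2^4 = 16 codewords
```

Each split doubles the rate by half a bit per dimension here; the resulting distortion should fall well below the single-centroid distortion (the source variance).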

Fast quantizing and decoding algorithms for lattice quantizers and codes

- Computer Science · IEEE Trans. Inf. Theory
- 1982

A very fast algorithm is given for finding the closest lattice point to an arbitrary point if these lattices are used for vector quantizing of uniformly distributed data.
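A sketch of the flavor of algorithm described, for the integer lattice Z^n and the checkerboard lattice D_n (the ancestors of the D4 and E8 quantizers cited above); `closest_Zn` and `closest_Dn` are illustrative names, not from the paper:

```python
import numpy as np

def closest_Zn(x):
    """Z^n: the nearest lattice point is found by rounding each coordinate."""
    return np.rint(x)

def closest_Dn(x):
    """D_n = integer points with even coordinate sum (D_4, etc.).
    Round each coordinate; if the resulting sum is odd, re-round the
    single coordinate with the largest rounding error the other way."""
    f = np.rint(x)
    if int(f.sum()) % 2 == 0:
        return f
    err = x - f
    k = np.argmax(np.abs(err))
    f[k] += 1.0 if err[k] > 0 else -1.0
    return f

p = closest_Dn(np.array([0.6, 0.1, 0.1, 0.1]))  # nearest D_4 point: [0., 0., 0., 0.]
```

The cost is a single rounding pass plus one repair step, which is what makes lattice quantization of uniformly distributed data so much cheaper than a codebook search.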