DCT quantization matrices visually optimized for individual images

@inproceedings{Watson1993DCTQM,
  title={DCT quantization matrices visually optimized for individual images},
  author={Andrew B. Watson},
  booktitle={Electronic Imaging},
  year={1993}
}
  • A. Watson
  • Published in Electronic Imaging 8 September 1993
  • Computer Science
Several image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). These standards do not specify the actual DCT quantization matrix. Ahumada & Peterson and Peterson, Ahumada & Watson provide mathematical formulae to compute a perceptually lossless quantization matrix. Here I show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual "channel." For a… 
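
As a rough illustration of the idea in the abstract (not Watson's DCTune implementation), the Python sketch below treats the quantization error of each DCT coefficient as the response of a visual "channel": the error is divided by a per-frequency visibility threshold and the scaled errors are pooled over frequencies and blocks into a single perceptual-error number. The threshold matrix T and the pooling exponent BETA are illustrative placeholders, not values from the paper.

import numpy as np

N = 8
BETA = 4.0  # Minkowski pooling exponent (placeholder, not the paper's value)

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix C, so C @ block @ C.T is the 2-D DCT."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix()

def blockwise_dct(img):
    """2-D DCT of each non-overlapping 8x8 block; image sides must be multiples of 8."""
    h, w = img.shape
    blocks = img.reshape(h // N, N, w // N, N).transpose(0, 2, 1, 3)
    return np.einsum('ij,bkjl,ml->bkim', C, blocks, C)

def perceptual_error(img, Q, T, beta=BETA):
    """Pool |quantization error| / visibility threshold over frequencies and blocks."""
    coeffs = blockwise_dct(img.astype(float) - 128.0)
    quantized = np.round(coeffs / Q) * Q
    d = np.abs(quantized - coeffs) / T          # per-coefficient "channel" visibility
    return (d ** beta).sum() ** (1.0 / beta)    # Minkowski pooling

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))   # stand-in image
    T = np.full((N, N), 10.0)                   # placeholder visibility thresholds
    for scale in (0.5, 1.0, 2.0, 4.0):
        Q = np.full((N, N), 16.0) * scale       # coarser Q -> larger pooled error
        print(scale, round(perceptual_error(img, Q, T), 2))

An image-adapted quantization matrix could then be searched for by adjusting Q until this pooled error meets a target, which is the spirit of the optimization the abstract describes.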
DCTune: A TECHNIQUE FOR VISUAL OPTIMIZATION OF DCT QUANTIZATION MATRICES FOR INDIVIDUAL IMAGES.
TLDR
The method treats each DCT coefficient as an approximation to the local response of a visual "channel" and estimates the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate.
Perceptual optimization of DCT color quantization matrices
  • A. Watson
  • Computer Science
    Proceedings of 1st International Conference on Image Processing
  • 1994
TLDR
A method called DCTune is described for the design of color quantization matrices; it is based on a model of the visibility of quantization artifacts as a function of DCT frequency, color channel, and display resolution and brightness.
Perceptual adaptive JPEG coding
  • R. Rosenholtz, A. Watson
  • Computer Science
    Proceedings of 3rd IEEE International Conference on Image Processing
  • 1996
TLDR
This work computes the perceptual error for each block based upon the DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and picks the set of multipliers that yields maximally flat perceptual error over the blocks of the image.
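
A rough sketch of the flattening idea in this TLDR, under the same placeholder assumptions as the earlier sketch (per-frequency thresholds, Minkowski pooling) and with the contrast-sensitivity, light-adaptation, and masking adjustments omitted: give each 8x8 block its own quantizer multiplier and iterate until the per-block perceptual error is roughly uniform. This is an illustration, not the authors' algorithm; coeffs is assumed to hold blockwise DCT coefficients with shape (rows, cols, 8, 8).

import numpy as np

def block_perceptual_errors(coeffs, Q, T, beta=4.0):
    """Pooled |quantization error| / threshold, one value per 8x8 block."""
    quantized = np.round(coeffs / Q) * Q
    d = np.abs(quantized - coeffs) / T
    return (d ** beta).sum(axis=(2, 3)) ** (1.0 / beta)

def flatten_multipliers(coeffs, Q, T, target, iters=20):
    """Crude fixed-point iteration: scale each block's quantizer up or down
    until every block's pooled perceptual error sits near the target value."""
    m = np.ones(coeffs.shape[:2])                        # one multiplier per block
    for _ in range(iters):
        e = block_perceptual_errors(coeffs, Q * m[..., None, None], T)
        m *= (target / np.maximum(e, 1e-6)) ** 0.5       # damped update
    return m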
Optimization of JPEG color image coding using a human visual system model
TLDR
A new model is presented that can be used in the perceptual optimization of standard color image coding algorithms (JPEG/MPEG); it makes it possible to calculate a perceptually weighted mean squared error directly in the DCT color domain, although the model itself is based on a directional frequency band decomposition.
Perceptual Image Coding with Discrete Cosine Transform
TLDR
This book first introduces classic as well as recent computational models for just-noticeable-difference (JND) applications and provides a comparative analysis of several perceptual image coders that are based on DCT, which are compatible with the highly popular and widely adopted JPEG standard.
Joint thresholding and quantizer selection for transform image coding: entropy-constrained analysis and applications to baseline JPEG
TLDR
An image-adaptive JPEG encoding algorithm that jointly optimizes quantizer selection, coefficient "thresholding", and Huffman coding within a rate-distortion (R-D) framework is developed.
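
The coefficient "thresholding" step mentioned in this TLDR can be illustrated with a toy Lagrangian rule: a quantized coefficient is forced to zero whenever the distortion it would add is smaller than lambda times the bits it would have cost to code. The bit-cost model below is a crude stand-in for the paper's entropy-coder-aware costs, and the joint quantizer/Huffman optimization is not shown.

import numpy as np

def rd_threshold_block(coeffs, Q, lam):
    """Quantize an 8x8 block of DCT coefficients, then zero out coefficients
    whose rate saving (times lam) outweighs the distortion of dropping them."""
    idx = np.round(coeffs / Q).astype(int)
    # Crude rate model: roughly the size category of the index, in bits.
    bits = np.where(idx != 0, np.ceil(np.log2(np.abs(idx) + 1)) + 4, 0.0)
    # Extra squared error incurred by coding zero instead of idx * Q.
    d_drop = coeffs ** 2 - (coeffs - idx * Q) ** 2
    drop = (idx != 0) & (d_drop < lam * bits)
    return np.where(drop, 0, idx)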
A perceptually optimized JPEG-LS coder for color images
TLDR
In this paper, the JPEG-LS coder in the near-lossless compression mode is perceptually optimized by making coding errors imperceptible or minimally noticeable, and the performance of the perceptually optimized coder is superior to that of the un-optimized coder for the same visual quality.
Locally adaptive perceptual image coding
TLDR
A perception-based image coder is presented that discriminates between image components based on their perceptual relevance, achieving increased performance in terms of quality and bit rate; it is built on a locally adaptive perceptual quantization scheme for compressing the visual data.
Adaptive image coding with perceptual distortion control
This paper presents a discrete cosine transform (DCT)-based locally adaptive perceptual image coder, which discriminates between image components based on their perceptual relevance for achieving… 
Locally-adaptive perceptual quantization without side information for DCT coefficients
  • I. Hontsch, Lina Karam
  • Computer Science
    Conference Record of the Thirty-First Asilomar Conference on Signals, Systems and Computers (Cat. No.97CB36136)
  • 1997
TLDR
This work demonstrates that, for natural images with the same perceptual quality, the first-order entropy of the quantizer outputs can be reduced by 15 to 40 percent when optimal locally-adaptive perceptual quantization is used.
...

References

Quantization of color image components in the DCT domain
TLDR
Rather than studying perceptually lossless compression, researchers must determine what types of lossy transformations are least disturbing to the human observer.
Luminance-model-based DCT quantization for color image compression
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally… 
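
For orientation, a minimal sketch of the general shape of such a luminance-model threshold matrix: detection threshold rises roughly as a parabola in log spatial frequency away from a most-sensitive frequency, oblique orientations are penalized, and the quantization step can be taken as about twice the threshold. The constants and the viewing-geometry value below are placeholders, not the values published in this reference.

import numpy as np

N = 8

def threshold_matrix(pixels_per_degree=32.0,  # placeholder viewing geometry
                     t_min=0.02,              # placeholder minimum threshold
                     f_min=3.0,               # placeholder peak-sensitivity frequency (cyc/deg)
                     K=1.5,                   # placeholder parabola steepness
                     r=0.7):                  # placeholder obliqueness factor
    """Visibility thresholds for the 8x8 DCT basis functions (illustrative only)."""
    fi = np.arange(N) * pixels_per_degree / (2.0 * N)    # frequencies in cyc/deg
    f = np.sqrt(fi[:, None] ** 2 + fi[None, :] ** 2)
    f[0, 0] = f_min                                      # keep the DC term finite
    sin2 = 4.0 * (fi[:, None] ** 2) * (fi[None, :] ** 2) / f ** 4
    oblique = r + (1.0 - r) * (1.0 - sin2)               # penalize oblique frequencies
    log_ratio = np.log10(np.maximum(f, f_min) / f_min)   # flat below f_min
    return t_min * 10.0 ** (K * log_ratio ** 2) / oblique

# A quantization matrix can then be sketched as roughly twice the thresholds,
# rescaled to the 0-255 range of an 8-bit luminance image.
Q = np.clip(np.round(2.0 * 255.0 * threshold_matrix()), 1, 255)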
The JPEG still picture compression standard
TLDR
The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Improved detection model for DCT coefficient quantization
A detection model is developed to predict visibility thresholds for discrete cosine transform coefficient quantization error, based on the luminance and chrominance of the error. The model is an… 
JPEG: Still Image Data Compression Standard
TLDR
This chapter discusses JPEG Syntax and Data Organization, the history of JPEG, and aspects of the human visual system relevant to JPEG.
Spatial Modulation Transfer in the Human Eye
The contrast sensitivity of the human eye for sinusoidal illuminance changes was measured as a function of spatial frequency, for monochromatic light with wavelengths of 450, 525, and 650 nm. At each… 
Contrast masking in human vision.
TLDR
A masking model is presented that encompasses contrast detection, discrimination, and masking phenomena; it includes a linear spatial frequency filter, a nonlinear transducer, and a process of spatial pooling that acts at low contrasts only.
...