• Corpus ID: 245986698

Parallel Neural Local Lossless Compression

@article{Zhang2022ParallelNL,
  title={Parallel Neural Local Lossless Compression},
  author={Mingtian Zhang and James Townsend and Ning Kang and David Barber},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.05213}
}
The recently proposed Neural Local Lossless Compression (NeLLoC) [27], which is based on a local autoregressive model, has achieved state-of-the-art (SOTA) out-of-distribution (OOD) generalization performance in the image compression task. Besides encouraging OOD generalization, the local model also allows parallel inference in the decoding stage. In this paper, we propose two parallelization schemes for local autoregressive models. We discuss the practicalities of implementing…
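The abstract is truncated by the page, but the core mechanism can be sketched: in a local autoregressive model each pixel depends only on a small causal window, so pixels whose windows are already fully decoded can be emitted simultaneously. Below is a minimal illustrative sketch of one such wavefront schedule in Python; the function name and the specific schedule are assumptions for illustration (assuming a context spanning h pixels to the left/right and h rows above), not the paper's actual two schemes.

def wavefront_schedule(H, W, h):
    """Group pixels of an H x W image into decoding steps for a local
    autoregressive model whose causal context spans h pixels to the
    left/right and h rows above the current pixel.

    Pixel (i, j) is assigned to step i * (h + 1) + j: every pixel in its
    causal context then lands in a strictly earlier step, so all pixels
    sharing a step can be decoded in parallel.
    """
    steps = {}
    for i in range(H):
        for j in range(W):
            steps.setdefault(i * (h + 1) + j, []).append((i, j))
    return [steps[t] for t in sorted(steps)]

# Toy usage: an 8x8 image with context span h=2 decodes in
# (8-1)*(2+1) + 8 = 29 wavefront steps instead of 64 sequential ones.
schedule = wavefront_schedule(8, 8, h=2)
print(len(schedule), "steps; widest step:", max(len(s) for s in schedule))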

Citations

A Learned Pixel-by-Pixel Lossless Image Compression Method with 59K Parameters and Parallel Decoding

A learned compression system is presented that achieves state-of-the-art lossless compression performance while using only 59K parameters, more than 30x fewer than other learned systems recently proposed in the literature.

Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression

Experimental results demonstrate that the DLPR coding system achieves state-of-the-art lossless and near-lossless image compression performance with competitive coding speed.

An Artificial Neural Network Based Pixel-by-Pixel Lossless Image Compression Method

  • Sinem Gümüş, Fatih Kamisli
  • Computer Science, Environmental Science
    2022 30th Signal Processing and Communications Applications Conference (SIU)
  • 2022
A deep learning based architecture that utilizes masked convolutions to model the probability distributions of pixels, together with a method to improve the parallelization of the algorithm; the approach is competitive with both state-of-the-art traditional and deep learning-based methods.
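Masked convolution is the standard device (from the PixelCNN family) for enforcing a raster-scan autoregressive ordering inside a convolution: kernel entries that would see the current or future pixels are zeroed out. A minimal sketch assuming PyTorch; it illustrates the general technique, not this paper's exact architecture.

import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is masked so output (i, j) only sees pixels
    above it, or to its left in the same row. Mask type 'A' also hides
    the centre pixel (used for the first layer); type 'B' keeps it."""

    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kh, kw = self.weight.shape[-2:]
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2 + (mask_type == "B"):] = 0  # centre row: centre/right
        mask[kh // 2 + 1:, :] = 0                          # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # re-apply the mask before every call
        return super().forward(x)

layer = MaskedConv2d("A", in_channels=1, out_channels=16, kernel_size=5, padding=2)
out = layer(torch.randn(1, 1, 8, 8))  # shape (1, 16, 8, 8)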

Improving VAE-based Representation Learning

It is shown that by using a decoder that prefers to learn local features, the remaining global features can be well captured by the latent, which significantly improves performance of a downstream classification task.

Generalization Gap in Amortized Inference

This work proposes a new training objective, inspired by the classic wake-sleep algorithm, to improve the generalization properties of amortized inference and demonstrates how it can improve generalization performance in the context of image modeling and lossless compression.
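For context, the wake-sleep idea the summary invokes trains the inference network on samples drawn from the generative model rather than only on training data. A standard way to write the sleep-phase objective (notation is illustrative; the paper's actual objective may differ):

% Sleep phase: fit the amortized posterior q_\phi on fresh model samples
% (x, z) ~ p_\theta, so q_\phi cannot overfit the finite training set.
\mathcal{L}_{\text{sleep}}(\phi) \;=\; \mathbb{E}_{(x, z) \sim p_\theta(x, z)}\big[ -\log q_\phi(z \mid x) \big]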

References

Showing 1-10 of 30 references

On the Out-of-distribution Generalization of Probabilistic Image Modelling

This work proposes a Local Autoregressive model that exclusively models local image features towards improving OOD performance, and employs the model to build a new lossless image compressor, NeLLoC (Neural Local Lossless Compressor), reporting state-of-the-art compression rates and model size.
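The "local" restriction that drives both the OOD behaviour and the parallel decoding in the present paper amounts to replacing the full autoregressive context with a fixed causal window \mathcal{N}(i) around pixel i (illustrative notation):

% Full autoregressive model: the context grows with the image.
p(x) \;=\; \prod_{i} p\big(x_i \mid x_{<i}\big)
% Local autoregressive model: a fixed-size causal window suffices.
p(x) \;=\; \prod_{i} p\big(x_i \mid x_{\mathcal{N}(i)}\big),
\qquad \mathcal{N}(i) \subset x_{<i}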

Hierarchical VAEs Know What They Don't Know

This work develops a fast, scalable and fully unsupervised likelihood-ratio score for OOD detection that requires data to be in-distribution across all feature-levels, and benchmarks the method on a vast set of data and model combinations and achieves state-of-the-art results.

Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features

Two methods are proposed; the first uses the log-likelihood ratio of two identical models, one trained on the in-distribution data and the other on a more general distribution of images. Both achieve strong anomaly detection performance in the unsupervised setting, reaching performance comparable to state-of-the-art classifier-based methods in the supervised setting.
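The first method's score can be written directly from the summary (illustrative notation): a sample is flagged as anomalous when the in-distribution model assigns it little more likelihood than a model of a more general image distribution:

% p_in:      trained on the in-distribution data
% p_general: the same architecture trained on a more general image distribution
\mathrm{LLR}(x) \;=\; \log p_{\text{in}}(x) \;-\; \log p_{\text{general}}(x)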

PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications

This work discusses the implementation of PixelCNNs, a recently proposed class of powerful generative models with tractable likelihood, and presents a number of modifications to the original model that both simplify its structure and improve its performance.
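The discretized logistic mixture likelihood from the title assigns an integer sub-pixel value x in {0, ..., 255} the mass that a K-component mixture of logistics places on its quantization bin, with \sigma the logistic sigmoid (the edge cases at 0 and 255 are handled separately in the paper):

P(x \mid \pi, \mu, s) \;=\; \sum_{k=1}^{K} \pi_k
\Big[ \sigma\!\big((x + 0.5 - \mu_k)/s_k\big) \;-\; \sigma\!\big((x - 0.5 - \mu_k)/s_k\big) \Big]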

Pixel Recurrent Neural Networks

A deep neural network is presented that sequentially predicts the pixels in an image along the two spatial dimensions and encodes the complete set of dependencies in the image to achieve log-likelihood scores on natural images that are considerably better than the previous state of the art.
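The sequential prediction described here is the standard autoregressive chain-rule factorization over pixels in raster-scan order; for an n x n image:

p(x) \;=\; \prod_{i=1}^{n^2} p\big(x_i \mid x_1, \ldots, x_{i-1}\big)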

Out-of-Distribution Detection with Class Ratio Estimation

This work unifies density-ratio-based methods under a novel framework that builds energy-based models with differing base distributions, and proposes to directly estimate the density ratio of a data sample through class ratio estimation.
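Class ratio estimation is the standard trick of reading a density ratio off a binary classifier; with c labelling which of the two distributions a sample was drawn from (illustrative notation, not necessarily the paper's exact construction):

\frac{p(x)}{q(x)} \;=\; \frac{p(x \mid c = 1)}{p(x \mid c = 0)}
\;=\; \frac{p(c = 1 \mid x)}{p(c = 0 \mid x)} \cdot \frac{p(c = 0)}{p(c = 1)}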

Improving VAE-based Representation Learning

It is shown that by using a decoder that prefers to learn local features, the remaining global features can be well captured by the latent, which significantly improves performance of a downstream classification task.

iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder

iFlow, a new method for efficient lossless compression using normalizing flows, is introduced; it achieves state-of-the-art compression ratios and is 5× quicker than other high-performance schemes.

Variational Diffusion Models

A family of diffusion-based generative models is introduced that obtains state-of-the-art likelihoods on standard image density estimation benchmarks, outperforming the autoregressive models that have dominated these benchmarks for many years, often with faster optimization.

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

This paper shows how to remove the gap in the bitrate equal to the KL divergence between the approximate posterior and the true posterior, by deriving bits-back coding algorithms from tighter variational bounds obtained via extended-space representations of Monte Carlo estimators of the marginal likelihood.
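The gap in question is the standard bits-back shortfall: the expected net message length of bits-back coding is the negative ELBO, which exceeds the ideal code length -\log p(x) by exactly the posterior KL:

\mathbb{E}_{q(z \mid x)}\big[ \log q(z \mid x) - \log p(x, z) \big]
\;=\; -\log p(x) \;+\; \mathrm{KL}\big( q(z \mid x) \,\|\, p(z \mid x) \big)

Tighter multi-sample variational bounds shrink this excess, which is what the extended-space Monte Carlo construction exploits.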