Corpus ID: 249626032

COIN++: Neural Compression Across Modalities

@inproceedings{Dupont2022COINNC,
  title={COIN++: Neural Compression Across Modalities},
  author={Emilien Dupont and Hrushikesh Loya and Milad Alizadeh and Adam Goliński and Yee Whye Teh and Arnaud Doucet},
  year={2022}
}
Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of… 
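To make the idea concrete, here is a minimal sketch of such an implicit neural representation in PyTorch: a small SIREN-style MLP mapping normalized (x, y) pixel coordinates to RGB values. The depth, width, and w0 frequency below are illustrative choices, not the configuration used in COIN++.

```python
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a scaled sine activation (SIREN-style)."""
    def __init__(self, dim_in, dim_out, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class INR(nn.Module):
    """Implicit neural representation: maps (x, y) coordinates to (R, G, B)."""
    def __init__(self, hidden=32, depth=3):
        super().__init__()
        layers = [SirenLayer(2, hidden)]
        layers += [SirenLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, 3)]  # final linear layer outputs RGB
        self.net = nn.Sequential(*layers)

    def forward(self, coords):  # coords: (N, 2) in [-1, 1]
        return self.net(coords)
```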

SINCO: A Novel structural regularizer for image compression using implicit neural representations

It is shown that the combination of the traditional image-consistency loss and the structural regularizer enables SINCO to learn an INR that can better preserve desired image features.

References

Showing 1-10 of 79 references

COIN: COmpression with Implicit Neural representations

A new simple approach for image compression: instead of storing the RGB values for each pixel of an image, the weights of a neural network overfitted to the image are stored, and this approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights.
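A minimal sketch of this recipe, reusing the SIREN-style INR sketched above: overfit the network to a single image with an MSE loss, then store the (quantized) weights in place of the pixels. The step count and learning rate are illustrative, not COIN's exact settings.

```python
import torch
import torch.nn.functional as F

def fit_image(inr, image, steps=10000, lr=2e-4):
    """Overfit an INR to one image; the trained weights become the code.

    image: tensor of shape (H, W, 3) with values in [0, 1].
    """
    H, W, _ = image.shape
    # Normalized coordinate grid in [-1, 1], shape (H*W, 2).
    ys = torch.linspace(-1, 1, H)
    xs = torch.linspace(-1, 1, W)
    coords = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)

    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(inr(coords), targets)
        loss.backward()
        opt.step()
    return inr  # quantize and store these weights instead of the pixels
```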

Implicit Neural Representations for Image Compression

The INR-based compression algorithm, which combines meta-learning with SIREN and positional encodings, outperforms JPEG2000 and rate-distortion autoencoders on Kodak at 2x reduced dimensionality for the first time, and closes the gap on full-resolution images.
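The positional encodings in question are Fourier features: each coordinate is lifted to sines and cosines at geometrically spaced frequencies so the MLP can fit high-frequency detail. A common formulation, sketched below with an illustrative number of frequencies:

```python
import torch

def positional_encoding(coords, num_freqs=10):
    """Fourier-feature encoding of coordinates.

    Maps each coordinate c to [sin(2^0*pi*c), cos(2^0*pi*c), ...,
    sin(2^(L-1)*pi*c), cos(2^(L-1)*pi*c)].
    coords: (N, D) -> returns (N, 2 * D * num_freqs).
    """
    freqs = (2.0 ** torch.arange(num_freqs)) * torch.pi   # (L,)
    angles = coords[..., None] * freqs                    # (N, D, L)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)
```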

Implicit Neural Video Compression

The method, called implicit pixel flow (IPF), offers several simplifications over established neural video codecs: it does not require the receiver to have access to a pretrained neural network, does not use expensive interpolation-based warping operations, and does not require a separate training dataset.

Enhanced Invertible Encoding for Learned Image Compression

This paper proposes an enhanced Invertible Encoding Network with invertible neural networks (INNs) to largely mitigate the information loss problem for better compression, and shows that this method outperforms the existing learned image compression methods and compression standards, including VVC (VTM 12.1), especially for high-resolution images.
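For intuition, the basic building block of such invertible networks is a coupling layer: half the channels pass through unchanged and parameterize an exactly invertible transform of the other half, so the analysis transform itself discards no information. A minimal affine coupling sketch (the channel split and conditioning network are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible affine coupling block (RealNVP-style)."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        # Small network predicting log-scale and shift from the first half.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(log_s) + t     # transform second half
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-log_s)  # exact inverse of forward
        return torch.cat([y1, x2], dim=1)
```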

Variational image compression with a scale hyperprior

It is demonstrated that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).
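Codecs of this family are trained on a rate-distortion objective of the form R + λ·D, with the rate estimated from the likelihoods the entropy model (here, the scale hyperprior) assigns to the quantized latents. A generic sketch; the λ value and MSE distortion are illustrative, and MS-SSIM can be substituted to target that metric:

```python
import torch

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    """R + lambda * D training objective for a learned codec.

    likelihoods: probabilities the entropy model assigns to each
    quantized latent element.
    """
    n, _, h, w = x.shape
    bpp = -torch.log2(likelihoods).sum() / (n * h * w)  # rate, bits per pixel
    mse = torch.mean((x - x_hat) ** 2)                  # distortion
    return bpp + lam * mse
```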

Neural Data-Dependent Transform for Learned Image Compression

This is the first attempt to build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.

Lossy Image Compression with Compressive Autoencoders

It is shown that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000, outperform recently proposed RNN-based approaches, and are computationally efficient thanks to a sub-pixel architecture, making them suitable for high-resolution images.
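The sub-pixel architecture mentioned here upsamples by predicting r² output channels at low resolution and rearranging them into an r-times-larger feature map, which is cheaper than transposed convolutions at full resolution. A one-layer PyTorch sketch with illustrative channel counts:

```python
import torch.nn as nn

# Predict r^2 * 3 channels at low resolution, then let PixelShuffle
# rearrange them spatially: (N, 48, H, W) -> (N, 3, 4H, 4W) for r = 4.
subpixel_upsample = nn.Sequential(
    nn.Conv2d(128, 3 * 4 ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(4),
)
```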

Feedback Recurrent Autoencoder for Video Compression

This work proposes a new network architecture for learned video compression in low-latency mode, based on common and well-studied components, and yields state-of-the-art MS-SSIM/rate performance on the high-resolution UVG dataset.

Conditional Probability Models for Deep Image Compression

This paper proposes a new technique to navigate the rate-distortion trade-off for an image compression autoencoder by using a context model: a 3D-CNN that learns a conditional probability model of the autoencoder's latent distribution.
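The key constraint on such a context model is causality: each latent element may be predicted only from elements the decoder has already reconstructed. One common way to enforce this, sketched below with a PixelCNN-style mask (an illustrative variant, not necessarily the paper's exact masking), is a masked 3D convolution:

```python
import torch
import torch.nn as nn

class MaskedConv3d(nn.Conv3d):
    """3D convolution whose kernel sees only already-decoded positions."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        d, h, w = self.kernel_size
        mask = torch.ones_like(self.weight)
        # Zero out the centre position and everything "after" it in
        # raster-scan order, so predictions are strictly causal.
        mask[:, :, d // 2, h // 2, w // 2:] = 0
        mask[:, :, d // 2, h // 2 + 1:, :] = 0
        mask[:, :, d // 2 + 1:, :, :] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # re-apply mask before every call
        return super().forward(x)
```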

A Survey of Model Compression and Acceleration for Deep Neural Networks

This paper surveys recent advanced techniques for compacting and accelerating CNN models, roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation.
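As a concrete instance of the first scheme, magnitude-based parameter pruning zeroes out a layer's smallest weights. A minimal sketch (one common variant among many; the default sparsity is illustrative):

```python
import torch

def magnitude_prune(weight, sparsity=0.9):
    """Zero the smallest-magnitude entries, keeping the top (1 - sparsity)."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask, mask
```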