Context-Tree-Based Lossy Compression and Its Application to CSI Representation

@article{Miyamoto2022ContextTreeBasedLC,
  title={Context-Tree-Based Lossy Compression and Its Application to CSI Representation},
  author={Henrique K. Miyamoto and Sheng Yang},
  journal={arXiv preprint arXiv:2110.14748},
  year={2022}
}
We propose novel compression algorithms for time-varying channel state information (CSI) in wireless communications. The proposed scheme combines (lossy) vector quantisation and (lossless) compression. First, the new vector quantisation technique is based on a class of parametrised companders applied on each component of the normalised CSI vector. Our algorithm chooses a suitable compander in an intuitively simple way whenever empirical data are available. Then, the sequences of quantisation…
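To illustrate the compander-based quantisation idea in the abstract, the sketch below applies a classical μ-law compander componentwise and then quantises uniformly in the companded domain. This is only a minimal illustration under assumed choices (μ-law family, mid-rise uniform quantiser, inputs normalised to [-1, 1]); the paper's actual parametrised compander class and selection rule may differ.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """mu-law compander: maps x in [-1, 1] to [-1, 1] nonlinearly."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

def compander_quantise(x, mu=255.0, bits=4):
    """Compand each component, quantise uniformly, then expand back.

    Uniform quantisation in the companded domain yields a non-uniform
    quantiser in the original domain, with finer cells near zero.
    """
    levels = 2 ** bits
    y = mu_law_compress(x, mu)
    # mid-rise uniform quantiser on [-1, 1]
    idx = np.clip(np.floor((y + 1.0) / 2.0 * levels), 0, levels - 1)
    y_hat = (idx + 0.5) / levels * 2.0 - 1.0
    return mu_law_expand(y_hat, mu)
```

In a CSI-feedback pipeline, the integer indices `idx` (rather than the reconstructed values) would be what a subsequent lossless stage compresses.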
