To code, or not to code: lossy source-channel communication revisited

@article{Gastpar2003ToCO,
  title={To code, or not to code: lossy source-channel communication revisited},
  author={M. Gastpar and B. Rimoldi and M. Vetterli},
  journal={IEEE Trans. Inf. Theory},
  year={2003},
  volume={49},
  pages={1147--1158}
}
What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based …
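As a concrete illustration of such a probabilistically matched pair (the standard textbook example, not code from the paper; the function names and parameter values below are ours), the following sketch checks numerically that uncoded transmission of a Gaussian source over an AWGN channel meets the distortion-rate bound evaluated at channel capacity:

```python
import math

def shannon_limits(sigma2, P, N):
    """Capacity of the AWGN channel and the distortion-rate bound at R = C."""
    snr = P / N
    C = 0.5 * math.log2(1 + snr)      # channel capacity, bits per channel use
    D_opt = sigma2 * 2 ** (-2 * C)    # best distortion any code can achieve
    return C, D_opt

def uncoded_distortion(sigma2, P, N):
    """MMSE of the uncoded scheme: scale the source to power P, send it,
    and form the linear MMSE estimate at the receiver."""
    snr = P / N
    return sigma2 / (1 + snr)

sigma2, P, N = 1.0, 4.0, 1.0          # source variance, input power, noise power
C, D_opt = shannon_limits(sigma2, P, N)
print(C, D_opt, uncoded_distortion(sigma2, P, N))
```

For these values both distortions equal sigma2/(1+SNR) = 0.2, i.e. a code of block length one already achieves the optimal cost-distortion tradeoff for this matched pair.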
To code or not to code: Revisited
The dilemma of whether one should or should not code when operating under delay constraints is revisited; symbol-by-symbol transmission, though asymptotically suboptimal, might outperform not only separate source-channel coding but also the best known random-coding joint source-channel coding achievability bound in the finite-blocklength regime.
Strategies for Delay-Limited Source-Channel Coding
In point-to-point source-channel communication with a fidelity criterion and a transmission cost constraint, the region of achievable cost and fidelity pairs is completely characterized by Shannon's …
Analog Matching of Colored Sources to Colored Channels
It is shown that, by combining prediction and modulo-lattice arithmetic, one can match any stationary Gaussian source to any intersymbol-interference, colored-noise Gaussian channel, hence achieving Shannon's optimum performance R(D) = C.
Lossless Transmission of Correlated Sources over a Multiple Access Channel with Side Information
A source-channel separation theorem is proved: there is no loss in performance in first applying distributed source coding, where each encoder compresses its source conditioned on the side information at the receiver, and then applying an optimal multiple-access channel code with independent codebooks.
Information spectrum approach to the source channel separation theorem
  • Nir Elkayam, M. Feder
  • Mathematics, Computer Science
  • 2014 IEEE International Symposium on Information Theory
  • 2014
This work proves a stronger claim in which the source is general, satisfying only a "sphere-packing optimality" property, and the channel is completely general; if the channel satisfies the strong converse property, the same statement holds with d_avg, the average distortion level, replacing d_max.
Slepian-Wolf coding over broadcast channels
  • E. Tuncel
  • Computer Science
  • IEEE Transactions on Information Theory
  • 2006
It is shown by example that an optimal joint source-channel coding strategy is strictly advantageous over the combination of stand-alone source and channel codes, and thus "informational separation" does not hold.
Multiterminal Source–Channel Communication Over an Orthogonal Multiple-Access Channel
  • J. Xiao, Z. Luo
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2007
This work characterizes the optimal tradeoff between the transmission cost Γ and the distortion vector D measured against the individual sources, and determines the optimal power-distortion tradeoff in a quadratic Gaussian sensor network under orthogonal multiple access.
Source-Channel Coding and Separation for Generalized Communication Systems
We consider transmission of stationary and ergodic sources over non-ergodic composite channels with channel state information at the receiver (CSIR). Previously we introduced alternate capacity …
On Zero-Delay Source-Channel Coding
It is shown that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the optimal mappings are linear at more than one CSNR value.
Quantifying Performance Losses in Source-Channel Coding
In this paper, we identify and quantify loss factors causing suboptimal performance in joint source-channel coding. We show that both the loss due to non-Gaussian distributed channel symbols and the …

References

Showing 1–10 of 20 references
On joint source-channel coding for the Wyner-Ziv source and the Gel'fand-Pinsker channel
A separation theorem is proved: there is no loss in asymptotic optimality in applying an optimal Wyner-Ziv source code followed by an optimal Gel'fand-Pinsker channel code in a lossy joint source-channel coding system.
Theoretical limitations on the transmission of data from analog sources
  • T. J. Goblick
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1965
The minimum mean-squared error is calculated for cases in which the analog source and additive channel noise are stationary Gaussian processes, and the performance of amplitude and angle modulation systems is compared to the theoretically ideal performance obtainable in some of these cases.
On source/channel codes of finite block length
For certain fortunate choices of source/channel pairs, all sophisticated coding is in vain: for them, a code of block length one is sufficient to achieve optimal performance. Is the set of "fortunate …
The source-channel separation theorem revisited
The single-user separation theorem of joint source-channel coding has been proved previously for wide classes of sources and channels. We find an information-stable source/channel pair which does not …
On the capacity of large Gaussian relay networks
This paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance in a network joint source-channel coding problem.
Duality between source coding and channel coding and its extension to the side information case
This work begins with a mathematical characterization of the functional duality between classical source and channel coding, formulating the precise conditions under which the optimal encoder for one problem is functionally identical to the optimal decoder for the other problem.
On the capacity of wireless networks: the relay case
  • M. Gastpar, M. Vetterli
  • Computer Science
  • Proceedings.Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies
  • 2002
It is shown that lower and upper bounds meet asymptotically as the number of nodes in the network goes to infinity, thus proving that the capacity of the wireless network with n nodes under the relay traffic pattern behaves like log n bits per second.
Successive refinement of information: characterization of the achievable rates
  • B. Rimoldi
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1994
A model to describe a source with distortion no larger than Δ1 and a more accurate description at rate R1 …
Successive refinement of information
It is shown that the necessary and sufficient condition for optimal successive refinement is that the solutions of the rate-distortion problem can be written as a Markov chain, and that all finite-alphabet sources under Hamming distortion satisfy this requirement.
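The Markov-chain condition mentioned above (the Equitz–Cover characterization, restated here for convenience; the notation is ours, not the listing's) can be written as:

```latex
% A source is successively refinable from D_1 to D_2 (with D_2 \le D_1) iff the
% rate-distortion-optimal test channels can be chosen so that
X \;\to\; \hat{X}_2 \;\to\; \hat{X}_1 \quad \text{forms a Markov chain,}
% while each description remains individually optimal, i.e. for i = 1, 2:
\qquad I(X;\hat{X}_i) = R(D_i), \qquad \mathbb{E}\,d(X,\hat{X}_i) \le D_i .
```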
Beyond the separation principle: A broader approach to source-channel coding
Note: Invited paper. Reference LCM-CONF-2002-002.