Source Coding for Quasiarithmetic Penalties

@article{Baer2006SourceCF,
  title={Source Coding for Quasiarithmetic Penalties},
  author={Michael B. Baer},
  journal={IEEE Transactions on Information Theory},
  year={2006},
  volume={52},
  pages={4380-4393}
}
  • M. Baer
  • Published 18 August 2005
  • Computer Science
  • IEEE Transactions on Information Theory
Whereas Huffman coding finds a prefix code minimizing mean codeword length for a given finite-item probability distribution, quasiarithmetic or quasilinear coding problems have the goal of minimizing a generalized mean of the form $\rho^{-1}\left(\sum_i p_i \rho(l_i)\right)$, where $l_i$ denotes the length of the $i$th codeword, $p_i$ denotes the corresponding probability, and $\rho$ is a monotonically increasing cost function. Such problems, proposed by Campbell, have a number of diverse applications. Several cost functions…
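The penalty in the abstract is straightforward to evaluate directly. The following Python sketch (function and variable names are ours, not the paper's) computes $\rho^{-1}\left(\sum_i p_i \rho(l_i)\right)$ for the exponential cost $\rho(l) = 2^{tl}$, one of the cost families Campbell considered; as $t \to 0$ this penalty reduces to the ordinary expected codeword length that Huffman coding minimizes.

```python
import math

def quasiarithmetic_penalty(p, lengths, rho, rho_inv):
    """Generalized mean rho^{-1}(sum_i p_i * rho(l_i)) of codeword lengths."""
    return rho_inv(sum(pi * rho(li) for pi, li in zip(p, lengths)))

t = 0.5                               # cost parameter; t -> 0 recovers mean length
rho = lambda l: 2.0 ** (t * l)        # exponential cost rho(l) = 2^{t l}
rho_inv = lambda y: math.log2(y) / t  # its inverse

p = [0.5, 0.25, 0.125, 0.125]         # source probabilities
lengths = [1, 2, 3, 3]                # lengths of a Huffman code for p

print(quasiarithmetic_penalty(p, lengths, rho, rho_inv))  # ~1.87 > mean length 1.75
```

With $t > 0$ the exponential penalty exceeds the ordinary mean (1.87 versus 1.75 here), since long codewords are weighted more heavily.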

Citations

Reserved-length prefix coding
  • M. Baer
  • Computer Science
    2008 IEEE International Symposium on Information Theory
  • 2008
TLDR
This paper introduces a polynomial-time dynamic programming algorithm that finds optimal codes for this reserved-length prefix coding problem and has applications to quickly encoding and decoding lossless codes.
Infinite-Alphabet Prefix Codes Optimal for β-Exponential Penalties
  • M. Baer
  • Computer Science
    2007 IEEE International Symposium on Information Theory
  • 2007
TLDR
Methods for finding codes optimal for β-exponential means are introduced, one of which applies to geometric distributions, while another applies to distributions with lighter tails, and both are extended to minimizing maximum pointwise redundancy.
D-ary Bounded-Length Huffman Coding
  • M. Baer
  • Computer Science
    2007 IEEE International Symposium on Information Theory
  • 2007
TLDR
The Package-Merge approach is generalized without increasing complexity in order to introduce a minimum codeword length, $l_{\min}$, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes; nonbinary codes were previously addressed using a slower dynamic programming approach.
Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings
TLDR
This paper studies the problem of designing prefix-free encoding schemes having minimum average code length that can be decoded efficiently under a decode cost model that captures memory hierarchy induced cost functions and presents an algorithm to solve this problem in time O(nD).
Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding
TLDR
These algorithms are generalized without increasing complexity in order to introduce a minimum codeword length constraint, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes; nonbinary codes were previously addressed using a slower dynamic programming approach.
Alphabetic coding with exponential costs
  • M. Baer
  • Computer Science
    Inf. Process. Lett.
  • 2010
Lossless quantum data compression with exponential penalization: an operational interpretation of the quantum Rényi entropy
TLDR
It is shown that by invoking an exponential average length, related to an exponential penalization of long codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes to the source description, playing a role analogous to that of the von Neumann entropy.
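For context, Campbell's classical source coding theorem makes this relationship concrete: the optimal exponential average length $\frac{1}{t}\log_2 \sum_i p_i 2^{t l_i}$ is bounded below by the Rényi entropy of order $\alpha = 1/(1+t)$. The sketch below (names ours; a numerical illustration of the classical bound, not the paper's quantum construction) checks this for a small example.

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) in bits, for alpha != 1."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def exponential_average_length(p, lengths, t):
    """Campbell's exponential average: (1/t) * log2(sum_i p_i * 2^{t l_i})."""
    return math.log2(sum(pi * 2.0 ** (t * li) for pi, li in zip(p, lengths))) / t

p = [0.6, 0.2, 0.1, 0.1]
lengths = [1, 2, 3, 3]        # lengths of a valid binary prefix code
t = 1.0
alpha = 1.0 / (1.0 + t)       # Renyi order associated with penalty parameter t

# Campbell's bound: any prefix code's exponential average length is at
# least the Renyi entropy of order alpha = 1/(1+t).
print(exponential_average_length(p, lengths, t), ">=", renyi_entropy(p, alpha))
```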
On how generalised entropies without parameters impact information optimisation processes
TLDR
It is found that processes such as data compression and channel capacity maximisation can be improved in regions where there is a low density of states, whereas for high densities the results coincide with Shannon's formulation.
Timely Lossless Source Coding for Randomly Arriving Symbols
We consider a real-time streaming source coding system in which an encoder observes a sequence of randomly arriving symbols from an i.i.d. source, and feeds binary codewords to a FIFO buffer that …
...

References

SHOWING 1-10 OF 81 REFERENCES
The Rényi redundancy of generalized Huffman codes
TLDR
By decreasing some of the codeword lengths in a Shannon code, the upper bound on redundancy given in the standard proof of the noiseless source coding theorem is improved; the lower bound is improved by randomizing between codewords, allowing linear programming techniques to be used on an integer programming problem.
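As a point of reference, the Shannon code assigns lengths $l_i = \lceil -\log_2 p_i \rceil$, which satisfy the Kraft inequality and give the standard redundancy bound of less than one bit that this paper improves on. A minimal sketch (names ours):

```python
import math

def shannon_code_lengths(p):
    """Shannon code lengths l_i = ceil(-log2 p_i)."""
    return [math.ceil(-math.log2(pi)) for pi in p]

p = [0.4, 0.3, 0.2, 0.1]
lengths = shannon_code_lengths(p)

entropy = -sum(pi * math.log2(pi) for pi in p)
mean_length = sum(pi * li for pi, li in zip(p, lengths))

assert sum(2.0 ** -li for li in lengths) <= 1.0   # Kraft: a prefix code exists
print(lengths, mean_length - entropy)             # redundancy is below 1 bit
```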
Maximal codeword lengths in Huffman codes
Practical Length-limited Coding for Large Alphabets
TLDR
This work re-examines the package-merge algorithm for generating minimum-cost length-limited prefix-free codes and shows that, with a careful reorganization of the key steps, it can run quickly in significantly less memory than previous implementations required, while retaining asymptotic efficiency.
Existence of optimal prefix codes for infinite source alphabets
It is proven that for every random variable with a countably infinite set of outcomes and finite entropy there exists an optimal prefix code which can be constructed from Huffman codes for truncated versions of the random variable …
Huffman codes and self-information
In this paper the connection between the self-information of a source letter from a finite alphabet and its codeword length in a Huffman code is investigated. Consider the set of all independent …
On the redundancy of optimal codes with limited word length
TLDR
It is shown that the redundancy of these constrained codes is very close to that of the unconstrained Huffman codes when the number of codewords $N$ is such that $ND^{1-L}$ becomes negligible.
Rényi to Rényi - Source Coding under Siege
  • M. Baer
  • Computer Science
    2006 IEEE International Symposium on Information Theory
  • 2006
A novel lossless source coding paradigm applies to problems of unreliable lossless channels with low bit rates, in which a vital message needs to be transmitted prior to termination of …
A Fast Algorithm for Optimum Height-Limited Alphabetic Binary Trees
TLDR
This algorithm is an alphabetic version of the Package-Merge algorithm and yields an $O(nL \log n)$-time algorithm for the alphabetic Huffman coding problem, although it appears hard to prove correct.
Minimal Huffman trees
TLDR
It is proved that several natural variants of Huffman's algorithm, which appear to be nondeterministic, in fact all lead to the single Huffman tree obtained by Schwartz's algorithm.
Two inequalities implied by unique decipherability
TLDR
A consequence of inequality (1) and work of Shannon is that this more restricted kind of list suffices in the search for codes with specified amounts of redundancy.
...