Benjamin G. Kelly

We study error exponents for source coding with side information. Both achievable exponents and converse bounds are obtained for the following two cases: lossless source coding with coded side information, and lossy source coding with full side information (Wyner-Ziv). These results recover and extend several existing results on source-coding error exponents and …
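To fix terminology (a standard framing, not a statement of the paper's specific bounds): at blocklength n and rate R, an achievable exponent E(R) and a converse exponent E_c(R) pin the probability of decoding error between

\[ 2^{-n\left(E_c(R)+o(1)\right)} \;\le\; P_e(n) \;\le\; 2^{-n\left(E(R)-o(1)\right)}. \]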
We describe a scheme for rate-distortion with distributed encoding in which the sources to be compressed contain a common component. We show that this scheme is optimal in some situations and that it strictly improves upon existing schemes, which do not make full use of common components. This establishes that independent quantization followed by …
We provide a novel upper bound on Witsenhausen's rate, the rate required in the zero-error analogue of the Slepian-Wolf problem. Our bound is given in terms of a new information-theoretic functional defined on a certain graph and is derived by upper-bounding complementary graph entropy. We use the functional, along with graph entropy, to give a single …
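As background (a standard characterization, not the new bound itself), Witsenhausen's rate for a source with confusability graph G is given by chromatic numbers of AND-powers of G,

\[ R(G) \;=\; \lim_{n\to\infty} \frac{1}{n}\log \chi\!\left(G^{\wedge n}\right), \]

so any single-letter quantity that upper-bounds this limit, for instance via complementary graph entropy, yields an upper bound on the rate needed for zero-error coding.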
Given training sequences generated by two distinct, but unknown, distributions sharing a common alphabet, we seek a classifier that can correctly decide whether a third test sequence is generated by the first or the second distribution using only the training data. To model ‘limited learning’, we allow the alphabet size to grow and therefore …
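A minimal baseline for this setup is sketched below; it is illustrative only and is not the classifier analyzed in the paper. It compares add-one-smoothed likelihoods of the test sequence under each training sequence; with a growing alphabet, many symbols are seen rarely or never in training, which is exactly the regime the ‘limited learning’ model targets.

    import math
    from collections import Counter

    def classify(test, train1, train2):
        """Return 1 or 2: which training sequence better explains the test sequence.

        Uses add-one (Laplace) smoothed log-likelihoods so symbols absent from a
        training sequence still receive positive probability.
        """
        alphabet = set(test) | set(train1) | set(train2)

        def log_likelihood(seq, train):
            counts = Counter(train)
            n, k = len(train), len(alphabet)
            return sum(math.log((counts[s] + 1) / (n + k)) for s in seq)

        return 1 if log_likelihood(test, train1) >= log_likelihood(test, train2) else 2

    # Toy usage: the test sequence is mostly 'a', like the first training sequence.
    print(classify("aabaa", "aababaaaba", "bbbabbbbab"))  # -> 1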
We consider the quadratic Gaussian Wyner-Ziv problem, i.e., the problem of lossy compression of a Gaussian source with Gaussian side information at the decoder and a quadratic distortion measure. Motivated by applications in video coding, we study how to minimize the probability that the distortion exceeds a given threshold, as opposed to the conventional …
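For context (the classical expected-distortion benchmark, not the threshold criterion studied here), the quadratic Gaussian Wyner-Ziv rate-distortion function is

\[ R_{\mathrm{WZ}}(D) \;=\; \tfrac{1}{2}\log^{+}\!\frac{\sigma^{2}_{X|Y}}{D}, \]

where $\sigma^{2}_{X|Y}$ is the conditional variance of the source given the side information and $\log^{+} t = \max(\log t, 0)$; here the figure of merit is instead the probability that the realized distortion exceeds the threshold.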
Given training sequences generated by two distinct, but unknown, distributions on a common alphabet, we study the problem of determining whether a third sequence was generated according to the first or second distribution. To model sources such as natural language, for which the underlying distributions are difficult to learn from realistic amounts of data, …
We study universal compression of independent and identically distributed sources over large alphabets using fixed-rate codes. To model large alphabets, we use sequences of discrete alphabets that increase in size with the blocklength. We show that universal compression is possible using deterministic codes provided that the alphabet growth is sub-linear in …
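A rough method-of-types calculation (a sketch only, not the paper's argument) indicates why slow alphabet growth matters: over an alphabet of size $k_n$ there are at most $(n+1)^{k_n}$ empirical types of length-$n$ sequences, so a two-stage code that first describes the type and then an index within the type class pays a rate overhead of at most

\[ \frac{k_n \log (n+1)}{n}, \]

which vanishes when $k_n \log n = o(n)$; a sub-linear growth condition is less restrictive than this crude bound and requires a finer analysis.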
We provide new achievable error exponents for the problem of source coding with full side information at the decoder. In some instances our exponent strictly improves upon the previous applicable results of Csiszár; Oohama and Han; and the “expurgated” exponent of Csiszár and Körner. Our improvement follows from studying …
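For comparison only (the classical benchmark, not the new exponent), the standard random-binning achievable exponent for source coding with full decoder side information can be written as

\[ E_{r}(R) \;=\; \min_{Q_{XY}} \Big[ D\!\left(Q_{XY}\,\|\,P_{XY}\right) + \big|R - H_{Q}(X\mid Y)\big|^{+} \Big], \]

with the minimum over joint distributions $Q_{XY}$ and $|t|^{+} = \max(t,0)$; the exponents derived here strictly improve on such bounds in some instances.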