Minimax Learning for Remote Prediction

@article{Li2018MinimaxLF,
  title={Minimax Learning for Remote Prediction},
  author={Cheuk Ting Li and Xiugang Wu and Ayfer {\"O}zg{\"u}r and Abbas El Gamal},
  journal={2018 IEEE International Symposium on Information Theory (ISIT)},
  year={2018},
  pages={541-545}
}
  • Cheuk Ting Li, Xiugang Wu, Ayfer Özgür, Abbas El Gamal
  • Published 31 May 2018
  • Computer Science, Mathematics
  • 2018 IEEE International Symposium on Information Theory (ISIT)
The classical problem of supervised learning is to infer an accurate predictor of a target variable $Y$ from a measured variable $X$ by using a finite number of labeled training samples. Motivated by the increasingly distributed nature of data and decision making, in this paper we consider a variation of this classical problem in which the prediction is performed remotely based on a rate-constrained description $M$ of $X$. Upon receiving $M$, the remote node computes an estimate $\hat{Y}$ of $Y$. We follow…
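A plausible formalization of this setup (the loss $\ell$, rate $R$, and uncertainty set $\Gamma$ below are notational assumptions, since the abstract is truncated): an encoder $f$ produces the description $M = f(X)$ using at most $R$ bits, a decoder $g$ forms $\hat{Y} = g(M)$, and the pair is chosen to minimize the worst-case expected loss over the set $\Gamma$ of joint distributions consistent with the training samples,

  $\min_{f,g} \; \max_{P_{XY} \in \Gamma} \; \mathbb{E}_{P_{XY}}[\ell(Y, g(f(X)))]$, where $f : \mathcal{X} \to \{1, \ldots, 2^R\}$.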
Minimax Learning for Distributed Inference
TLDR: The recent minimax learning approach is followed to study this inference problem, and it is shown to correspond to a one-shot minimax noisy lossy source coding problem, leading to a general method for designing a near-optimal descriptor-estimator pair.
Vector Gaussian CEO Problem Under Logarithmic Loss and Applications
TLDR: This paper finds an explicit characterization of the rate-distortion region of the vector Gaussian CEO problem under the logarithmic loss distortion measure, and develops Blahut-Arimoto-type algorithms that allow the regions provided in the paper to be computed numerically, for both discrete and Gaussian models.
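For context, the logarithmic loss referred to here is the standard soft-estimation distortion: the decoder outputs a probability distribution $\hat{Q}$ on the alphabet of $Y$ and incurs

  $d(y, \hat{Q}) = \log \frac{1}{\hat{Q}(y)}$,

so the optimal reconstruction is the posterior of $Y$ given the description, and the minimum expected distortion is a conditional entropy.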
A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma
TLDR: This paper studies fixed-length settings and is not limited to source coding, showing that the Poisson functional representation is a viable alternative to typicality for most problems in network information theory.
An information-theoretic approach to distributed learning: distributed source coding under logarithmic loss
A fundamental question, often discussed in learning theory, is how to choose a "good" loss function that measures the fidelity of the reconstruction to the original…
Efficient Approximate Minimum Entropy Coupling of Multiple Probability Distributions
  • Cheuk Ting Li
  • Computer Science, Mathematics
  • IEEE Transactions on Information Theory
  • 2021
TLDR: An efficient algorithm is presented for computing a coupling with entropy within 2 bits of the entropy of the greatest lower bound of $p_1, \ldots, p_m$ with respect to majorization.
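To make the notion of a coupling concrete, here is a minimal Python sketch of the classical greedy heuristic for two marginals, which repeatedly pairs the largest remaining probability masses. This is only an illustration of approximate minimum-entropy coupling; it is not the algorithm, nor does it carry the guarantee, of the paper above.

import heapq
from math import log2

def greedy_coupling(p, q):
    # Greedy heuristic: repeatedly pair the largest remaining masses of the
    # two marginals. Returns a dict {(i, j): mass} whose row/column sums
    # equal p and q, i.e., a coupling of p and q.
    hp = [(-pi, i) for i, pi in enumerate(p) if pi > 0]
    hq = [(-qj, j) for j, qj in enumerate(q) if qj > 0]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        neg_pi, i = heapq.heappop(hp)
        neg_qj, j = heapq.heappop(hq)
        m = min(-neg_pi, -neg_qj)          # assign as much mass as possible
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        if -neg_pi - m > 1e-12:            # return leftover mass to the heap
            heapq.heappush(hp, (neg_pi + m, i))
        if -neg_qj - m > 1e-12:
            heapq.heappush(hq, (neg_qj + m, j))
    return joint

def coupling_entropy(joint):
    return -sum(m * log2(m) for m in joint.values() if m > 0)

joint = greedy_coupling([0.5, 0.5], [0.4, 0.6])
print(joint)                    # {(0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.1}
print(coupling_entropy(joint))  # ~1.36 bits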

References

Showing 1-10 of 18 references
A Minimax Approach to Supervised Learning
TLDR: The maximum entropy machine minimizes the worst-case 0-1 loss over the structured set of distributions and, in the numerical experiments, can outperform other well-known linear classifiers such as the SVM.
The information bottleneck method
TLDR: The variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
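As a reminder, the information bottleneck formulation referenced here seeks a representation $T$ of $X$ (with $T \leftrightarrow X \leftrightarrow Y$ a Markov chain) that trades compression against relevance via the variational problem

  $\min_{p(t|x)} \; I(X;T) - \beta \, I(T;Y)$,

where $\beta > 0$ sets the trade-off between compressing $X$ and preserving information about $Y$.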
Minimax Statistical Learning with Wasserstein distances
TLDR: A minimax framework for statistical learning with ambiguity sets given by balls in Wasserstein space is described, and generalization bounds involving the covering number properties of the original ERM problem are proved.
Minimax Statistical Learning and Domain Adaptation with Wasserstein Distances
TLDR: A minimax framework for statistical learning with ambiguity sets given by balls in Wasserstein space is described, and a generalization bound involving the covering number properties of the original ERM problem is proved.
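Both entries above concern the same distributionally robust formulation: writing $P_n$ for the empirical distribution, $W_p$ for the $p$-Wasserstein distance, and $\rho$ for the ambiguity radius (notation assumed here for illustration), the learner solves

  $\min_{f \in \mathcal{F}} \; \sup_{Q : W_p(Q, P_n) \le \rho} \; \mathbb{E}_Q[\ell(f, Z)]$,

i.e., empirical risk minimization hedged against all distributions within transport cost $\rho$ of the data.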
Variance-based Regularization with Convex Objectives
TLDR: An approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error; it is shown that this procedure comes with certificates of optimality.
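Roughly, and under conditions spelled out in that paper, the convex surrogate arises because a distributionally robust objective with a $\chi^2$-divergence ball expands as an empirical mean plus a variance penalty,

  $\sup_{Q : D_{\chi^2}(Q \| P_n) \le \rho/n} \mathbb{E}_Q[\ell] \approx \mathbb{E}_{P_n}[\ell] + \sqrt{2\rho \, \mathrm{Var}_{P_n}(\ell) / n}$,

up to higher-order terms; the left-hand side preserves convexity in the model parameters when $\ell$ does, even though the variance penalty alone does not.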
Strong functional representation lemma and applications to coding theorems
  • Cheuk Ting Li, A. Gamal
  • Mathematics, Computer Science
  • 2017 IEEE International Symposium on Information Theory (ISIT)
  • 2017
TLDR: It is shown that for any random variables $X$ and $Y$, it is possible to represent $Y$ as a function of $(X, Z)$ such that $Z$ is independent of $X$ and $I(Y;Z) \le I(X;Y) + \log(I(X;Y)+1) + 4$, which establishes a tighter bound on the rate needed for one-shot exact channel simulation.
The minimax distortion redundancy in noisy source coding
TLDR: A coding theorem is proved, together with a necessary and sufficient condition for the existence of a coding scheme that is universally optimal for all members of $\Lambda$, which characterizes the approximation-estimation tradeoff for statistical modeling of noisy source coding problems.
A general minimax result for relative entropy
  • D. Haussler
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1997
TLDR: It is shown that the minimax and maximin values of this game are always equal, and there is always a minimax strategy in the closure of the set of all Bayes strategies.
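Haussler's result generalizes the classical redundancy-capacity picture: for a family $\{P_\theta\}_{\theta \in \Theta}$ (sketched here in standard notation, which may differ from the paper's),

  $\inf_Q \sup_{\theta \in \Theta} D(P_\theta \| Q) = \sup_\pi I_\pi(\theta; X)$,

where the supremum on the right runs over priors $\pi$ on $\Theta$, and the minimax-optimal $Q$ is the corresponding Bayes mixture $\int P_\theta \, d\pi(\theta)$.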
On general minimax theorems
There have been several generalizations of this theorem. J. Ville [9], A. Wald [11], and others [1] variously extended von Neumann's result to cases where M and N were allowed to be subsets of…
The Information Bottleneck Revisited or How to Choose a Good Distortion Measure
TLDR: It is shown that the information bottleneck method has some properties that are not shared by rate distortion theory based on any other divergence measure, which makes it unique.