Pursuit of a Discriminative Representation for Multiple Subspaces via Sequential Games

@article{Pai2022PursuitOA,
  title={Pursuit of a Discriminative Representation for Multiple Subspaces via Sequential Games},
  author={Druv Pai and Michael Psenka and Chih-Yuan Chiu and Manxi Wu and Edgar Dobriban and Yi Ma},
  journal={arXiv preprint arXiv:2206.09120},
  year={2022}
}
We consider the problem of learning discriminative representations for data in a high-dimensional space with distribution supported on or around multiple low-dimensional linear subspaces. That is, we wish to compute a linear injective map of the data such that the features lie on multiple orthogonal subspaces. Instead of treating this learning problem using multiple PCAs, we cast it as a sequential game using the closed-loop transcription (CTRL) framework recently proposed for learning… 
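
A minimal NumPy sketch of the stated goal, with illustrative names only (not the paper's code): the per-class PCA baseline the abstract contrasts with, plus a check of how far the resulting class subspaces are from the mutual orthogonality the learned map is supposed to achieve.

```python
# Minimal NumPy sketch: features from different classes should lie on
# mutually orthogonal low-dimensional subspaces. Illustrative names only.
import numpy as np

def class_subspace_bases(features, labels, subspace_dim):
    """Per-class PCA bases: the 'multiple PCAs' baseline the abstract
    contrasts with the game-theoretic formulation."""
    bases = {}
    for c in np.unique(labels):
        Zc = features[labels == c]                        # (n_c, d)
        Zc = Zc - Zc.mean(axis=0, keepdims=True)
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        bases[c] = Vt[:subspace_dim].T                    # (d, subspace_dim)
    return bases

def max_subspace_coherence(bases):
    """Largest cosine of a principal angle between any two class
    subspaces; 0 means the subspaces are exactly orthogonal."""
    keys = list(bases)
    worst = 0.0
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            s = np.linalg.svd(bases[keys[i]].T @ bases[keys[j]],
                              compute_uv=False)
            worst = max(worst, float(s[0]))
    return worst
```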

On the principles of Parsimony and Self-consistency for the emergence of intelligence

A theoretical framework is proposed that situates deep networks within a broader picture of intelligence and introduces two fundamental principles, Parsimony and Self-consistency, which address two basic questions about intelligence: what to learn and how to learn, respectively.

References

Showing 1-10 of 61 references

Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction

Empirically, the representations learned using this principle alone are significantly more robust to label corruptions in classification than those using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features.
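
Since the principle has a concrete closed form, here is a hedged NumPy sketch of the maximal coding rate reduction objective as stated in that paper: the coding rate of all features minus the average coding rate of the per-class subsets. Function names are illustrative.

```python
# Hedged NumPy sketch of the MCR^2 objective: maximize
# Delta R = R(Z) - R_c(Z). Z is d x n (one feature vector per column);
# eps is the allowed distortion.
import numpy as np

def coding_rate(Z, eps):
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(
        np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    d, n = Z.shape
    Rc = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        nc = Zc.shape[1]
        # Class term weighted by the fraction of samples in the class.
        Rc += (nc / (2.0 * n)) * np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc @ Zc.T)[1]
    return coding_rate(Z, eps) - Rc
```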

What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?

A proper mathematical definition of local optimality for this sequential setting, called local minimax, is proposed, and its properties and existence results are presented.
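
For context, the definition proposed there is, stated from memory and modulo notation: a point (x*, y*) is local minimax for f if there exist delta_0 > 0 and a function h with h(delta) -> 0 as delta -> 0 such that, for every delta in (0, delta_0] and every (x, y) within distance delta of (x*, y*),

```latex
f(x^*, y) \;\le\; f(x^*, y^*) \;\le\; \max_{y' \,:\, \|y' - y^*\| \le h(\delta)} f(x, y').
```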

Deconstructing Generative Adversarial Networks

A novel GAN architecture, termed Cascade GANs, is proposed to provably recover meaningful low-dimensional generator approximations when the real distribution is high-dimensional and corrupted by outliers, and a fundamental trade-off between approximation error and statistical error is demonstrated.

An Introduction to Image Synthesis with Generative Adversarial Nets

A taxonomy of methods used in image synthesis is provided, different models for text-to-image synthesis and image-to-image translation are reviewed, and some evaluation metrics are discussed, as well as possible future research directions in image synthesis with GANs.

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
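
A hedged PyTorch sketch of the mechanism behind that result: InfoGAN adds a variational lower bound on the mutual information I(c; G(z, c)) between a latent code c and the generated sample, estimated by an auxiliary network Q. The module names `generator` and `q_network` are assumptions, not the authors' code.

```python
# Hedged sketch of InfoGAN's extra loss for a categorical code.
import torch
import torch.nn.functional as F

def info_loss(generator, q_network, batch_size, noise_dim, num_codes):
    z = torch.randn(batch_size, noise_dim)               # incompressible noise
    c = torch.randint(num_codes, (batch_size,))          # code c ~ Uniform
    c_onehot = F.one_hot(c, num_codes).float()
    fake = generator(torch.cat([z, c_onehot], dim=1))    # x ~ G(z, c)
    logits = q_network(fake)                             # Q's guess of c from x
    # Cross-entropy is -E[log Q(c|x)]; minimizing it tightens the
    # variational bound (the entropy H(c) is constant and dropped).
    return F.cross_entropy(logits, c)
```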

Segmentation of Multivariate Mixed Data via Lossy Data Coding and Compression

It is shown that a deterministic segmentation is approximately the (asymptotically) optimal solution for compressing mixed data, and the method can be readily applied to segment real imagery and bioinformatic data.
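
A hedged NumPy sketch of the coding-length machinery this result rests on; the constants follow my reading of the paper and may differ in detail from the authors' exact form. The segmentation sought is the one minimizing the total number of bits.

```python
# Hedged sketch: bits to code X (n x m, one sample per column) up to
# mean squared distortion eps^2, plus membership coding overhead.
import numpy as np

def coding_length(X, eps):
    n, m = X.shape
    logdet = np.linalg.slogdet(
        np.eye(n) + (n / (eps**2 * m)) * X @ X.T)[1]
    return (m + n) / 2.0 * logdet / np.log(2.0)           # bits

def segmented_length(X, labels, eps=0.1):
    """Total bits for a given segmentation: per-group data coding plus
    membership coding. The (approximately) optimal segmentation is the
    one minimizing this quantity."""
    m = X.shape[1]
    total = 0.0
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.shape[1]
        total += coding_length(Xc, eps) - mc * np.log2(mc / m)
    return total
```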

Wasserstein GAN, 2017

URL: https://arxiv.org/abs/1701.07875

CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction

This work argues that learning the optimal encoding and decoding mappings can be formulated as a two-player minimax game between the encoder and decoder for the learned representation, drawing inspiration from closed-loop error feedback in control systems.
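
A hedged PyTorch sketch of one round of that game, per the minimax formulation in the title: the encoder f is the maximizing player and the decoder g the minimizing player on a shared utility. `rate_reduction_distance` stands in for the paper's rate-reduction-based measure and, like the other names, is illustrative.

```python
# Hedged sketch of one alternating round of the closed-loop
# transcription game between encoder f and decoder g.
import torch

def ctrl_round(f, g, x, rate_reduction_distance, opt_f, opt_g):
    # Encoder (maximizing player): gradient ascent on the utility
    # between features of the data and features of its transcription.
    opt_f.zero_grad()
    u = rate_reduction_distance(f(x), f(g(f(x))))
    (-u).backward()
    opt_f.step()

    # Decoder (minimizing player): gradient descent on the same utility.
    opt_g.zero_grad()
    u = rate_reduction_distance(f(x), f(g(f(x))))
    u.backward()
    opt_g.step()
```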

PyTorch: An Imperative Style, High-Performance Deep Learning Library

This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
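
A tiny example of the imperative, define-by-run style the paper describes: the computation graph is recorded while ordinary Python executes, and gradients are available immediately afterward.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()       # eager computation, taped on the fly
y.backward()             # reverse-mode autodiff over the taped graph
print(x.grad)            # equals 2 * x
```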

Learning Structured Output Representation using Deep Conditional Generative Models

A deep conditional generative model for structured output prediction using Gaussian latent variables is developed; it is trained efficiently in the framework of stochastic gradient variational Bayes and allows for fast prediction via stochastic feed-forward inference.
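
A hedged PyTorch sketch of the CVAE objective in its stochastic gradient variational Bayes form: reconstruct the output y from (x, z) while keeping the recognition distribution q(z | x, y) close to the conditional prior p(z | x). The module names and the diagonal-Gaussian choices are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cvae_loss(recognition_net, prior_net, decoder, x, y):
    mu_q, logvar_q = recognition_net(x, y)   # q(z | x, y), recognition model
    mu_p, logvar_p = prior_net(x)            # p(z | x), conditional prior
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
    recon = decoder(x, z)                    # p(y | x, z)
    recon_loss = F.mse_loss(recon, y, reduction="sum")
    # Closed-form KL between the two diagonal Gaussians.
    kl = 0.5 * ((logvar_p - logvar_q)
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0).sum()
    return recon_loss + kl
```
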
...