• Corpus ID: 235125543

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

@article{Chan2021ReduNetAW,
  title={ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction},
  author={Kwan Ho Ryan Chan and Yaodong Yu and Chong You and Haozhi Qi and John Wright and Yi Ma},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.10446}
}
This work develops a plausible theoretical framework that interprets modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that, for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding-rate difference between the whole dataset and the average of all the class subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate-reduction objective… 
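To make the rate-reduction objective concrete, the following minimal NumPy sketch computes the coding rate R(Z, eps) = 1/2 log det(I + d/(n eps^2) Z Z^T) and the resulting rate reduction Delta R; the function names are ours, and the features are assumed to be the columns of Z with class assignments in labels.

import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps): bits needed to encode the columns of Z up to precision eps."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) R(Z_j): the coding rate of the whole
    dataset minus the average coding rate of the per-class subsets."""
    n = Z.shape[1]
    r_subsets = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - r_subsets

The iterative gradient ascent scheme mentioned above then updates the features layer by layer in the direction that increases rate_reduction.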
On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
TLDR
A theoretical framework that situates deep networks within a broader picture of intelligence and introduces two fundamental principles, Parsimony and Self-Consistency, believed to be cornerstones for the emergence of intelligence, artificial or natural.
CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction
TLDR
This work argues that the sought optimal encoding and decoding mappings can be formulated as the equilibrium of a two-player minimax game between the encoder and decoder over the learned representation, drawing inspiration from closed-loop error feedback in control systems.
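Read as an algorithm, the game alternates gradient updates for the two players. A generic sketch of one round is below, assuming PyTorch modules f (encoder) and g (decoder) and a hypothetical differentiable utility delta_r built from rate reduction; it illustrates the alternating minimax structure, not the paper's exact objective.

import torch

def ctrl_round(f, g, X, labels, delta_r, opt_f, opt_g):
    # Max player: the encoder ascends a rate-reduction utility comparing
    # the data X with its closed-loop transcription g(f(X)).
    opt_f.zero_grad()
    utility = delta_r(f(X), f(g(f(X))), labels)
    (-utility).backward()
    opt_f.step()
    # Min player: the decoder descends the same utility on a fresh pass.
    opt_g.zero_grad()
    utility = delta_r(f(X), f(g(f(X))), labels)
    utility.backward()
    opt_g.step()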
MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection
TLDR
The proposed method, named MAE-DET, automatically designs efficient detection backbones via the Maximum Entropy Principle without training network parameters, reducing the architecture design cost to nearly zero while delivering state-of-the-art (SOTA) performance.
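The general recipe of entropy-based zero-shot scoring can be illustrated as follows: push random inputs through a randomly initialized, untrained backbone and score the architecture by a Gaussian entropy proxy of its output feature map. The helper below is our illustration of that idea; the exact MAE-DET score differs in its details.

import math
import torch

def entropy_proxy(backbone, in_shape=(1, 3, 224, 224), n_trials=8):
    """Score = mean differential entropy 0.5 * log(2*pi*e*var) of the output
    feature map of an untrained backbone fed with Gaussian noise."""
    scores = []
    with torch.no_grad():
        for _ in range(n_trials):
            feats = backbone(torch.randn(in_shape))
            var = feats.float().var().clamp_min(1e-12).item()
            scores.append(0.5 * math.log(2 * math.pi * math.e * var))
    return sum(scores) / len(scores)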
Bridging Model-based Safety and Model-free Reinforcement Learning through System Identification of Low Dimensional Linear Models
TLDR
This paper proposes a new method to combine model-based safety with model-free reinforcement learning by explicitly identifying a low-dimensional linear model of the system controlled by an RL policy and applying stability and safety guarantees to that simple model.
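The system-identification step admits a simple illustration: fit a discrete-time linear model x_{t+1} ≈ A x_t + B u_t to a rollout by ordinary least squares, then run the safety analysis on (A, B). The sketch below is generic (names are ours, not the paper's code).

import numpy as np

def fit_linear_model(X, U):
    """X: (T+1, n) state trajectory; U: (T, m) inputs. Returns (A, B) such
    that X[t+1] ~= A @ X[t] + B @ U[t] in the least-squares sense."""
    Phi = np.hstack([X[:-1], U])              # regressors [x_t, u_t]
    Theta, *_ = np.linalg.lstsq(Phi, X[1:], rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T           # A: (n, n), B: (n, m)

Stability or safety certificates (e.g., checking the spectral radius of the closed-loop matrix) can then be applied to this simple model.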
Robust Training under Label Noise by Over-parameterization
TLDR
This work proposes a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of training labels are corrupted, and demonstrates state-of-the-art test accuracy against label noise on a variety of real datasets.
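The core idea can be sketched in a few lines: over-parameterize the label noise with per-sample variables s_i = u_i^2 - v_i^2 added to the prediction, and fit everything jointly by gradient descent, whose implicit bias keeps s sparse so that clean labels dominate. The training step below is a hedged illustration (variable names and the plain gradient update on u, v are ours, not the paper's exact recipe).

import torch

def robust_step(model, opt, x, y_onehot, idx, u, v, lr_s=0.1):
    pred = model(x)                            # (batch, n_classes)
    s = u[idx] ** 2 - v[idx] ** 2              # per-sample noise estimate
    loss = ((pred + s - y_onehot) ** 2).mean() # fit data plus noise term
    opt.zero_grad()
    loss.backward()
    opt.step()                                 # update network parameters
    with torch.no_grad():                      # plain gradient step on (u, v)
        u -= lr_s * u.grad
        v -= lr_s * v.grad
    u.grad = None
    v.grad = None

Here u and v would be leaf tensors of shape (n_samples, n_classes), initialized to small nonzero values (e.g., 1e-3 * torch.randn(...)) with requires_grad=True.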
Incremental Learning of Structured Memory via Closed-Loop Transcription
TLDR
Experimental results show that the proposed minimal computational model, which learns a structured memory of multiple object classes in an incremental setting, effectively alleviates catastrophic forgetting and achieves significantly better performance than prior work for both generative and discriminative purposes.
Discovering Invariant Rationales for Graph Neural Networks
TLDR
This work proposes a new strategy, Discovering Invariant Rationales (DIR), for constructing intrinsically interpretable GNNs, and demonstrates the superiority of DIR over leading baselines in both interpretability and generalization on graph classification.
Evaluation and Comparison of Deep Learning Methods for Pavement Crack Identification with Visual Images
  • Kai Lu
  • Computer Science
    ArXiv
  • 2021
TLDR
A weakly supervised learning framework combining transfer learning with semi-supervised GANs (TL-SSGAN), together with performance-enhancement measures, is proposed; it maintains crack-identification performance comparable to fully supervised learning while greatly reducing the number of labeled samples needed.
Neurashed: A Phenomenological Model for Imitating Deep Learning Training
TLDR
It is argued that a future deep learning theory should inherit three characteristics: a hierarchically structured network architecture, parameters iteratively optimized using stochastic gradient-based methods, and information from the data that evolves compressively.
Adaptive Projected Residual Networks for Learning Parametric Maps from Sparse Data
TLDR
A universal approximation property of the proposed adaptive projected ResNet framework is proved, motivating a related iterative algorithm for constructing the ResNet.
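The overall construction can be pictured as residual dynamics in a reduced coordinate system. The sketch below is our illustration under assumed names: project the input onto a low-dimensional subspace, apply residual layers there, and lift back to the output space; the adaptive part (growing the network as data arrives) is omitted.

import numpy as np

def projected_resnet(x, V_in, V_out, weights, biases, step=1.0):
    """V_in: (d_in, r) input basis; V_out: (d_out, r) output basis;
    weights/biases: per-layer (r, r) matrices and (r,) vectors."""
    z = V_in.T @ x                            # reduce to r coordinates
    for W, b in zip(weights, biases):
        z = z + step * np.tanh(W @ z + b)     # residual update in reduced space
    return V_out @ z                          # lift to the output space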
...