Corpus ID: 235125543

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

@article{Chan2021ReduNetAW,
  title={ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction},
  author={Kwan Ho Ryan Chan and Yaodong Yu and Chong You and Haozhi Qi and John Wright and Yi Ma},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.10446}
}
This work develops a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding rate difference between the whole dataset and the average of all the subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate reduction objective…
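The coding rate difference the abstract refers to is the rate-reduction (MCR²) objective: the rate needed to code all features up to a distortion ε, minus the sample-weighted average rate of the per-class subsets. A minimal NumPy sketch of that quantity (function names and the default ε are illustrative choices, not from the entry itself):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z): rate to code the columns of Z (d x n features) up to distortion eps,
    via 0.5 * logdet(I + (d / (n * eps^2)) * Z Z^T)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R: coding rate of the whole dataset minus the average coding
    rate of the class subsets (weighted by subset size)."""
    _, n = Z.shape
    avg = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]          # features belonging to class c
        avg += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return coding_rate(Z, eps) - avg
```

With all samples in one class the difference is zero by construction, and it grows as the class subsets occupy increasingly incoherent (e.g. orthogonal) subspaces, which is the sense in which maximizing it yields a discriminative representation.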
How Powerful is Graph Convolution for Recommendation?
  • Yifei Shen, Yongji Wu, +4 authors Dongsheng Li
  • Computer Science, Engineering
  • ArXiv
  • 2021
Graph convolutional networks (GCNs) have recently enabled a popular class of algorithms for collaborative filtering (CF). Nevertheless, the theoretical underpinnings of their empirical successes…
Learning Structures for Deep Neural Networks
This paper proposes to adopt the efficient coding principle, shows that sparse coding can effectively maximize the entropy of the output signals, and designs an algorithm based on global group sparse coding to automatically learn the inter-layer connections and determine the depth of a neural network.
Panoramic Learning with A Standardized Machine Learning Formalism
Machine Learning (ML) is about computational methods that enable machines to learn concepts from experiences. In handling a wide variety of experiences ranging from data instances, knowledge,…