Corpus ID: 165163931

Doctor of Crosswise: Reducing Over-parametrization in Neural Networks

@article{Curt2019DoctorOC,
  title={Doctor of Crosswise: Reducing Over-parametrization in Neural Networks},
  author={Joachim de Curt{\'o} and Irene C. Zarza},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.10324}
}
Doctor of Crosswise proposes a new architecture to reduce over-parametrization in Neural Networks. It introduces an operand for rapid computation in the framework of Deep Learning that leverages learned weights. The formalism is described in detail, providing both an accurate elucidation of the mechanics and the theoretical implications.
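
The operand itself is not spelled out in this abstract, so the following is only a hedged sketch of the general idea of removing redundant parameters from a dense layer; the low-rank factorization below is an assumption chosen for illustration, not the construction from the paper (Python/NumPy).

# Hypothetical sketch: replace a dense d_in x d_out weight matrix with a rank-r
# factorization, one standard way to cut parameters in an over-parametrized layer.
# This is an illustrative assumption, not the operand defined in the paper.
import numpy as np

def dense_layer(x, W, b):
    # Standard fully connected layer: d_in * d_out + d_out parameters.
    return x @ W + b

def factorized_layer(x, U, V, b):
    # Rank-r factorization W ~= U @ V: r * (d_in + d_out) + d_out parameters.
    return (x @ U) @ V + b

d_in, d_out, r = 1024, 1024, 32
rng = np.random.default_rng(0)
x = rng.normal(size=(8, d_in))

W = rng.normal(size=(d_in, d_out))   # ~1.05M parameters
U = rng.normal(size=(d_in, r))       # 32K parameters
V = rng.normal(size=(r, d_out))      # 32K parameters
b = np.zeros(d_out)

print(dense_layer(x, W, b).shape, factorized_layer(x, U, V, b).shape)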

References

Showing 1-10 of 38 references

Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

An emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning, is reviewed, together with new results, open problems, and conjectures.

Progressive Growing of GANs for Improved Quality, Stability, and Variation

A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
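
A minimal sketch of the progressive-growing idea described above, assuming PyTorch; the layer choices and hyperparameters are invented, and the smooth fade-in the method applies when a new block is added is omitted.

# Hypothetical sketch: start a generator at 4x4 output and append an upsampling
# block each time the target resolution doubles.
import torch
import torch.nn as nn

def up_block(c_in, c_out):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2),
    )

class GrowingGenerator(nn.Module):
    def __init__(self, latent_dim=128, base_channels=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, base_channels, kernel_size=4),  # 1x1 -> 4x4
            nn.LeakyReLU(0.2),
        )
        self.blocks = nn.ModuleList()
        self.channels = base_channels
        self.to_rgb = nn.Conv2d(base_channels, 3, kernel_size=1)

    def grow(self):
        # Double the output resolution by appending a block and a fresh to_rgb head.
        self.blocks.append(up_block(self.channels, self.channels))
        self.to_rgb = nn.Conv2d(self.channels, 3, kernel_size=1)

    def forward(self, z):
        h = self.stem(z.view(z.size(0), -1, 1, 1))
        for block in self.blocks:
            h = block(h)
        return self.to_rgb(h)

g = GrowingGenerator()
z = torch.randn(2, 128)
print(g(z).shape)    # 4x4 images
g.grow(); g.grow()
print(g(z).shape)    # 16x16 images after two growth steps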

Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization

This work presents a system for training deep neural networks for object detection using synthetic images that relies upon the technique of domain randomization, in which the parameters of the simulator are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest.
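
A hedged sketch of domain randomization as summarized above; the simulator parameters and ranges below are invented for illustration, not those used in the work.

# Hypothetical sketch: sample deliberately non-realistic simulator settings per
# training scene so the detector cannot rely on any one rendering style.
import random

def sample_scene_params():
    return {
        "light_intensity": random.uniform(0.1, 5.0),        # deliberately extreme range
        "light_position":  [random.uniform(-3, 3) for _ in range(3)],
        "texture_id":      random.randrange(10_000),          # random, non-realistic textures
        "camera_fov_deg":  random.uniform(40, 100),
        "num_distractors": random.randint(0, 20),
        "object_pose": {
            "xyz": [random.uniform(-1, 1) for _ in range(3)],
            "rpy": [random.uniform(-3.14, 3.14) for _ in range(3)],
        },
    }

# One randomized configuration per synthetic training image.
for _ in range(3):
    print(sample_scene_params())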

Improved Techniques for Training GANs

This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

AdaNet: Adaptive Structural Learning of Artificial Neural Networks

The results demonstrate that the AdaNet algorithm can automatically learn network structures whose accuracy is very competitive with that of neural networks found by standard approaches.

A la Carte - Learning Fast Kernels

This work introduces a family of fast, flexible, lightly parametrized and general purpose kernel learning methods, derived from Fastfood basis function expansions, and provides mechanisms to learn the properties of groups of spectral frequencies in these expansions.
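
A rough sketch of a Fastfood-style basis function expansion, assuming NumPy; the single block and fixed bandwidth below are simplifications, and the learned per-group spectral properties that this work adds are not shown.

import numpy as np

def hadamard_transform(x):
    # Fast Walsh-Hadamard transform along the last axis
    # (the length of that axis must be a power of two).
    x = x.copy()
    d = x.shape[-1]
    h = 1
    while h < d:
        y = x.reshape(*x.shape[:-1], -1, 2 * h)
        a = y[..., :h] + y[..., h:]
        b = y[..., :h] - y[..., h:]
        y[..., :h] = a
        y[..., h:] = b
        h *= 2
    return x

def fastfood_features(X, sigma=1.0, seed=0):
    # Single Fastfood block (sign flips, Hadamard, permutation, Gaussian scaling,
    # Hadamard, length correction), followed by cos/sin features.
    n, d = X.shape                      # d must be a power of two for this sketch
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], size=d)                         # random sign flips
    P = rng.permutation(d)                                      # random permutation
    G = rng.normal(size=d)                                      # Gaussian scaling
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)   # row-length correction
    V = hadamard_transform(X * B)[:, P] * G
    V = hadamard_transform(V) * S / (sigma * np.sqrt(d))
    return np.concatenate([np.cos(V), np.sin(V)], axis=1) / np.sqrt(d)

X = np.random.default_rng(1).normal(size=(5, 16))
print(fastfood_features(X).shape)    # (5, 32)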

High-Resolution Deep Convolutional Generative Adversarial Networks

A new layered network, HDCGAN, is proposed that incorporates current state-of-the-art techniques for network convergence of DCGAN (Deep Convolutional Generative Adversarial Networks) and achieves convincing high-resolution results.

McKernel: Approximate Kernel Expansions in Log-linear Time through Randomization

McKernel establishes the foundation of a new architecture of learning that makes it possible to obtain large-scale non-linear classification by combining fast kernel expansions with a linear classifier.
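
An illustrative Python analogue of that pipeline, not McKernel's actual interface: random Fourier features approximating an RBF kernel, followed by a plain linear classifier.

# Approximate kernel expansion plus linear classifier on a toy dataset.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)

D, sigma = 512, 0.7                          # number of random features, kernel width
W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)     # random Fourier feature map

clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("train accuracy:", clf.score(Z, y))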

When and Why Are Deep Networks Better Than Shallow Ones?

A theorem is proved that settles an old conjecture by Bengio on the role of depth in networks, characterizing precisely the conditions under which it holds, and suggests possible answers to the puzzle of why high-dimensional deep networks trained on large training sets often do not seem to overfit.

AutoAugment: Learning Augmentation Policies from Data

This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
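
A hedged sketch of applying one AutoAugment-style sub-policy; the operations, probabilities, and magnitudes below are invented for illustration and are not a policy from the paper.

# Each sub-policy is a short list of (operation, probability, magnitude) triples
# applied in order to an image.
import random
from PIL import Image, ImageEnhance, ImageOps

def apply_sub_policy(img, sub_policy):
    for op, prob, magnitude in sub_policy:
        if random.random() < prob:
            img = op(img, magnitude)
    return img

sub_policy = [
    (lambda im, m: ImageOps.solarize(im, int(m)),         0.9, 128),
    (lambda im, m: ImageOps.posterize(im, int(m)),         0.6, 4),
    (lambda im, m: ImageEnhance.Contrast(im).enhance(m),   0.5, 1.5),
]

img = Image.new("RGB", (32, 32), color=(120, 60, 200))  # stand-in for a CIFAR-10 image
augmented = apply_sub_policy(img, sub_policy)
print(augmented.size)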