• Corpus ID: 35334175

Multilayer Perceptron Algebra

  • Zhao Peng
  • Published 18 January 2017
  • Computer Science
  • ArXiv
Artificial Neural Networks (ANNs) have been phenomenally successful on various pattern recognition tasks. However, the design of neural networks relies heavily on the experience and intuition of individual developers. In this article, the author introduces a mathematical structure called MLP algebra on the set of all Multilayer Perceptron neural networks (MLPs), which can serve as a guiding principle for building MLPs tailored to particular data sets, and for building complex MLPs from simpler ones.
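The paper's formal definitions are not reproduced in this abstract, but the idea of building complex MLPs from simpler ones can be sketched with two hypothetical composition operators over architectures, here represented only as layer-width lists. The names `serial` and `parallel` are illustrative assumptions, not the paper's notation:

```python
# Hypothetical sketch of combining MLP architectures, loosely inspired
# by the idea of an algebra over MLPs. A network is represented here
# only by its layer widths, e.g. [4, 8, 2] = 4 inputs, 8 hidden, 2 outputs.

def serial(f, g):
    """Feed f's output into g; the interface widths must match."""
    assert f[-1] == g[0], "output width of f must match input width of g"
    return f + g[1:]

def parallel(f, g):
    """Run f and g side by side on a shared input of equal depth,
    concatenating the units of each corresponding layer."""
    assert f[0] == g[0], "both networks must accept the same input width"
    assert len(f) == len(g), "this simple sketch assumes equal depth"
    return [f[0]] + [a + b for a, b in zip(f[1:], g[1:])]

encoder = [4, 8, 3]
decoder = [3, 8, 4]
autoencoder = serial(encoder, decoder)
wide_net = parallel([4, 8, 2], [4, 6, 2])
```

Under this toy representation, serial composition chains architectures end to end, while parallel combination widens each layer, which mirrors the two natural ways of growing a network from smaller pieces.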

Deep learning in electron microscopy

This review paper offers a practical perspective for developers with limited familiarity with deep learning in electron microscopy, discussing the hardware and software needed to get started with deep learning and to interface with electron microscopes.

Convolutional Neural Network for Centrality Determination in Fixed Target Experiments

Fixed-target experiments have a unique possibility to measure the centrality of colliding systems with hadronic calorimeters on the beam line. This is usually achieved by the detection of all forward



Neural Networks and Deep Learning

  • C. Aggarwal
  • Computer Science
    Springer International Publishing
  • 2018
This chapter explains a fast algorithm for computing such gradients, known as backpropagation, which is a direct application of the chain rule of differential calculus.
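The chain-rule computation at the heart of backpropagation can be sketched for a single sigmoid unit with squared-error loss; this is an illustrative minimal case, not the chapter's full multilayer treatment:

```python
import math

# Minimal backpropagation sketch: one sigmoid unit with squared-error
# loss L = (sigmoid(w*x + b) - y)^2. The backward pass applies the
# chain rule from the loss down to the parameters.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop(w, b, x, y):
    # Forward pass
    z = w * x + b
    a = sigmoid(z)
    # Backward pass (chain rule)
    dL_da = 2.0 * (a - y)
    da_dz = a * (1.0 - a)        # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    dL_dz = dL_da * da_dz
    return dL_dz * x, dL_dz      # (dL/dw, dL/db)
```

Stepping the parameters against these gradients reduces the loss, which is the basis of gradient-descent training.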

Big Neural Networks Waste Capacity

The experiments on ImageNet LSVRC-2010 show that this may be due to the fact that there are highly diminishing returns for capacity in terms of training error, leading to underfitting and suggesting that the optimization method, first-order gradient descent, fails in this regime.

Approximation by superpositions of a sigmoidal function

  • G. Cybenko
  • Computer Science
    Math. Control. Signals Syst.
  • 1989
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables.
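The intuition behind such superpositions can be illustrated numerically (this is an illustrative sketch, not Cybenko's actual construction): a steep sigmoid approximates a step function, so a weighted sum of shifted sigmoids can trace any continuous function on [0, 1] as a staircase:

```python
import math

# Illustrative sketch of approximation by superpositions of a sigmoid:
# G(x) = f(0) + sum_j alpha_j * sigma(k * (x - t_j)), where each steep
# sigmoid acts as a step of height alpha_j located at t_j.

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def superposition(f, x, n=100, k=200.0):
    """Approximate f(x) on [0, 1] with n shifted, steep sigmoids."""
    total = f(0.0)
    for j in range(1, n + 1):
        t = (j - 0.5) / n                   # step location
        alpha = f(j / n) - f((j - 1) / n)   # step height
        total += alpha * sigma(k * (x - t))
    return total
```

Increasing n (more steps) and k (steeper sigmoids) tightens the staircase fit, which mirrors how the theorem's finite linear combinations achieve uniform approximation.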

Deep Learning

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.

Do Deep Nets Really Need to be Deep?

This paper empirically demonstrates that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models.

Do Deep Convolutional Nets Really Need to be Deep and Convolutional?

Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation.