Semantic Scholar uses AI to extract papers important to this topic.
- Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain…
- We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed…
- The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity…
- A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent…
- A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework…
- The authors propose a theoretical framework for backpropagation (BP) in order to identify some of its limitations as a general…
- Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The history…
- The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper…
- The author presents a survey of the basic theory of the backpropagation neural network architecture covering architectural design…
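The RPROP abstract above is truncated, so here is a rough sketch of the algorithm it names: the Rprop- variant (no weight backtracking), which adapts a per-weight step size from the sign of successive gradients rather than from the gradient magnitude. The function name, the toy quadratic objective, and the loop are illustrative choices, not taken from the cited paper; the constants (increase factor 1.2, decrease factor 0.5) are the commonly used defaults.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop- update: adapt each weight's step size from gradient signs."""
    sign_change = grad * prev_grad
    # Same sign as last time: gradient direction is stable, grow the step.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flipped: we overshot a minimum along this axis, shrink the step.
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Rprop- zeroes the gradient on a sign flip, so that weight skips this update
    # and the next iteration is not treated as another sign change.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Move each weight by its own step size, using only the gradient's sign.
    w = w - np.sign(grad) * step
    return w, grad, step

# Usage: minimise f(w) = sum((w - 3)^2), whose gradient is 2 * (w - 3).
w = np.array([10.0, -5.0])
prev_grad = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
```

Because updates depend only on gradient signs, Rprop is insensitive to the gradient's scale along each axis, which is the property the abstract's "To overcome the inherent…" clause alludes to.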