Efficient mapping of backpropagation algorithm onto a network of workstations
@article{Sudhakar1998EfficientMO,
  title={Efficient mapping of backpropagation algorithm onto a network of workstations},
  author={V. Sudhakar and Chebiyyam Sivaram Murthy},
  journal={IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)},
  year={1998},
  volume={28},
  number={6},
  pages={841--848}
}

In this paper, we present an efficient technique for mapping a backpropagation (BP) learning algorithm for multilayered neural networks onto a network of workstations (NOW's). We present a fully distributed version of the BP algorithm along with its speedup analysis. We compare the performance of our algorithm with a recent work that uses the vertical partitioning approach to map the BP algorithm onto a distributed-memory multiprocessor. Our results on SUN 3/50 NOW's show that we are able to…
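The abstract mentions a speedup analysis for the distributed BP algorithm. As a hypothetical illustration only (the cost model, function name, and parameters below are assumptions for exposition, not taken from the paper), a per-epoch speedup estimate for p workstations that split the training patterns and exchange weight updates might be sketched as:

```python
# Hypothetical cost model, NOT the paper's actual analysis: each of p
# workstations computes gradients over 1/p of the training patterns,
# then a weight-update exchange adds a communication cost that grows
# linearly with p.

def estimated_speedup(t_compute: float, t_comm_per_proc: float, p: int) -> float:
    """Estimated speedup of p workstations over a single workstation.

    t_compute       -- serial per-epoch computation time
    t_comm_per_proc -- communication overhead added per workstation
    p               -- number of workstations
    """
    t_parallel = t_compute / p + t_comm_per_proc * p
    return t_compute / t_parallel

if __name__ == "__main__":
    # With zero communication cost the speedup is ideal (equal to p);
    # any communication overhead pulls it below p.
    print(estimated_speedup(100.0, 0.0, 4))  # 4.0
    print(estimated_speedup(100.0, 1.0, 4))  # below 4.0
```

Under this kind of model, speedup peaks at some finite p and then degrades as communication dominates, which is the usual trade-off such analyses quantify.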
18 Citations
Parallel implementation of back-propagation algorithm in networks of workstations
- Computer Science · IEEE Transactions on Parallel and Distributed Systems
- 2005
The analytical and experimental performance shows that the proposed parallel algorithm has a better speedup, less communication time, and a better space-reduction factor than the earlier algorithm.
A scalable parallel algorithm for training a hierarchical mixture of neural experts
- Computer Science · Parallel Comput.
- 2002
On the Performance of Parallel Neural Network Implementations on Distributed Memory Architectures
- Computer Science · 2008 Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGRID)
- 2008
The impact of multiprocessor memory systems, in particular distributed memory (DM) and virtual shared memory (VSM), on the implementation of parallel backpropagation neural network algorithms is studied, along with how to allow the parallel neural network to choose the optimum number of processors dynamically.
Parallel implementation of multilayered neural networks based on Map-Reduce on cloud computing clusters
- Computer Science · Soft Comput.
- 2016
Experimental results demonstrate that the parallel BP algorithm proposed in this paper has a better speedup, a faster convergence rate, and fewer iterations than the existing algorithms.
On the Performance of Parallel Backpropagation Neural Network Implementations Using CUDA
- Computer Science
- 2017
A comparison of the running times taken on the GPU and on a conventional CPU to train a back-propagation neural network; the results confirm the speedup gained by tapping the resources of the GPU.
A Theoretical Framework for Parallel Implementation of Deep Higher Order Neural Networks
- Computer Science
- 2016
A new partitioning approach is presented for mapping HONNs to individual computers within a master-slave distributed system (a local area network), and a new learning algorithm is developed so that it can be used for HONN learning in a distributed-system environment.
Computational Grid vs. Parallel Computer for Coarse-Grain Parallelization of Neural Networks Training
- Computer Science · OTM Workshops
- 2005
The development of a coarse-grain parallel algorithm for artificial neural network training, with dynamic mapping onto the processors of a parallel computer system, is considered; experiments show better efficiency for a computational grid than for a parallel computer under an efficiency/price criterion.
Training Set Parallelism in PAHRA Architecture
- Computer Science
- 2007
The Parallel Hybrid Ring Architecture (PAHRA), which is described in this article, provides a flexible platform for the simulation of multilayered feed-forward neural networks trained with the back-propagation algorithm.
Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware
- Computer Science
- 2010
The research investigates how to model large-scale neural networks efficiently on such a parallel machine and shows the feasibility of the approach, as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks.
Modeling of feedforward neural network in PAHRA architecture
- Computer Science
- 2009
The parallel architecture described in this article provides a flexible platform for the simulation of multilayered feedforward neural networks trained with the back-propagation algorithm, along with a mathematical tool for verifying system performance.
References
SHOWING 1-10 OF 53 REFERENCES
Parallel simulation of multilayered neural networks on distributed-memory multiprocessors
- Computer Science
- 1990
The backpropagation algorithm on grid and hypercube architectures
- Computer Science · Parallel Comput.
- 1990
Multilayer Neural Networks on Distributed-Memory Multiprocessors
- Computer Science
- 1990
The p-processor speedup of the backpropagation algorithm over a single processor is analyzed theoretically for some popular processor-interconnection topologies; the analysis can be used as a basis for determining the most cost-effective or optimal number of processors.
A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures
- Computer Science · IEEE Trans. Parallel Distributed Syst.
- 1994
A new technique for mapping the backpropagation algorithm onto hypercube and related architectures using a network-partitioning scheme called checkerboarding, which can be combined with the pattern-partitioning technique to form a hybrid scheme that performs better than either scheme alone.
Neural network simulation on a reduced-mesh-of-trees organization
- Computer Science · Other Conferences
- 1990
This work shows how to simulate ANN's on an SIMD architecture, the Reduced Mesh of Trees (RMOT), which has p PE's and n² memory cells arranged in a p × p array of modules (p is a constant less than or equal to n).
Efficient Mapping of Neural Networks on Multicomputers
- Computer Science
- 2000
In this paper, an efficient mapping of multilayer artificial neural networks onto multicomputers is formulated and analyzed, and a simplified algorithm with negligible error is developed and analyzed.
Mapping Neural Networks onto Message-Passing Multicomputers
- Computer Science · J. Parallel Distributed Comput.
- 1989
Network Learning on the Connection Machine
- Computer Science · IJCAI
- 1987
The first implementation of a connectionist learning algorithm, error back-propagation, on a fine-grained parallel computer, the Connection Machine, is discussed; the major impediment to further speedup is found to be the communication between processors, and not processor speed per se.
Implementing Neural Network Models on Parallel Computers
- Computer Science · Comput. J.
- 1987
This work reviews the implementation of a range of neural network models on SIMD and MIMD computers, and describes the strategies which have been used to implement the Durbin and Willshaw elastic net model on the Computing Surface.
Implementation of Multilayer Neural Networks on Parallel Programmable Digital Computers
- Computer Science
- 1991
A method of implementing neural networks on parallel, programmable computers that can effectively address the computational requirements of such signal-processing applications; the method is applicable to multilayer connectionist networks and two-dimensional SIMD processor arrays.