Efficient mapping of backpropagation algorithm onto a network of workstations

@article{Sudhakar1998EfficientMO,
  title={Efficient mapping of backpropagation algorithm onto a network of workstations},
  author={V. Sudhakar and Chebiyyam Sivaram Murthy},
  journal={IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)},
  year={1998},
  volume={28},
  number={6},
  pages={841--848}
}
  • V. Sudhakar, C. Murthy
  • Published 1 December 1998
  • Computer Science
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
In this paper, we present an efficient technique for mapping a backpropagation (BP) learning algorithm for multilayered neural networks onto a network of workstations (NOWs). We present a fully distributed version of the BP algorithm and also its speedup analysis. We compare the performance of our algorithm with a recent work involving the vertical partitioning approach for mapping the BP algorithm onto a distributed memory multiprocessor. Our results on SUN 3/50 NOWs show that we are able to…
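The abstract names a fully distributed BP algorithm but the truncated text elides its details. As a rough illustration of the general idea of distributing BP training across workstations, the following is a minimal sketch of training-set (pattern) partitioning with gradient summation for a one-hidden-layer network; the sharding scheme, network shape, and `NUM_WORKERS` are illustrative assumptions, not the authors' actual mapping.

```python
# Minimal sketch (not the paper's algorithm): pattern-partitioned BP for a
# one-hidden-layer network. Each simulated "workstation" computes gradients
# on its shard of the training set; the shard gradients are summed (as an
# all-reduce would do on a NOW) and every worker applies the same update.
import numpy as np

rng = np.random.default_rng(0)
NUM_WORKERS = 4          # assumed worker count, for illustration only
N, D_IN, D_HID, D_OUT = 256, 8, 16, 1

X = rng.normal(size=(N, D_IN))
y = rng.normal(size=(N, D_OUT))
W1 = rng.normal(scale=0.1, size=(D_IN, D_HID))
W2 = rng.normal(scale=0.1, size=(D_HID, D_OUT))

def local_gradients(Xs, ys, W1, W2):
    """Forward and backward pass (MSE loss, tanh hidden layer) on one shard."""
    h = np.tanh(Xs @ W1)
    out = h @ W2
    err = out - ys
    gW2 = h.T @ err
    gW1 = Xs.T @ ((err @ W2.T) * (1.0 - h**2))
    return gW1, gW2

lr = 1e-3 / N
shards = np.array_split(np.arange(N), NUM_WORKERS)  # fixed pattern shards
for epoch in range(100):
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for idx in shards:                  # on a real NOW these run in parallel
        g1, g2 = local_gradients(X[idx], y[idx], W1, W2)
        gW1 += g1                       # stands in for an all-reduce sum
        gW2 += g2
    W1 -= lr * gW1
    W2 -= lr * gW2
```

Summing shard gradients is mathematically equivalent to batch BP on the full training set, so the distributed and serial versions take identical steps; the speedup question a paper like this analyzes is whether communicating the gradients costs less than the computation it saves.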
Parallel implementation of back-propagation algorithm in networks of workstations
TLDR
Analytical and experimental performance results show that the proposed parallel algorithm has better speed-up, less communication time, and a better space-reduction factor than the earlier algorithm.
On the Performance of Parallel Neural Network Implementations on Distributed Memory Architectures
TLDR
Studies the impact of multiprocessor memory systems, in particular distributed memory (DM) and virtual shared memory (VSM), on the implementation of parallel backpropagation neural network algorithms, and how to let the parallel neural network choose the optimum number of processors dynamically.
Parallel implementation of multilayered neural networks based on Map-Reduce on cloud computing clusters
TLDR
Experimental results demonstrate that the proposed parallel BP algorithm achieves better speedup, a faster convergence rate, and fewer iterations than existing algorithms.
On the Performance of Parallel Backpropagation Neural Network Implementations Using CUDA
TLDR
Compares the running times of back-propagation neural network training on the GPU and on a conventional CPU; the results confirm the speed-up gained by tapping the resources of the GPU.
A Theoretical Framework for Parallel Implementation of Deep Higher Order Neural Networks
TLDR
Develops a new partitioning approach for mapping HONNs to individual computers within a master-slave distributed system (a local area network), together with a new learning algorithm for HONN learning in a distributed system environment.
Computational Grid vs. Parallel Computer for Coarse-Grain Parallelization of Neural Networks Training
TLDR
Considers the development of a coarse-grain parallel algorithm for artificial neural network training with dynamic mapping onto the processors of a parallel computer system; experiments show better efficiency for a computational grid than for a parallel computer under an efficiency/price criterion.
Training Set Parallelism in PAHRA Architecture
TLDR
The Parallel Hybrid Ring Architecture (PAHRA), described in this article, provides a flexible platform for simulating multilayered feed-forward neural networks trained with the back-propagation algorithm.
Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware
TLDR
The research investigates how to model large-scale neural networks efficiently on such a parallel machine and shows the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks.
Modeling of feedforward neural network in PAHRA architecture
TLDR
The parallel architecture described in this article provides a flexible platform for simulating multilayered feedforward neural networks trained with the back-propagation algorithm, along with a mathematical tool for verifying system performance.

References

Showing 1-10 of 53 references
Parallel simulation of multilayered neural networks on distributed-memory multiprocessors
The backpropagation algorithm on grid and hypercube architectures
Multilayer Neural Networks on Distributed-Memory Multiprocessors
TLDR
The p-processor speed-up of the backpropagation algorithm over a single processor is analyzed theoretically for several popular processor interconnection topologies; the analysis can be used as a basis for determining the most cost-effective or optimal number of processors.
A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures
TLDR
Presents a new technique for mapping the backpropagation algorithm onto hypercube and related architectures using a network-partitioning scheme called checkerboarding, which can be combined with the pattern-partitioning technique to form a hybrid scheme that performs better than either scheme alone (a rough sketch of the checkerboard layout follows).
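As a rough sketch of the checkerboard idea (the grid size and block layout here are illustrative assumptions, not the paper's exact formulation): each layer's weight matrix is tiled across a p × p processor grid, each processor multiplies its input slice by its block, and blocks in the same grid column are summed to form the output slice.

```python
# Minimal sketch (assumptions, not the paper's formulation): checkerboard
# partitioning of one weight matrix W across a p x p processor grid.
# Processor (i, j) stores block W[row slice i, column slice j]; in the
# forward pass it multiplies its input slice by its block, and blocks in
# the same grid column are summed (a column-wise reduce).
import numpy as np

p = 2                                    # assumed 2 x 2 grid, for illustration
n_in, n_out = 8, 6
rng = np.random.default_rng(1)
W = rng.normal(size=(n_in, n_out))
x = rng.normal(size=n_in)

row_slices = np.array_split(np.arange(n_in), p)
col_slices = np.array_split(np.arange(n_out), p)

y = np.zeros(n_out)
for i in range(p):
    for j in range(p):                   # each (i, j) is one processor's work
        block = W[np.ix_(row_slices[i], col_slices[j])]
        y[col_slices[j]] += x[row_slices[i]] @ block   # column-wise reduce

assert np.allclose(y, x @ W)             # matches the unpartitioned product
```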
Neural network simulation on a reduced-mesh-of-trees organization
TLDR
This work shows how to simulate ANNs on an SIMD architecture, the Reduced Mesh of Trees (RMOT), which has p PEs and n² memory arranged in a p × p array of modules (p is a constant less than or equal to n).
Efficient Mapping of Neural Networks on Multicomputers
TLDR
An efficient mapping of multilayer artificial neural networks onto multicomputers is formulated and analyzed, and a simplified algorithm with negligible error is developed.
Mapping Neural Networks onto Message-Passing Multicomputers
Network Learning on the Connection Machine
TLDR
Discusses the first implementation of a connectionist learning algorithm, error back-propagation, on a fine-grained parallel computer, the Connection Machine, finding the major impediment to further speed-up to be communication between processors rather than processor speed per se.
Implementing Neural Network Models on Parallel Computers
TLDR
This work reviews the implementation of a range of neural network models on SIMD and MIMD computers, and describes the strategies which have been used to implement the Durbin and Willshaw elastic net model on the Computing Surface.
Implementation of Multilayer Neural Networks on Parallel Programmable Digital Computers
TLDR
A method of implementing neural networks on parallel, programmable computers that can effectively address the computational requirements of signal processing applications; it is applicable to multilayer connectionist networks and two-dimensional SIMD processor arrays.