Corpus ID: 18359190

Parallel Computing: Performance Metrics and Models

@inproceedings{Sahni1995ParallelCP,
  title={Parallel Computing: Performance Metrics and Models},
  author={S. Sahni and V. Thanvantri},
  year={1995}
}
We review the many performance metrics that have been proposed for parallel systems (i.e., program–architecture combinations). These include the many variants of speedup, efficiency, and isoefficiency. We give reasons why none of these metrics should be used independent of the run time of the parallel system. The run time remains the dominant metric, and the remaining metrics are important only to the extent that they favor systems with better run time. We also lay out the minimum requirements that a…
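The metrics the abstract names are all derived from measured run times, which is its central point: run time remains the dominant quantity. A minimal sketch (not from the paper; the run-time values are assumed for illustration):

```python
# Minimal sketch: speedup and efficiency are both functions of the
# measured run times T(1) and T(p), supporting the abstract's claim
# that run time remains the dominant metric. (Isoefficiency then asks
# how problem size must grow with p to hold efficiency constant.)

def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E(p) = S(p) / p."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical run times for one program-architecture combination:
t1, t8 = 64.0, 10.0   # seconds on 1 processor and on 8 processors
print(speedup(t1, t8))        # 6.4
print(efficiency(t1, t8, 8))  # 0.8
```

Both numbers are meaningful only alongside the run times they were computed from, which is the paper's argument.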
Citations

Parallel Computing in Java
The analysis of the results suggests that Java can achieve performance close to that of natively compiled languages, both for sequential and parallel applications, thus making it a viable alternative for HPC programming.
The relation of scalability and execution time
  • Xian-he Sun
  • Computer Science
  • Proceedings of International Conference on Parallel Processing
  • 1996
Experimental and theoretical results show that scalability is an important, distinct metric for parallel and distributed systems, and may be as important as execution time in a scalable parallel and distributed environment.
Parallel k-means Clustering Algorithm on SMP
This paper aims to design and implement a parallel k-means clustering algorithm on shared-memory multiprocessors using the Parallel Java library, and presents analytical results for the parallel program's performance metrics.
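The shared-memory parallelization described here typically splits the point-assignment step across workers. A hypothetical sketch of one such iteration (this is not the paper's Parallel Java implementation; function names and the threading scheme are assumptions):

```python
# Hypothetical sketch of one k-means iteration on a shared-memory
# machine: workers assign disjoint chunks of points to their nearest
# centroid in parallel; the centroid update is a serial reduction,
# kept simple for clarity.
from concurrent.futures import ThreadPoolExecutor

def nearest(point, centroids):
    """Index of the centroid closest to `point` (squared distance)."""
    return min(range(len(centroids)),
               key=lambda j: sum((a - b) ** 2
                                 for a, b in zip(point, centroids[j])))

def kmeans_step(points, centroids, workers=4):
    """One assign + update step; returns (labels, new_centroids)."""
    size = (len(points) + workers - 1) // workers
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(
            lambda chunk: [nearest(p, centroids) for p in chunk], chunks)
        labels = [label for part in parts for label in part]
    # Serial reduction: recompute each centroid from its members.
    k, dim = len(centroids), len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, label in zip(points, labels):
        counts[label] += 1
        for d in range(dim):
            sums[label][d] += p[d]
    new_centroids = [
        [s / counts[j] for s in sums[j]] if counts[j] else list(centroids[j])
        for j in range(k)]
    return labels, new_centroids
```

The assignment step dominates the work and parallelizes cleanly over chunks; the reduction could also be parallelized per-worker with a final merge.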
ZRAM: a library of parallel search algorithms and its use in enumeration and combinatorial optimization
The work on ZRAM has clarified what properties the authors require of a parallel search library and demonstrates that a four-layered structure (applications, search engines, common services, host systems) is a suitable architecture.
Profiling of SCOOP Programs (Master's thesis)
A SCOOP profiler is developed to help users understand why a particular SCOOP program is running slowly, discover the bottlenecks, and modify the code to improve the run time.
Designing Reliable Communication for Heterogeneous Computer Systems
This study describes a network design solution to the problem of connecting heterogeneous computer systems, based on an analysis of multipartite hypergraphs. To do this, it proposes a mathematical model of…
Average Bandwidth Relevance in Parallel Solving Systems of Linear Equations
This paper presents some experimental results obtained on an IBM Blue Gene/P parallel computer that show the relevance of the average bandwidth reduction [11] in the serial and parallel cases of Gaussian…
A Parallel Heuristic for Bandwidth Reduction Based on Matrix Geometry
Experimental results obtained on an IBM Blue Gene/P supercomputer illustrate that the proposed parallel heuristic leads to better results with respect to time efficiency, speedup, efficiency, and quality of solution, in comparison with serial variants and with other reported results.
CONTROLLING THE PARALLEL EXECUTION OF WORKFLOWS RELYING ON A DISTRIBUTED DATABASE
Abstract of a dissertation presented to COPPE/UFRJ in partial fulfillment of the requirements for the degree of Master of Science (M.Sc.)…

References

Showing 1–10 of 53 references
Toward a better parallel performance metric
Theoretical and experimental results show that the most commonly used performance metric, parallel speedup, is 'unfair', in that it favors slow processors and poorly coded programs.
Analyzing Scalability of Parallel Algorithms and Architectures
The objectives of this paper are to critically assess the state of the art in the theory of scalability analysis, and to motivate further research on the development of new and more comprehensive analytical tools to study the scalability of parallel algorithms and architectures.
Accurate Predictions of Parallel Program Execution Time
This work introduces a methodology for applying a simple performance model based on Amdahl's law, which accurately quantifies the scalability of a specific algorithm when it is run on a specific parallel computer.
Modeling the Serial and Parallel Fractions of a Parallel Algorithm
A general model of parallel performance is introduced that provides a more complete characterization of parallel algorithm behavior and is used to correct apparent deficiencies in the formulation of speedup as expressed by Amdahl's model.
Shared virtual memory and generalized speedup
  • Xian-he Sun, J. Zhu
  • Computer Science
  • Proceedings of 8th International Parallel Processing Symposium
  • 1994
Experimental and theoretical results show that the generalized speedup is distinct from the traditional speedup and provides a more reasonable measurement.
Scalability of Parallel Algorithm-Machine Combinations
Theoretical results show that a large class of algorithm-machine combinations is scalable, that the scalability can be predicted from premeasured machine parameters, and that a harmony between speedup and scalability is observed.
The APRAM: incorporating asynchrony into the PRAM model
The PRAM model provides an abstraction that strips away problems of synchronization, reliability and communication delays, thereby permitting algorithm designers to focus first and foremost on the structure of the computational problem at hand, rather than the architecture of a currently available machine.
Models and resource metrics for parallel and distributed computation
  • Zhiyong Li, P. Mills, J. Reif
  • Computer Science
  • Proceedings of the Twenty-Eighth Annual Hawaii International Conference on System Sciences
  • 1995
A new parallel computation model is presented, the LogP-HMM model, which extends an existing parameterized network model with a sequential hierarchical memory model (HMM) characterizing each processor, and accurately captures both network communication costs and the effects of multilevel memory, such as local cache and I/O.
Scalable Problems and Memory-Bounded Speedup
The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases and leads to a better understanding of parallel processing.
Another view on parallel speedup
Three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup; a metric for performance evaluation is proposed.