What is scalability?

@article{Hill1990WhatIS,
  title={What is scalability?},
  author={Mark D. Hill},
  journal={SIGARCH Comput. Archit. News},
  year={1990},
  volume={18},
  pages={18-21}
}
  • M. Hill
  • Published 2 December 1990
  • Computer Science
  • SIGARCH Comput. Archit. News
Scalability is a frequently-claimed attribute of multiprocessor systems. While the basic notion is intuitive, scalability has no generally-accepted definition. For this reason, current use of the term adds more to marketing potential than technical insight. In this paper, I first examine formal definitions of scalability, but I fail to find a useful, rigorous definition of it. I then question whether scalability is useful and conclude by challenging the technical community to either (1…
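
One family of formal definitions examined in the paper is based on speedup and efficiency. A minimal LaTeX sketch of that form is given below; the symbols T(n), S(n), and E(n) and the threshold-style condition are assumptions about how such definitions are usually phrased, not Hill's own notation.

% Speedup- and efficiency-based notion of scalability (a sketch; notation assumed).
% T(n): execution time of a fixed workload on n processors.
\[
  S(n) = \frac{T(1)}{T(n)}, \qquad E(n) = \frac{S(n)}{n}.
\]
% A system is then called scalable if efficiency stays bounded away from zero as
% processors are added, i.e. there is a constant c > 0 with
\[
  E(n) \ge c \quad \text{for all } n \ge 1.
\]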

A framework for characterization and analysis of software system scalability

A framework for precisely characterizing and analyzing the scalability of a software system is presented, which treats scalability as a multi-criteria optimization problem and captures the dependency relationships that underlie typical notions of scalability.

A framework for the characterization and analysis of software systems scalability

This thesis provides a definition of scalability and describes a systematic framework for the characterization and analysis of software systems scalability that is validated against a real-world data analysis system and used to recast a number of examples taken from the computing literature and from industry in order to demonstrate its use across different application domains and system designs.
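
A minimal Python sketch of the framework's core idea as summarized above: vary an independent scaling variable, measure several dependent quality metrics, and check each against a stated bound. The function names, metrics, and thresholds are illustrative assumptions, not the framework's own notation.

# Illustrative sketch (assumed names/metrics): treat scalability analysis as checking
# several quality criteria while an independent scaling variable (e.g. input size) grows.

def scalability_analysis(run_system, scaling_values, criteria):
    """run_system(x) -> dict of measured qualities at scaling value x.
    criteria: {metric_name: predicate on the measured value}."""
    results = {}
    for x in scaling_values:
        measured = run_system(x)
        results[x] = {name: ok(measured[name]) for name, ok in criteria.items()}
    return results

# Hypothetical usage: response time and memory must stay within bounds as load grows.
if __name__ == "__main__":
    def fake_system(x):              # stand-in for a real measurement harness
        return {"response_ms": 2.0 * x, "memory_mb": 50 + 0.1 * x}

    report = scalability_analysis(
        fake_system,
        scaling_values=[10, 100, 1000, 10000],
        criteria={"response_ms": lambda v: v <= 5000,
                  "memory_mb": lambda v: v <= 1024},
    )
    for x, checks in report.items():
        print(x, checks)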

A framework for modelling and analysis of software systems scalability

This research investigates commonly found definitions of scalability and attempts to capture its essence in a systematic framework to restore the usefulness of the term.

Robust scalability analysis and SPM case studies

A generic definition of scalability is introduced and evaluated for programs developed under the super-programming model (SPM) for PC clusters, a rather difficult subject due to their long communication latencies.

Extending the scalable coherent interface for large-scale shared-memory multiprocessors

This dissertation investigates ways to efficiently share frequently changing data among thousands of processors using Scalable Coherent Interface (SCI), and investigates two new cache-coherence protocols that employ trees of cache lines and have similar or lower latency than SCI.

Scalable, parallel computers: Alternatives, issues, and challenges

  • G. Bell
  • Computer Science
    International Journal of Parallel Programming
  • 2007
A taxonomy and evolutionary time line outline the next decade of computer evolution, including distributed workstations, based on scalability and parallelism, and conclude that workstations can be the most scalable alternative.

The Searching Scalability of Peer-to-Peer System

The hybrid peer-to-peer model is proposed as an improvement to searching so that the cost of each peer's query is independent of the total number of peers, making the system expandable without additional incoming or outgoing messages.
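
One common way to make per-query cost independent of the total peer count is a super-peer design in which a small, fixed layer of index nodes answers queries on behalf of their leaf peers. The Python sketch below assumes that design for illustration; it is not taken from the paper.

# Hedged sketch of a hybrid (super-peer) lookup: each query touches only the querying
# peer's super-peer and that super-peer's neighbours, independent of the total number
# of leaf peers. Names and structure are illustrative.

class SuperPeer:
    def __init__(self):
        self.index = {}              # key -> set of leaf peer ids holding it
        self.neighbours = []         # other super-peers (bounded, fixed-size set)

    def publish(self, leaf_id, keys):
        for k in keys:
            self.index.setdefault(k, set()).add(leaf_id)

    def query(self, key):
        hits = set(self.index.get(key, ()))
        for sp in self.neighbours:
            hits |= sp.index.get(key, set())
        return hits

# Usage: two super-peers, many leaves; a query inspects 2 indexes, not 1000 peers.
a, b = SuperPeer(), SuperPeer()
a.neighbours, b.neighbours = [b], [a]
for leaf in range(1000):
    a.publish(f"leaf-{leaf}", [f"file-{leaf % 10}"])
print(len(a.query("file-3")))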

Token Coherence: decoupling performance and correctness

TokenB is a specific Token Coherence performance protocol that allows a glueless multiprocessor to exploit a low-latency unordered interconnect while avoiding indirection, and it can significantly outperform traditional snooping and directory protocols.
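
The token-counting rule behind Token Coherence is easy to state on its own; the Python sketch below checks it for a single cache block. This shows only the safety invariant (at least one token to read, all tokens to write), not the TokenB performance protocol itself, and the token count is an invented constant.

# Token-counting safety rule for one block (a sketch): a cache may read with >= 1 token
# and may write only when it holds all tokens of the block.

TOTAL_TOKENS = 4            # illustrative; real systems size this to the machine

class CacheBlock:
    def __init__(self, tokens=0):
        self.tokens = tokens

    def can_read(self):
        return self.tokens >= 1

    def can_write(self):
        return self.tokens == TOTAL_TOKENS

    def receive_tokens(self, n):
        assert self.tokens + n <= TOTAL_TOKENS, "tokens are conserved, never created"
        self.tokens += n

block = CacheBlock()
block.receive_tokens(1)
print(block.can_read(), block.can_write())   # True False
block.receive_tokens(TOTAL_TOKENS - 1)
print(block.can_read(), block.can_write())   # True True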

Experimental evaluation of horizontal and vertical scalability of cluster-based application servers for transactional workloads

This evaluation work compares the scalability and other related performance metrics when an application server cluster is scaled horizontally, adding new servers, and when it is scaled vertically, adding cores into the servers.

Toward the design of large-scale shared-memory multiprocessors

This thesis addresses the scalability of shared-memory multiprocessors by presenting a practical treatment of scalability, and proceeding to focus on aspects of two critical areas of large-scale system design: interconnection networks and cache coherence mechanisms.
...

References


Speedup Versus Efficiency in Parallel Systems

The tradeoff between speedup and efficiency that is inherent to a software system is investigated and it is shown that for any software system and any number of processors, the sum of the average processor utilization and the attained fraction of the maximum possible speedup must exceed one.
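
Read literally, that tradeoff is a one-line inequality. The LaTeX sketch below writes it out under the assumption that "average processor utilization" is the efficiency E(n) = S(n)/n and that the "maximum possible speedup" is the average parallelism A; this is a reading of the summary, not a quotation of the paper's theorem.

% Assumed notation: S(n) = speedup on n processors, E(n) = S(n)/n = efficiency,
% A = average parallelism (taken here as the maximum possible speedup).
\[
  E(n) + \frac{S(n)}{A} \;\ge\; 1.
\]
% Illustrative numbers: A = 10 and S(8) = 6 give E(8) = 0.75, and
% 0.75 + 6/10 = 1.35 >= 1, consistent with the bound.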

Synchronization algorithms for shared-memory multiprocessors

A performance evaluation of the Symmetry multiprocessor system revealed that the synchronization mechanism did not perform well for highly contested locks, like those found in certain parallel applications.

On parallel searching (Extended Abstract)

  • M. Snir
  • Computer Science
    PODC '82
  • 1982
The complexity of searching a table of n elements by comparisons on a synchronous, shared-memory parallel computer with p processors is investigated, and it is shown that it is possible to search in O(lg(n)/p) steps if more general operations are used.
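
To make the round structure concrete, here is a minimal Python simulation of comparison-based parallel search in which p processors probe p evenly spaced positions per round, shrinking the interval by roughly a factor of p+1 each round. The stronger O(lg(n)/p) bound quoted above relies on more general (non-comparison) operations and is not what this sketch achieves; it is not the paper's algorithm.

# Simulated p-processor comparison search over a sorted table: each round, p probes
# split the remaining interval into p+1 pieces, so about log_{p+1}(n) rounds suffice.

def parallel_search(table, target, p):
    lo, hi, rounds = 0, len(table), 0
    while lo < hi:
        rounds += 1
        # p evenly spaced probe positions, "performed in parallel" in one round
        probes = sorted({min(lo + (i + 1) * (hi - lo) // (p + 1), hi - 1)
                         for i in range(p)})
        for pos in probes:
            if table[pos] == target:
                return pos, rounds
        # keep the sub-interval the target could still be in
        new_lo, new_hi = lo, hi
        for pos in probes:
            if table[pos] < target:
                new_lo = pos + 1
            else:
                new_hi = min(new_hi, pos)
        lo, hi = new_lo, new_hi
    return -1, rounds

table = list(range(0, 2_000_000, 2))
print(parallel_search(table, 1_234_568, p=7))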

A Benchmark Parallel Sort for Shared Memory Multiprocessors

The first parallel sort algorithm for shared-memory MIMD (multiple-instruction-multiple-data-stream) multiprocessors that has a theoretical and measured speedup near linear is exhibited. It is based…

Measuring parallel processor performance

A new metric that has some advantages over the others is introduced that is illustrated with data from the Linpack benchmark report and the winners of the Gordon Bell Award.
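
The summary does not name the metric, but the title and examples suggest the experimentally determined serial fraction; assuming that is the metric meant, it is computed from the measured speedup alone, as in the LaTeX sketch below.

% Experimentally determined serial fraction (assuming this is the metric meant),
% where s is the measured speedup on p processors:
\[
  f \;=\; \frac{1/s \;-\; 1/p}{1 \;-\; 1/p}.
\]
% Illustrative numbers: s = 6 on p = 8 gives f = (1/6 - 1/8)/(1 - 1/8) ≈ 0.048,
% i.e. roughly 5% of the work behaves serially; f growing with p signals poor scaling.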

Relations between concurrent-write models of parallel computation

By fixing the number of processors and parametrizing the number of shared memory cells, tight separation results between the models are obtained, thereby partially answering open questions of Vishkin [V].

Highly parallel computing

Part 1, Foundations: an overview of the scope of this book, definition and driving forces, questions raised, emerging answers, previous attempts and why success now, conclusions and future…

Parallelism in random access machines

A model of computation based on random access machines operating in parallel and sharing a common memory is presented, which can accept in polynomial time exactly the sets accepted by nondeterministic exponential-time-bounded Turing machines.

Benchmarks for LAN performance evaluation

A technique for quickly benchmarking the performance of local area networks (LANs) is presented. Programs which model both intermittent and constant network activity are given.
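
As a rough illustration of the two load shapes mentioned (constant versus intermittent activity), the Python sketch below generates both patterns against a hypothetical UDP sink; the address, packet size, rates, and durations are invented for illustration and are not the paper's benchmark programs.

# Hedged sketch: constant-rate vs. intermittent UDP traffic toward a test target.
import random
import socket
import time

TARGET = ("127.0.0.1", 9999)        # hypothetical test sink
PAYLOAD = b"x" * 512

def constant_load(sock, seconds=5, pkts_per_sec=200):
    interval = 1.0 / pkts_per_sec
    end = time.time() + seconds
    while time.time() < end:
        sock.sendto(PAYLOAD, TARGET)
        time.sleep(interval)

def intermittent_load(sock, seconds=5, burst=50):
    end = time.time() + seconds
    while time.time() < end:
        for _ in range(burst):               # short burst of back-to-back packets
            sock.sendto(PAYLOAD, TARGET)
        time.sleep(random.uniform(0.1, 1.0)) # idle gap between bursts

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    constant_load(s, seconds=1)
    intermittent_load(s, seconds=1)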

The NYU Ultracomputer—Designing an MIMD Shared Memory Parallel Computer

We present the design for the NYU Ultracomputer, a shared-memory MIMD parallel machine composed of thousands of autonomous processing elements. This machine uses an enhanced message switching network…