• Corpus ID: 12927777

A "Measure of Transaction Processing" 20 Years Later

@article{Gray2005AO,
  title={A "Measure of Transaction Processing" 20 Years Later},
  author={Jim Gray},
  journal={IEEE Data Eng. Bull.},
  year={2005},
  volume={28},
  pages={3-4}
}
  • J. Gray
  • Published 1 June 2005
  • Computer Science
  • IEEE Data Eng. Bull.
This article quantifies the price-performance improvements on two standard commercial benchmarks (DebitCredit and Sort) from 1985 to 2005. It shows that improvement has exceeded Moore’s law – largely due to (1) hardware improvements, (2) software improvements, (3) massive parallelism, and (4) changing from mainframe to commodity economics. Price-performance continues to improve faster than Moore’s law but per-processor and peak performance are improving more slowly. The sorting results in… 
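
For scale, a back-of-the-envelope comparison of the two growth rates (a minimal sketch in Python; the 18-month doubling period is one common reading of Moore's law, and the 1985/2005 cost-per-tps figures are purely illustrative, not numbers from the article):

# Moore's-law growth vs. an illustrative price-performance improvement
# over the 20-year span the article covers.
years = 2005 - 1985
moore_factor = 2 ** (years / 1.5)        # ~10,300x at one doubling per 18 months

# Hypothetical cost-per-tps figures (illustrative only, not from the paper):
cost_per_tps_1985 = 40_000.0             # dollars per transaction-per-second
cost_per_tps_2005 = 1.0
observed_factor = cost_per_tps_1985 / cost_per_tps_2005

print(f"Moore's law over {years} years: ~{moore_factor:,.0f}x")
print(f"Illustrative price-performance gain: {observed_factor:,.0f}x")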

Citations

Concurrency Control for Main Memory Databases
A database system optimized for in-memory storage can support much higher transaction rates than current systems. However, standard concurrency control methods used today do not scale to the high…
psort 2011 – pennysort, datamation, joulesort*
This memo reports the results of our psort (general purpose) sorting software on a number of hardware configurations. “Vanilla” psort sorted 10GB for 2122 joules on a Nokia N900 smartphone with an…
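
To make the energy figure concrete, the implied efficiency is straightforward arithmetic (a minimal sketch; the 100-byte record size is the standard sort-benchmark convention and is assumed here):

# Efficiency implied by the psort result quoted above:
# 10 GB sorted for 2,122 joules on a Nokia N900.
data_bytes = 10 * 10**9
energy_joules = 2122
record_bytes = 100                        # assumed standard benchmark record size

records = data_bytes // record_bytes      # 100 million records
print(f"~{records / energy_joules:,.0f} records sorted per joule")   # ~47,000
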
Design Trade-offs for a Robust Dynamic Hybrid Hash Join (Extended Version)
TLDR
An experimental and analytical study of the trade-offs in designing a robust, dynamic HHJ operator; it revisits the design and optimization techniques suggested by previous studies through extensive experiments and evaluates different partition-insertion techniques to maximize memory utilization at the least CPU cost.
Adaptive query processing: dealing with incomplete and uncertain statistics
TLDR
Several Adaptive Query Processing (AQP) techniques are proposed as alternatives or extensions to the non-adaptive architecture employed by today's commercial database systems to correct or avoid query processing problems due to the use of incorrect and partial information at optimization time.
Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more
TLDR
Every form of relational database application, such as Online Transaction Processing (OLTP), Enterprise Resource Planning (ERP), Data Mining (DM), or Material Requirements Planning (MRP), can be improved using the methods provided in the book.
Introduction: Two Views of Database Research
Abstraction is a key principle in every area of computer science – shielding people or programs that make use of a particular software artifact from knowing details that are “internal”. In the…
psort, Yet Another Fast Stable Sorting Software
TLDR
psort's internals are detailed, and the careful fitting of its architecture to the structure of modern PC-class platforms allows it to outperform state-of-the-art sorting software such as GNU sort or STXXL.
Improving the process of analysis and comparison of results in dependability benchmarks for computer systems
TLDR
Inspired by procedures from the field of operational research, this methodology provides evaluators with the means to make their process of analysis not only explicit to anyone, but also more representative of the context being considered.
Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study
TLDR
The proposed approach is limited to dependability benchmarks in this document, but given the general formulation of the provided solution, its usefulness for any type of benchmark seems evident.
Rethinking Benchmarking for Data
TLDR
This research presents a meta-modelling architecture that automates the labor-intensive, time-consuming, and expensive process of manually cataloging and benchmarking data to identify the most promising candidates for inclusion in the next generation of smart grids.

References

A measure of transaction processing power
TLDR
These benchmarks measure the performance of diverse transaction processing systems, and a standard system cost measure is stated and used to define price/performance metrics.
2005 Performance / Price Sort and PennySort
TLDR
This paper recounts the experience with the Postman's Sort, a commercial program, in the PennySort/Daytona competition, which aims to test the maximum cost efficiency of sort machines.
Performance / Price Sort and PennySort
TLDR
This paper documents this and proposes that the PennySort benchmark be revised to Performance/Price sort: a simple GB/$ sort metric based on a two-pass external sort.
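
As a rough illustration of how such a GB/$ metric works out (a minimal sketch; the machine price and sort rate below are hypothetical, and the three-year depreciation period follows the PennySort convention):

# Hypothetical GB/$ computation in the spirit of the proposed metric:
# gigabytes sorted per dollar of depreciated system cost.
system_price = 1000.0                        # hypothetical machine price, dollars
depreciation_seconds = 3 * 365 * 24 * 3600   # 3-year depreciation period
sort_rate_gb_per_sec = 0.5                   # hypothetical two-pass sort rate

dollars_per_second = system_price / depreciation_seconds
print(f"~{sort_rate_gb_per_sec / dollars_per_second:,.0f} GB per dollar")
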
Sorting on a Cluster Attached to a Storage-Area Network
In November 2004, the SAN Cluster Sort program (SCS) set new records for the Indy versions of the Minute and TeraByte Sorts. SCS ran on a cluster of 40 dual-processor Itanium2 nodes on the show floor…
Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive
A $2k computer can execute about 8k transactions per second. This is 80x the 1970s traffic of one of the largest US banks – it approximates the total US financial transaction volume of the 1970s. Very…
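
Taken at face value, those figures imply a price/performance of a quarter-dollar per transaction-per-second (simple arithmetic on the numbers quoted above):

# Price/performance implied by the figures quoted above.
price_dollars = 2000                      # the "$2k computer"
tps = 8000                                # "about 8k transactions per second"
print(f"${price_dollars / tps:.2f} per tps")   # $0.25/tps
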
A Measure of Transaction Processing Power
  • Datamation, 1 April 1985; also at: http://research.microsoft.com/~gray/papers/AMeasureOfTransactionProcessingPower.doc