• Corpus ID: 12927777

A "Measure of Transaction Processing" 20 Years Later

@article{Gray2005AO,
  title={A "Measure of Transaction Processing" 20 Years Later},
  author={Jim Gray},
  journal={IEEE Data Eng. Bull.},
  year={2005},
  volume={28},
  pages={3-4}
}
  • J. Gray
  • Published 1 June 2005
  • Computer Science
  • IEEE Data Eng. Bull.
This article quantifies the price-performance improvements on two standard commercial benchmarks (DebitCredit and Sort) from 1985 to 2005. It shows that improvement has exceeded Moore’s law – largely due to (1) hardware improvements, (2) software improvements, (3) massive parallelism, and (4) changing from mainframe to commodity economics. Price-performance continues to improve faster than Moore’s law but per-processor and peak performance are improving more slowly. The sorting results in… 
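
As a rough sense of the scale behind the abstract's Moore's-law comparison, the minimal sketch below computes what a constant doubling rate alone would predict over the 1985–2005 span; the 18-month doubling period is an assumed illustrative value, not a figure taken from the article.

    # Back-of-envelope check: how much improvement would Moore's law alone
    # predict over the article's 1985-2005 span?  The 18-month doubling
    # period is an assumed illustrative value, not a number from the paper.
    years = 2005 - 1985                  # span covered by the article
    doubling_period_years = 1.5          # assumed Moore's-law doubling period
    moores_law_factor = 2 ** (years / doubling_period_years)
    print(f"Moore's-law factor over {years} years: ~{moores_law_factor:,.0f}x")
    # Roughly 10,000x; the article's claim is that benchmark
    # price-performance improved even faster than this.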

Citations

Concurrency Control for Main Memory Databases
TLDR
This chapter investigates which high-performance concurrency control mechanisms are appropriate for memory-resident OLTP workloads and finds that traditional single-version locking is "fragile".
psort 2011 – pennysort, datamation, joulesort
TLDR
This memo reports the results of the psort (general-purpose) sorting software on a number of hardware configurations, including a hand-tailored and a cluster version of psort.
Design Trade-offs for a Robust Dynamic Hybrid Hash Join (Extended Version)
TLDR
An experimental and analytical study of the trade-offs in designing a robust and dynamic HHJ operator, revisiting the design and optimization techniques suggested by previous studies through extensive experiments and evaluating different partition insertion techniques to maximize memory utilization with the least CPU cost.
Adaptive query processing: dealing with incomplete and uncertain statistics
TLDR
Several Adaptive Query Processing (AQP) techniques are proposed as alternatives or extensions to the non-adaptive architecture employed by today's commercial database systems to correct or avoid query processing problems due to the use of incorrect and partial information at optimization time.
Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more
TLDR
Every form of relational database, such as Online Transaction Processing (OLTP), Enterprise Resource Planning (ERP), Data Mining (DM), or Manufacturing Resource Planning (MRP), can be improved using the methods provided in the book.
Introduction : Two Views of Database Research
TLDR
While standard programming languages provide a rich variety of data structures that can be defined by a user, relational languages require the user to describe data in terms of a very simple table data structure: a collection of attributes, each having values in some scalar datatype.
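
As a purely illustrative aside on the "simple table data structure" described above, the sketch below expresses a relation as a collection of attributes, each with a scalar datatype; the Account relation and its fields are invented for illustration, not taken from the paper.

    # Purely illustrative: a relation as a collection of attributes,
    # each holding values of a scalar datatype.
    from typing import NamedTuple

    class Account(NamedTuple):       # one relation / table schema
        account_id: int              # scalar attribute
        branch: str                  # scalar attribute
        balance: float               # scalar attribute

    # A relation instance is then just a collection of such tuples.
    accounts = [Account(1, "Main St", 250.0), Account(2, "Elm St", 90.5)]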
psort, Yet Another Fast Stable Sorting Software
TLDR
psort's internals are detailed, along with the careful fitting of its architecture to the structure of modern PC-class platforms, which allows it to outperform state-of-the-art sorting software such as GNUsort or STXXL.
Improving the process of analysis and comparison of results in dependability benchmarks for computer systems
TLDR
Inspired by procedures taken from the field of operational research, this methodology provides evaluators with the means to make their process of analysis not only explicit to anyone, but also more representative of the context being considered.
Rethinking Benchmarking for Data
TLDR
This research presents a meta-modelling architecture that automates the very labor-intensive and therefore time-consuming and expensive process of manually cataloging and benchmarking data to identify the most promising candidates for inclusion in the next generation of smart grids.

References

A measure of transaction processing power
TLDR
These benchmarks measure the performance of diverse transaction processing systems, and a standard system cost measure is stated and used to define price/performance metrics.
2005 Performance / Price Sort and PennySort
TLDR
This paper recounts the experience of the Postman's Sort, a commercial program, in the PennySort/Daytona competition, which aims to test the maximum cost efficiency of sort machines.
Performance / Price Sort and PennySort
TLDR
This paper documents this and proposes that the PennySort benchmark be revised to Performance/Price sort: a simple GB/$ sort metric based on a two-pass external sort.
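
A minimal sketch of how such a GB/$ figure could be computed, assuming a measured sort run and a system price amortized over time; the gb_per_dollar helper, the three-year amortization window, and the sample numbers are hypothetical assumptions for illustration, not values from the paper.

    # Hypothetical GB/$ calculation: gigabytes sorted per dollar of
    # time-amortized system cost.  The 3-year amortization window and the
    # example numbers are assumptions for illustration only.
    def gb_per_dollar(gb_sorted, elapsed_seconds, system_price_dollars,
                      amortization_years=3):
        seconds_in_window = amortization_years * 365 * 24 * 3600
        dollars_of_system_time = system_price_dollars * (elapsed_seconds / seconds_in_window)
        return gb_sorted / dollars_of_system_time

    # Example: a $1,000 machine sorting 10 GB in 60 seconds (made-up numbers).
    print(f"{gb_per_dollar(10, 60, 1000):,.0f} GB/$")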
Sorting on a Cluster Attached to a Storage-Area Network
In November 2004, the SAN Cluster Sort program (SCS) set new records for the Indy versions of the Minute and TeraByte Sorts. SCS ran on a cluster of 40 dual-processor Itanium2 nodes on the show floor
Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive
A $2k computer can execute about 8k transactions per second. This is 80x the 1970s traffic of one of the largest US banks – it approximates the total US 1970s financial transaction volume. Very…
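
The arithmetic implied by the two figures quoted above can be checked directly; the short sketch below uses only those numbers and introduces nothing beyond them.

    # Arithmetic implied by the abstract snippet: a ~$2k machine running
    # ~8k transactions per second, stated to be ~80x one large US bank's
    # 1970s load.
    price_dollars = 2_000
    tps = 8_000
    bank_ratio = 80
    print(f"Cost per tps: ${price_dollars / tps:.2f}")              # ~$0.25 per tps
    print(f"Implied 1970s bank load: ~{tps / bank_ratio:.0f} tps")  # ~100 tps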