Chaos in computer performance

@article{Berry2006ChaosIC,
  title={Chaos in computer performance},
  author={Hugues Berry and Daniel Gracia P{\'e}rez and Olivier Temam},
  journal={Chaos},
  year={2006},
  volume={16},
  number={1},
  pages={013110}
}
Modern computer microprocessors are composed of hundreds of millions of transistors that interact through intricate protocols. Their performance during program execution may be highly variable and exhibit aperiodic oscillations. In this paper, we apply current nonlinear time series analysis techniques to the performance of modern microprocessors during the execution of prototypical programs. Our results present evidence strongly supporting that the high variability of the performance…
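The first step of the nonlinear time series analysis the abstract refers to is typically delay-coordinate (Takens) embedding, which reconstructs a state space from a scalar measurement series. A minimal sketch, using the chaotic logistic map as a toy stand-in for a performance trace (the paper's actual traces are hardware measurements and are not reproduced here):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a state space from a scalar series via delay coordinates
    (Takens embedding): row t is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy stand-in for a performance trace: the chaotic logistic map.
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

emb = delay_embed(x, dim=3, tau=2)
print(emb.shape)  # (996, 3)
```

Quantities such as correlation dimension or Lyapunov exponents are then estimated on the embedded points rather than on the raw series.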


On the Importance of Nonlinear Modeling in Computer Performance Prediction

TLDR
This paper builds linear and nonlinear models of the processor load of an Intel i7-based computer as it executes a range of programs, uses those models to predict the processor load forward in time, and compares those forecasts to the true continuations of the time series.
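The kind of comparison this paper performs can be sketched on a toy chaotic signal: fit a linear least-squares predictor and a nonlinear nearest-neighbor predictor (Lorenz's "method of analogues") in a 2D delay space, then compare out-of-sample forecast errors. This uses the logistic map, not the paper's i7 load measurements, and is only an illustration of the methodology:

```python
import numpy as np

# Toy chaotic "processor load" series (logistic map).
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
train, test = x[:1500], x[1500:]

# Linear model: least-squares fit of x[t] on (1, x[t-1], x[t-2]).
X = np.column_stack([np.ones(len(train) - 2), train[1:-1], train[:-2]])
coef, *_ = np.linalg.lstsq(X, train[2:], rcond=None)

def lin_step(a, b):
    return coef[0] + coef[1] * a + coef[2] * b

# Nonlinear model: nearest neighbor in the 2D delay space of the
# training data ("method of analogues").
def nn_step(a, b):
    d = (train[1:-1] - a) ** 2 + (train[:-2] - b) ** 2
    return train[2:][np.argmin(d)]

err_lin = err_nl = 0.0
for t in range(2, len(test)):
    err_lin += (test[t] - lin_step(test[t - 1], test[t - 2])) ** 2
    err_nl += (test[t] - nn_step(test[t - 1], test[t - 2])) ** 2
print(err_nl < err_lin)  # True: the nonlinear forecaster wins on chaotic data
```

On deterministic chaos the linear model captures almost nothing (the logistic map has near-zero autocorrelation), while the analogue predictor exploits the underlying deterministic structure.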

Predictable programming on a precision timed architecture

TLDR
A SPARC-based processor with predictable timing and instruction-set extensions that provide precise timing control is described, and the effectiveness of this precision-timed (PRET) architecture is demonstrated through example applications running in simulation.

Overcoming the Intuition Wall: Measurement and Analysis in Computer Architecture

TLDR
This dissertation presents four case studies in quantitative methods, concluding that quantitative methods like the authors' are necessary for the study of side-channel information leaks, and presents a novel method of approximate graph clustering, which enables the mining of program graphs from large code bases.

Measurement and Dynamical Analysis of Computer Performance Data

TLDR
It is demonstrated that using nonlinear time series analysis techniques on computer performance data is sound, and the consequences of applying these techniques blindly, when the data do not validate their assumptions, are examined.

Reynolds’ dream?

For many non-linear phenomena, it is necessary to solve infinite systems of equations for correlation functions with a wide range of parameters. This paper can be seen as a first step in addressing…

Autonomy of the internet: Complexity of flow dynamics in a packet switching network

TLDR
Mutually crossing packet flows are reported to exhibit complexity comparable to that of other autonomous complex networks, such as real hippocampus slices, Izhikevich neural networks, or the Game of Life.

Evolution of 2-Dimensional Cellular Automata as Pseudo-random Number Generators

TLDR
A composite fitness metric, incorporating elements from PRNG tests, is introduced for use in the evolution of the CAs, targeting the newer and more demanding batteries of pseudo-random generator tests.
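For context, the classic example of a cellular automaton used as a PRNG is Wolfram's 1D rule 30, sampled at a single cell; the paper above evolves 2D CAs and scores them with a composite fitness metric, which is not reproduced here. A minimal rule 30 sketch:

```python
# Rule 30 on a cyclic 1D lattice: new cell = left XOR (center OR right).
# Sampling one cell's column over time yields a pseudo-random bit stream.
def rule30_bits(width=64, steps=256, seed_cell=32):
    state = [0] * width
    state[seed_cell] = 1                   # single-seed initial condition
    bits = []
    for _ in range(steps):
        bits.append(state[width // 2])     # sample the center cell
        state = [state[(i - 1) % width] ^ (state[i] | state[(i + 1) % width])
                 for i in range(width)]
    return bits

b = rule30_bits()
print(len(b), sum(b))  # 256 bits, roughly balanced between 0s and 1s
```

An evolved 2D CA generator would replace the fixed rule with an evolvable transition table and score candidate rules against statistical test batteries.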

Using Predictive Modeling for Cross-Program Design Space Exploration in Multicore Systems

TLDR
Using predictive modeling, a well-known machine learning technique, this paper builds models that, given only a minute fraction of the design space, accurately predict the behavior of the remaining designs orders of magnitude faster than simulating them.

A New Hyperchaotic Map for a Secure Communication Scheme with an Experimental Realization

TLDR
This paper introduces a new 2D chaotic map, the 2D infinite-collapse-Sine model (2D-ICSM), which has high sensitivity to initial values and parameters, extreme complexity performance, and a much larger hyperchaotic range than existing maps.

References

Showing 1–10 of 68 references

Detection of Chaos and Fractals from Experimental Time Series

TLDR
The theory of nonlinear dynamics, especially that of (low-dimensional) chaos and fractals, is changing the traditional strategy for studying the dynamics of the nervous system at levels ranging from the single neuron to the whole brain.

HAVEGE: A user-level software heuristic for generating empirically strong random numbers

TLDR
This article presents and analyzes HAVEGE (HArdware Volatile Entropy Gathering and Expansion), a new user-level software heuristic for generating practically strong random numbers on general-purpose computers, and shows how this entropy-gathering technique can be combined with pseudorandom number generation.
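The core idea behind HAVEGE-style entropy gathering is that the duration of even a trivial operation jitters with the processor's volatile internal state (caches, branch predictors, scheduling). A heavily simplified sketch of that idea, not the real HAVEGE algorithm (which deliberately exercises branch-heavy code and expands the gathered entropy):

```python
import time

def timing_entropy_bits(n=256):
    """Collect bits from jitter in a high-resolution timer.

    Simplified illustration of hardware-volatile entropy gathering; the
    raw bits are biased and would need post-processing in practice.
    """
    bits = []
    while len(bits) < n:
        t0 = time.perf_counter_ns()
        for _ in range(100):                # busy work whose duration jitters
            pass
        bits.append((time.perf_counter_ns() - t0) & 1)  # keep the lowest bit
    return bits

b = timing_entropy_bits()
print(len(b))  # 256
```

Real designs whiten such raw timing bits (e.g. by hashing or feeding a PRNG) because the low-order timer bits alone are neither uniform nor independent.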

Recent extensions to the SimpleScalar tool suite

TLDR
The extensions and improvements to the SimpleScalar tool suite are described, including the capability to simulate more instruction sets, graphical support for performance viewing, and more simulators that model different types of machines.

The Fuzzy Correlation between Code and Performance Predictability

TLDR
The results show that for most server workloads and, surprisingly, even for CPU2K benchmarks, the accuracy of predicting CPI from EIPs varies widely, and a new methodology is proposed that selects the best-suited sampling technique to accurately capture the program behavior.

Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series.

TLDR
A new method, detrended fluctuation analysis (DFA), for quantifying long-range correlation properties in non-stationary physiological time series is described; application of this technique shows evidence for a crossover phenomenon associated with a change in short- and long-range scaling exponents.
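DFA itself is a short algorithm: integrate the mean-removed series, split the profile into boxes of size n, detrend each box with a linear fit, and measure the RMS fluctuation F(n); the slope of log F(n) versus log n is the scaling exponent. A compact sketch, verified on white noise (where the exponent should be near 0.5):

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: F(n) for each box size n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)        # non-overlapping boxes
        t = np.arange(n)
        f2 = 0.0
        for s in segs:
            a, b = np.polyfit(t, s, 1)         # local linear trend
            f2 += np.mean((s - (a * t + b)) ** 2)
        F.append(np.sqrt(f2 / m))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                  # uncorrelated (white) noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(alpha)  # close to 0.5 for white noise
```

A crossover, as reported for heartbeat data, shows up as two different slopes when the log-log fit is done separately over small and large scales.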

Computability with Low-Dimensional Dynamical Systems
