The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes

@article{Gross1984TheRT,
  title={The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes},
  author={Donald Gross and Douglas R. Miller},
  journal={Oper. Res.},
  year={1984},
  volume={32},
  pages={343-361}
}
We present a randomization procedure for computing transient solutions, that is, transient state probabilities, of discrete-state-space, continuous-time Markov processes. It is based on a construction relating a continuous-time Markov process to a discrete-time Markov chain. Modifications and extensions of the randomization method allow for computation of distributions of first passage times and sojourn times in Markov processes, and also the computation of expected cumulative…
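
The construction the abstract describes is simple to sketch: choose a rate Λ no smaller than every exit rate of the chain, form the discrete-time transition matrix P = I + Q/Λ, and mix the powers of P with Poisson(Λt) weights. The snippet below is a minimal illustration of that idea, not the authors' implementation; it assumes NumPy/SciPy, and the function name `transient_probs` is hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def transient_probs(Q, p0, t, eps=1e-10):
    """Transient state probabilities of a CTMC at time t via randomization.

    Q   : generator matrix (rows sum to zero)
    p0  : initial probability vector
    eps : tolerance used to truncate the Poisson series
    """
    lam = max(-np.diag(Q))                    # uniformization rate, Lambda >= max_i |q_ii|
    P = np.eye(len(Q)) + Q / lam              # transition matrix of the randomized DTMC
    K = int(poisson.ppf(1.0 - eps, lam * t))  # truncation point of the series
    w = poisson.pmf(np.arange(K + 1), lam * t)
    pk, out = p0.copy(), w[0] * p0
    for k in range(1, K + 1):
        pk = pk @ P                           # distribution after k jumps of the DTMC
        out += w[k] * pk
    return out

# Example: two-state machine with failure rate 2 and repair rate 1, starting up.
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
print(transient_probs(Q, np.array([1.0, 0.0]), t=0.5))
```

Because each term in the series is a probability vector scaled by a Poisson weight, truncating at K leaves out total mass of at most eps, which bounds the 1-norm error of the returned vector.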

A new approach combining simulation and randomization for the analysis of large continuous time Markov chains

The new analysis method combines simulation and numerical techniques for the analysis of large Markov chains to avoid the state explosion problem of numerical analysis, making it possible to obtain more accurate results than with pure simulation.

Randomization Procedures in the Computation of Cumulative-Time Distributions over Discrete State Markov Processes

A queueing application of the methodology to delay times in queueing networks is outlined, and its efficacy is appraised by comparing the results against a sojourn-time problem with a known distribution.

An adaptive importance sampling approach for the transient analysis of Markovian queueing networks

A new method is presented for the efficient estimation of rare events and small probabilities in Markovian queueing networks; importance sampling is used to modify the probability distribution of the events to be observed, and the change of measure is computed adaptively.
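
The cited approach adapts its change of measure during the run; the sketch below only illustrates the underlying importance-sampling idea with a fixed change of measure (the classical swap of arrival and service rates for an M/M/1 overflow probability). Function name, parameters, and the target event are chosen for illustration, not taken from the paper.

```python
import random

def overflow_prob_is(lam, mu, N, runs=100_000, seed=1):
    """Estimate P(queue reaches level N before emptying | one customer present)
    for an M/M/1 queue.  Paths are simulated under the tilted measure with the
    arrival and service rates swapped; each run is corrected by its likelihood
    ratio, so rare overflow paths are visited often but weighted down."""
    rng = random.Random(seed)
    lam_t, mu_t = mu, lam                    # tilted (swapped) rates
    total = 0.0
    for _ in range(runs):
        n, weight = 1, 1.0
        while 0 < n < N:
            p_up = lam_t / (lam_t + mu_t)    # embedded-chain step under the tilted measure
            if rng.random() < p_up:
                weight *= (lam / (lam + mu)) / p_up
                n += 1
            else:
                weight *= (mu / (lam + mu)) / (1.0 - p_up)
                n -= 1
        if n == N:                           # count only overflow paths, IS-weighted
            total += weight
    return total / runs

print(overflow_prob_is(lam=0.3, mu=1.0, N=20))
```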

Empirical Comparison of Uniformization Methods for Continuous-Time Markov Chains

Computation of transient state occupancy probabilities of continuous-time Markov chains is important for evaluating many performance, dependability, and performability models. A number of numerical…

Sliding Window Abstraction for Infinite Markov Chains

An on-the-fly abstraction technique is presented for infinite-state continuous-time Markov chains that are specified by a finite set of transition classes; it approximates the transient probability distributions at various time instants by solving a sequence of dynamically constructed abstract models.

A hybrid analysis approach for finite-capacity queues with general inputs and phase type service

A new analysis method for queueing systems with general input streams and phase-type service-time distributions is introduced. The approach combines discrete event simulation and numerical analysis of…

Approximate analysis of non-Markovian stochastic systems with multiple time scale delays

  • S. Haddad, P. Moreaux
  • Computer Science, Mathematics
    The IEEE Computer Society's 12th Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems, 2004. (MASCOTS 2004). Proceedings.
  • 2004
This work develops an exact analysis of an approximate model of stochastic discrete event systems that include concurrent activities with finite-support distributions on multiple time scales, and shows that some useful classes of non-ergodic systems can be analyzed exactly with this method.

A distributed numerical/simulative algorithm for the analysis of large continuous time Markov chains

  • P. Buchholz
  • Computer Science
    Proceedings 11th Workshop on Parallel and Distributed Simulation
  • 1997
A distributed algorithm is introduced for the analysis of large continuous time Markov chains (CTMCs) that combines numerical solution techniques and simulation; it exploits the possibility of precomputing event times, as already proposed for the parallel simulation of CTMCs.

A Software Package Tool for Markovian Computing Models with Many States: Principles and its Applications

A software package tool is implemented using the randomization technique, together with a new idea for identifying in advance when the transient solution converges to the steady-state solution.
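
One way to read "identifying when the transient solution converges to the steady-state solution in advance" is to monitor the iterates of the randomized discrete-time chain inside the uniformization sum: once the iterate stops changing, every remaining Poisson-weighted term contributes the same vector, so the rest of the series can be lumped together. The variant below is a hedged sketch of that test layered on the earlier uniformization snippet; it is not the cited tool's algorithm, and the tolerance and names are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def transient_probs_ssd(Q, p0, t, eps=1e-10, tol=1e-12):
    """Randomization with a simple steady-state detection test: stop multiplying
    by P once the DTMC iterate stops changing and assign all remaining Poisson
    mass to that (steady-state) vector."""
    lam = max(-np.diag(Q))
    P = np.eye(len(Q)) + Q / lam
    K = int(poisson.ppf(1.0 - eps, lam * t))
    w = poisson.pmf(np.arange(K + 1), lam * t)
    pk, out = p0.copy(), w[0] * p0
    for k in range(1, K + 1):
        pk_new = pk @ P
        if np.linalg.norm(pk_new - pk, 1) < tol:        # iterate has converged
            return out + (1.0 - w[:k].sum()) * pk_new   # lump the remaining Poisson mass
        pk = pk_new
        out += w[k] * pk
    return out
```
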
...

References

SHOWING 1-10 OF 34 REFERENCES

Randomization Procedures in the Computation of Cumulative-Time Distributions over Discrete State Markov Processes

A queueing application of the methodology to delay times in queueing networks is outlined, and its efficacy is appraised by comparing the results against a sojourn-time problem with a known distribution.

Reliability calculation using randomization for Markovian fault-tolerant computing systems

The randomization technique for computing transient probabilities of Markov processes is presented, and an accelerated version of the randomization algorithm is developed which exploits "stiffness" of the models to gain increased efficiency.

An Equivalence between Continuous and Discrete Time Markov Decision Processes.

THE EQUIVALENCE we shall discuss for Markov decision processes is based on the following well-known equivalence for Markov processes. Let Y = {Y(t): t ≥ 0} be a continuous-time Markov process with a…
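
The well-known equivalence the excerpt alludes to can be stated compactly: if Y has generator Q and Λ ≥ max_i |q_ii|, then Y has the same law as the discrete-time chain with transition matrix P = I + Q/Λ observed at the jump times of an independent Poisson process of rate Λ. The display below is a restatement of that standard identity in LaTeX, not text taken from the cited paper.

```latex
% Uniformization identity linking the CTMC (generator Q) to the DTMC (matrix P)
P = I + \frac{Q}{\Lambda}, \qquad \Lambda \ge \max_i |q_{ii}|,
\qquad
\Pi(t) = e^{Qt} = \sum_{k=0}^{\infty} e^{-\Lambda t}\,\frac{(\Lambda t)^{k}}{k!}\,P^{k}.
```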

Transient solutions in Markovian Queueing Systems

Comparing Semi-Markov Processes

The construction is accomplished for semi-Markov processes for which all subprobability transition rates are absolutely continuous, with failure rates uniformly bounded over finite intervals, by representing the two semi-Markov processes as compositions of discrete-time stochastic processes with a sequence of Poisson processes.

The First Passage Time Distribution for a Parallel Exponential System with Repair.

Abstract: In system reliability studies, one obtains, via a fault tree analysis, the various combinations of possible events that lead to system failure. These events can be characterized by a…

Applying a New Device in the Optimization of Exponential Queuing Systems

A new definition of the time of transition is provided, which makes it possible to utilize the inductive approach in a manner characteristic of inventory theory; a policy optimal for all sufficiently small discount factors can then be obtained from the usual average-cost functional equation without recourse to further computation.

Markov Chain Models--Rarity And Exponentiality

Table of contents (excerpt): 0. Introduction and Summary; 1. Discrete Time Markov Chains, Reversibility in Time; 1.00. Introduction; 1.0. Notation, Transition Laws; 1.1. Irreducibility, Aperiodicity, Ergodicity, Stationary…