Using Randomization to Break the Curse of Dimensionality

  • John Rust
  • Published 1 May 1997
  • Mathematics, Computer Science
  • Econometrica
This paper introduces random versions of successive approximations and multigrid algorithms for computing approximate solutions to a class of finite and infinite horizon Markovian decision problems. The author proves that these algorithms succeed in breaking the 'curse of dimensionality' for a subclass of Markovian decision problems known as discrete decision processes. 
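To illustrate the idea of a randomized successive-approximation scheme (a rough sketch, not Rust's exact algorithm), the following code applies a self-normalized Monte Carlo Bellman operator over uniformly sampled states. The payoff function `u`, transition density `p`, and action set are hypothetical placeholders supplied by the caller.

```python
import numpy as np

def random_bellman_operator(V, states, u, p, actions, beta):
    """One application of a randomized Bellman operator (a sketch).

    V       : current value estimates at the sampled states
    states  : N state points sampled uniformly from the state space
    u       : u(s, a) -> flow payoff (hypothetical callable)
    p       : p(s_next, s, a) -> transition density (hypothetical callable)
    actions : finite action set (a discrete decision process)
    beta    : discount factor in (0, 1)
    """
    N = len(states)
    V_new = np.empty(N)
    for i, s in enumerate(states):
        best = -np.inf
        for a in actions:
            # self-normalized Monte Carlo estimate of E[V(s') | s, a]
            w = np.array([p(sj, s, a) for sj in states])
            ev = w @ V / w.sum()
            best = max(best, u(s, a) + beta * ev)
        V_new[i] = best
    return V_new
```

Iterating this operator from any starting guess approximates the fixed point of the sampled problem; the key point in the paper is that the sample size needed for a given accuracy does not grow exponentially in the state dimension for the discrete decision process subclass.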


A Comment on "Using Randomization to Break the Curse of Dimensionality"
Rust (1997) discovered a class of dynamic programs that can be solved in polynomial time with a randomized algorithm. I show that this class is limited, as it requires all but a vanishingly small fraction of state variables to behave arbitrarily similarly to i.i.d. uniform random variables.
On the limits of using randomness to break a dynamic program's curse of dimensionality
Rust (1997) discovered a class of dynamic programs that can be solved in polynomial time with a randomized algorithm, but this class is more limited than initially thought, as it requires all but a vanishingly small fraction of state variables to behave arbitrarily similarly to i.i.d. uniform random variables.
Randomly Sampling Actions In Dynamic Programming
  • C. Atkeson
  • Computer Science
    2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning
  • 2007
We describe an approach to reducing the curse of dimensionality for deterministic dynamic programming with continuous actions by randomly sampling actions while computing a steady-state value function.
No Curse of Dimensionality for Contraction Fixed Points Even in the Worst Case
This paper proves that there exist deterministic algorithms for computing approximations to fixed points for some classes of quasilinear contraction mappings which are strongly tractable, i.e., in the worst case the number of function evaluations needed to compute an ε-approximation to the solution at any finite number of points in its domain is bounded by C/ε^p.
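The successive-approximation scheme underlying these complexity bounds can be sketched in a few lines. This is a generic illustration, not the paper's algorithm: for a β-contraction T, the Banach fixed-point theorem gives the a-posteriori guarantee |x_{k+1} - x*| ≤ β/(1-β)·|x_{k+1} - x_k|, which yields a natural stopping rule.

```python
def fixed_point(T, x0, beta, eps):
    """Successive approximation for a beta-contraction T (a sketch).

    Stops when the a-posteriori bound beta/(1-beta)*|x_{k+1} - x_k| <= eps
    holds, which guarantees |x_{k+1} - x*| <= eps for a beta-contraction.
    """
    x = x0
    while True:
        x_next = T(x)
        if beta / (1.0 - beta) * abs(x_next - x) <= eps:
            return x_next
        x = x_next
```

Since the error contracts by β per iteration, reaching tolerance ε takes on the order of log(1/ε)/log(1/β) iterations; the C/ε^p complexity bounds then come from the per-iteration cost of evaluating T.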
A simulation-based approach to stochastic dynamic programming
A simulation-based approach to stochastic dynamic programming that works in both continuous and discrete state and decision spaces while avoiding discretization errors that plague traditional methods is developed.
Randomness Does Not Break the Curse of Dimensionality Unless the Dynamic Program Is Trivial
  • R. Bray
  • Economics
    SSRN Electronic Journal
  • 2019
Rust (1997) developed a randomized algorithm that can solve discrete-choice dynamic programs to any degree of accuracy in polynomial time, under some assumptions. Rust believed these assumptions to be mild, but this paper argues that they restrict the class to essentially trivial dynamic programs.
The curse of instability
This commentary argues that thinking on a different level helps to understand why the authors face the curse of dimensionality, and claims that the curse of instability is a strong indicator of analytical difficulties and multiscale complexity.
Fitted Value Function Iteration with Probability One Contractions
Optimal Approximation Schedules for a Class of Iterative Algorithms, With an Application to Multigrid Value Iteration
This paper shows that, for a linearly convergent algorithm, the optimal rate of refinement approaches the rate of convergence of the exact algorithm itself, regardless of the tolerance-complexity relationship.
Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives
The authors propose a stochastic approximation algorithm that tunes weights of a linear combination of basis functions in order to approximate a value function and prove that this algorithm converges and that the limit of convergence has some desirable properties.


Asynchronous stochastic approximation and Q-learning
The Q-learning algorithm, a reinforcement learning method for solving Markov decision problems, is studied to establish its convergence under conditions more general than previously available.
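A minimal tabular version of the Q-learning update discussed in this entry can be sketched as follows. The environment interface `env_step` is a hypothetical placeholder; the 1/n step size is one standard choice satisfying the stochastic-approximation conditions under which convergence is established.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, beta, episodes, rng):
    """Tabular Q-learning (a sketch; env_step is a hypothetical interface).

    env_step(s, a, rng) -> (reward, next_state). Updates are asynchronous:
    each step revises a single (s, a) entry with a decaying step size.
    """
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(100):
            a = rng.integers(n_actions)      # uniform exploration
            r, s_next = env_step(s, a, rng)
            visits[s, a] += 1
            alpha = 1.0 / visits[s, a]       # decaying step size
            Q[s, a] += alpha * (r + beta * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```

The update revises one state-action entry at a time, which is exactly the asynchronous setting the paper's convergence analysis addresses.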
The complexity of dynamic programming
An optimal multigrid algorithm for continuous state discrete time stochastic control
A multigrid version of the successive approximation algorithm whose requirements are within a constant factor from the lower bounds when a certain mixing condition is satisfied is provided, and the algorithm is optimal.
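The multigrid idea behind this entry, iterating cheaply on coarse grids and interpolating the result forward as a warm start on finer grids, can be sketched as follows. The `bellman` callable and the grid hierarchy are hypothetical placeholders, and the stopping rule assumes `bellman` is a β-contraction in the sup norm.

```python
import numpy as np

def multigrid_value_iteration(bellman, grids, beta, tol):
    """Coarse-to-fine successive approximation (a sketch of the multigrid idea).

    bellman(V, grid) -> updated values on `grid`; grids is a list of 1-D
    state grids ordered coarse to fine.
    """
    V = np.zeros(len(grids[0]))
    for level, grid in enumerate(grids):
        if level > 0:
            # prolongate the coarse-grid solution onto the finer grid
            V = np.interp(grid, grids[level - 1], V)
        while True:
            V_new = bellman(V, grid)
            # a-posteriori contraction bound as the stopping rule
            if beta / (1.0 - beta) * np.max(np.abs(V_new - V)) <= tol:
                V = V_new
                break
            V = V_new
    return V
```

Because most iterations happen on cheap coarse grids and the fine grids start from a good initial guess, the total work can come within a constant factor of the lower bound, which is the sense in which the paper's algorithm is optimal.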
Accuracy Estimates for a Numerical Approach to Stochastic Growth Models
A discretized version of the dynamic programming algorithm is developed, and it is shown that under the proposed scheme the computed value function converges quadratically to the true value function and the computed policy function converges linearly, as the mesh size of the discretization converges to zero.
Polynomial approximation—a new computational technique in dynamic programming: Allocation processes
In principle, this equation can be solved computationally using the same technique that applies so well to (1.3). In practice (see [1] for a discussion), questions of time and accuracy arise.
Discretizing dynamic programs
  • B. Fox
  • Computer Science, Mathematics
  • 1973
Discretizing certain discrete-time, uncountable-state dynamic programs such that the respective solutions to a sequence of discretized versions converge uniformly to the solution of the original problem.
Asymptotics via Empirical Processes
This paper offers a glimpse into the theory of empirical processes. Two asymptotic problems are sketched as motivation for the study of maximal inequalities for stochastic processes.
On irregularities of distribution of real sequences.
  • F. Chung, R. Graham
  • Mathematics
    Proceedings of the National Academy of Sciences of the United States of America
  • 1981
A natural measure of the amount of unavoidable clustering that must occur in any bounded infinite sequence of real numbers is studied and sequences that achieve this value are exhibited.
Information-based complexity
Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where available information is partial or approximate, and to apply these results to specific problems.
Convergence of discretization procedures in dynamic programming
This short paper considers a discretization procedure often employed in practice and shows that the solution of the discretized algorithm converges to the solution of the continuous algorithm as the discretization grids become finer and finer.