
The cross-entropy method is a versatile heuristic tool for solving difficult estimation and optimization problems, based on Kullback–Leibler (or cross-entropy) minimization. As an optimization method it unifies many existing population-based optimization heuristics. In this chapter we show how the cross-entropy method can be applied to a diverse range of…

- By Z. I. Botev, J. F. Grotowski, D. P. Kroese
- 2010

We present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing…
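For contrast, the classical fixed-bandwidth Gaussian kernel density estimator that adaptive methods improve upon can be sketched in a few lines. This is only the baseline, not the diffusion estimator from the paper:

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Fixed-bandwidth Gaussian kernel density estimate evaluated on `grid`.

    Classical baseline; adaptive/diffusion estimators vary the amount of
    smoothing locally instead of using one global `bandwidth`.
    """
    z = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(data) * bandwidth)

# Example: estimate a standard normal density from 1000 samples
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 1000)
grid = np.linspace(-5, 5, 201)
density = gaussian_kde(data, grid, bandwidth=0.3)
```

The bandwidth 0.3 here is an arbitrary choice; selecting it well is exactly what plug-in rules address.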

Global likelihood maximization is an important aspect of many statistical analyses. Often the likelihood function is highly multi-extremal. This presents a significant challenge to standard search procedures, which often settle too quickly into an inferior local maximum. We present a new approach based on the cross-entropy (CE) method, and illustrate its…

The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine…
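The basic CE optimization loop can be sketched very compactly. The version below is a minimal illustration for 1-D continuous maximization with a Gaussian sampling density, omitting the smoothing and stopping refinements a full treatment would discuss; the objective function is an arbitrary multi-extremal example:

```python
import numpy as np

def ce_maximize(f, mu=0.0, sigma=10.0, n=200, n_elite=20, iters=60, seed=0):
    """Minimal cross-entropy method for 1-D continuous maximization.

    Each iteration: sample candidates from N(mu, sigma^2), keep the elite
    (best-scoring) samples, and refit mu and sigma to the elite set.
    """
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        x = rng.normal(mu, sigma, n)
        elite = x[np.argsort(f(x))[-n_elite:]]        # top-performing samples
        mu, sigma = elite.mean(), elite.std() + 1e-12  # refit sampling density
    return mu

# Example: a multi-extremal objective; the global maximum lies near x = 2.45
f = lambda x: -(x - 2.0) ** 2 + 0.5 * np.cos(5.0 * x)
x_star = ce_maximize(f)
```

Because early iterations sample widely, the method can escape the inferior local maxima that trap greedy local search.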

- Dirk P. Kroese
- 2004

The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, …, Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, …, Y_n). One estimator uses importance sampling (for Y_n…
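The identity behind the estimators can be checked numerically against crude Monte Carlo. The Exp(1) increments and the parameters below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Check of the identity P(S_n > u) = n * P(S_n > u, M_n = Y_n).
# By exchangeability, the maximum is equally likely to be any of the n
# coordinates, so restricting to "last variable is the max" and scaling
# by n gives an unbiased estimator.
rng = np.random.default_rng(1)
n, u, reps = 10, 15.0, 200_000
Y = rng.exponential(1.0, size=(reps, n))
S = Y.sum(axis=1)

crude = (S > u).mean()                                   # plain indicator
ident = n * ((S > u) & (Y.argmax(axis=1) == n - 1)).mean()
```

For the truly rare cases the paper targets, the identity-based form is combined with importance sampling or conditioning to reduce variance.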

The RESTART method is a widely applicable simulation technique for the estimation of rare event probabilities. The method is based on the idea of restarting the simulation in certain system states, in order to generate more occurrences of the rare event. One of the main questions for any RESTART implementation is how and when to restart the simulation, in…
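The restart idea can be illustrated with a two-stage fixed-splitting sketch for a simple birth-death walk. This is a minimal illustration in the spirit of RESTART, not the RESTART algorithm itself; the walk, threshold, and splitting factor are illustrative assumptions:

```python
import numpy as np

def hits_high(x, lo, hi, p_up, rng):
    """Birth-death walk from x: step +1 w.p. p_up, else -1.
    Returns True if `hi` is reached before `lo`."""
    while lo < x < hi:
        x += 1 if rng.random() < p_up else -1
    return x == hi

def splitting(L=10, mid=5, copies=20, reps=5000, p_up=0.4, seed=0):
    """Two-stage splitting estimate of P(walk from 1 reaches L before 0).

    Trajectories that reach the intermediate level `mid` are split into
    `copies` independent continuations, generating more rare-event hits.
    """
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(reps):
        if hits_high(1, 0, mid, p_up, rng):              # stage 1: reach mid
            hits = sum(hits_high(mid, 0, L, p_up, rng) for _ in range(copies))
            acc += hits / copies                          # stage 2: split
    return acc / reps

p_hat = splitting()
# Exact gambler's-ruin value for comparison: (1 - r) / (1 - r**L), r = 0.6/0.4
```

Choosing the thresholds and splitting factors well is exactly the implementation question the paper studies.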

The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level *L* before it becomes empty, starting…

This chapter describes how difficult statistical estimation problems can often be solved efficiently by means of the cross-entropy (CE) method. The CE method can be viewed as an adaptive importance sampling procedure that uses the cross-entropy or Kullback–Leibler divergence as a measure of closeness between two sampling distributions. The CE method is…
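The adaptive importance-sampling view of CE can be sketched on a toy rare-event problem. The exponential sampling family and all parameters below are illustrative assumptions:

```python
import numpy as np

def ce_rare_event(gamma=8.0, n=10_000, rho=0.1, seed=0):
    """CE adaptive importance sampling for p = P(X > gamma), X ~ Exp(1).

    Sampling family: Exp with mean v.  Each iteration raises an adaptive
    level toward gamma and updates v as the likelihood-ratio-weighted mean
    of the elite samples, steering the sampler toward the rare event.
    """
    rng = np.random.default_rng(seed)
    v = 1.0
    for _ in range(20):
        x = rng.exponential(v, n)
        w = np.exp(-x) / (np.exp(-x / v) / v)        # f(x) / g(x; v)
        level = min(gamma, np.quantile(x, 1 - rho))  # adaptive level
        elite = x >= level
        v = (w[elite] * x[elite]).sum() / w[elite].sum()
        if level >= gamma:
            break
    x = rng.exponential(v, n)                        # final IS estimate
    w = np.exp(-x) / (np.exp(-x / v) / v)
    return np.mean(w * (x > gamma))

p_hat = ce_rare_event()   # true value is exp(-8), about 3.35e-4
```

Crude Monte Carlo would need millions of samples for a probability this small; the tilted sampler hits the event a large fraction of the time and reweights by the likelihood ratio.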

We present new theoretical convergence results on the Cross-Entropy method for discrete optimization. Our primary contribution is to show that a popular implementation of the Cross-Entropy method converges, and finds an optimal solution with probability arbitrarily close to 1. We also give necessary conditions and sufficient conditions under which an…

The cross-entropy and minimum cross-entropy methods are well-known Monte Carlo simulation techniques for rare-event probability estimation and optimization. In this paper, we investigate how these methods can be extended to provide a general non-parametric cross-entropy framework based on φ-divergence distance measures. We show how the χ² distance, in…