Chii-Ruey Hwang

We seek a global minimum of U: [0, 1]ⁿ → ℝ. The solution to (d/dt)xₜ = −∇U(xₜ) will find local minima. The solution to dxₜ = −∇U(xₜ) dt + √(2T) dwₜ, where w is standard (n-dimensional) Brownian motion and the boundaries are reflecting, will concentrate near the global minima of U, at least when the "temperature" T is small: the equilibrium distribution for xₜ is Gibbs …
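A minimal sketch of how such a diffusion can be simulated, assuming an Euler-Maruyama discretization of the reflected equation at a fixed small temperature T; the objective gradient, step size, and folding-based reflection below are illustrative choices, not taken from the paper.

```python
import numpy as np

def reflect(x):
    """Fold coordinates back into [0, 1] (reflecting boundaries)."""
    x = np.abs(x)                # reflect at 0
    x = 1.0 - np.abs(1.0 - x)    # reflect at 1
    return x

def langevin_minimize(grad_U, n, T=0.05, dt=1e-3, steps=200_000, seed=0):
    """Euler-Maruyama for dx_t = -grad U(x_t) dt + sqrt(2T) dw_t on [0,1]^n."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n)
    for _ in range(steps):
        noise = np.sqrt(2.0 * T * dt) * rng.standard_normal(n)
        x = reflect(x - grad_U(x) * dt + noise)
    return x

# Hypothetical multimodal gradient field, for demonstration only.
grad_U = lambda x: 2.0 * (x - 0.2) + np.sin(8 * np.pi * x)
print(langevin_minimize(grad_U, n=2))
```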
Let U be a given function defined on ℝᵈ and π(x) be a density function proportional to exp(−U(x)). The following diffusion X(t) is often used to sample from π(x): dX(t) = −∇U(X(t)) dt + √2 dW(t), X(0) = x₀. To accelerate the convergence, a family of diffusions with π(x) as their common equilibrium is considered: dX(t) = (−∇U(X(t)) + C(X(t))) dt + √2 …
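One known admissible family of extra drifts is C(x) = J∇U(x) with J skew-symmetric, which leaves π invariant. Below is a sketch of the accelerated dynamics for a standard Gaussian target; the step size, the particular J, and the chain length are illustrative assumptions.

```python
import numpy as np

def nonreversible_langevin(grad_U, J, x0, dt=1e-2, steps=100_000, seed=0):
    """Euler-Maruyama for dX = (-grad U(X) + C(X)) dt + sqrt(2) dW,
    with C(x) = J grad U(x), J skew-symmetric, so pi stays invariant."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = np.empty((steps, x.size))
    for k in range(steps):
        g = grad_U(x)
        x = x + (-g + J @ g) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Standard 2-d Gaussian target: U(x) = ||x||^2 / 2, so grad U(x) = x.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative skew-symmetric matrix
chain = nonreversible_langevin(lambda x: x, J, x0=[2.0, 2.0])
print(chain.mean(axis=0))  # should be near (0, 0)
```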
For a given set of observations, we consider the waiting times between successive returns to extreme values. Our main result is an invariance theorem that says that, as the size of the data set gets large, the empirical distribution of the waiting time converges with probability one to a geometric distribution, whenever the observations are i.i.d. or, more …
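A quick numerical illustration of the i.i.d. case (a sketch; the sample size, distribution, and threshold level p are arbitrary choices): exceedances of a fixed empirical quantile occur like Bernoulli(p) trials, so the gaps between them should look geometric.

```python
import numpy as np

def waiting_times(x, p=0.05):
    """Gaps between successive exceedances of the empirical (1-p)-quantile."""
    threshold = np.quantile(x, 1.0 - p)
    hits = np.flatnonzero(x > threshold)
    return np.diff(hits)

rng = np.random.default_rng(0)
w = waiting_times(rng.standard_normal(200_000), p=0.05)
# For i.i.d. data the waiting time should be close to geometric with
# mean 1/p = 20 and tail P(W > 40) = (1 - p)^40, roughly exp(-2).
print(w.mean(), (w > 40).mean(), np.exp(-40 * 0.05))  # crude tail check
```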
Starting from a robust, nonparametric definition of large returns ("excursions"), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal …
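A sketch of how such a recurrence analysis might be run on a price series; the random-walk stand-in data, the quantile threshold, and the return intervals below are hypothetical choices for illustration only.

```python
import numpy as np

def excursion_gaps(prices, interval, p=0.05):
    """Waiting times between 'excursions': returns over a given interval
    that exceed that interval's own (1-p)-quantile (nonparametric threshold)."""
    r = np.diff(np.log(prices[::interval]))
    hits = np.flatnonzero(np.abs(r) > np.quantile(np.abs(r), 1.0 - p))
    return np.diff(hits)

# Hypothetical price path (geometric random walk stand-in for real data).
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(1e-2 * rng.standard_normal(400_000)))
for k in (1, 4, 16):
    g = excursion_gaps(prices, k)
    print(k, g.mean(), np.median(g / g.mean()))  # compare rescaled gap shapes
```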
Statistical tests based on two characteristics of a small-world network and on Lempel-Ziv's measure of Kolmogorov-Chaitin algorithmic complexity are first proposed to scan an individual behavioral sequence for possible non-stationarity. Due to their fixed window width, these tests have drawbacks in mapping out regions of …
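A sketch of the complexity half of such a scan, assuming the classic Lempel-Ziv (1976) phrase-counting definition applied to a median-binarized sequence over fixed-width windows; the window width and step size are illustrative.

```python
import numpy as np

def lz_complexity(s):
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing of string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs earlier in s
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def scan(seq, width=500, step=100):
    """Sliding-window LZ complexity of a binarized sequence (fixed window
    width, as in the tests described above)."""
    med = np.median(seq)
    bits = ''.join('1' if v > med else '0' for v in seq)
    return [lz_complexity(bits[i:i + width])
            for i in range(0, len(bits) - width + 1, step)]
```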
Many kernel-based learning algorithms have a computational load that scales with the sample size n, due to the column size of the full kernel Gram matrix K. This article considers the Nyström low-rank approximation. It uses a reduced kernel K̂, which is n×m, consisting of m columns (say columns i₁, i₂, …, iₘ) randomly drawn from K. This approximation takes the …
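A minimal sketch of the approximation described above (the RBF kernel, the values of n and m, and uniform column sampling are illustrative assumptions): with C the n×m reduced kernel and W its m×m intersection block, K is approximated by C W⁺ Cᵀ.

```python
import numpy as np

def nystrom(K, m, seed=0):
    """Rank-m Nystrom approximation of an n x n kernel Gram matrix K,
    built from m randomly drawn columns: K ~ C @ pinv(W) @ C.T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(K.shape[0], size=m, replace=False)
    C = K[:, idx]                     # n x m reduced kernel (K-hat above)
    W = C[idx, :]                     # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

# Toy check on an RBF Gram matrix of n = 500 points.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq)
K_hat = nystrom(K, m=50)
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))  # relative error
```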
The number of equilibrium states (fixed points) is an important issue in the study of the dynamics of neural networks and statistical physics. The equilibrium states correspond to stored patterns (memory). The point of the dynamics is to recover a stored pattern when given a distorted pattern as an initial condition. In other words, the initial network state …
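As an illustration of this pattern-recovery dynamics, a standard Hopfield-style sketch (Hebbian storage and synchronous sign updates; not necessarily the specific model analyzed here):

```python
import numpy as np

def hebb_weights(patterns):
    """Hebbian storage: W = sum_p x_p x_p^T / n, with zero diagonal."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recover(W, x, max_iters=100):
    """Iterate x <- sign(W x); a fixed point is an equilibrium (stored pattern)."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iters):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break                      # reached an equilibrium state
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 200))            # 3 stored patterns
W = hebb_weights(patterns)
noisy = patterns[0] * rng.choice([1, -1], 200, p=[0.9, 0.1])  # distorted input
print(np.mean(recover(W, noisy) == patterns[0]))   # overlap with the original
```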
• Many kernel-based learning algorithms have a computational load that scales with the sample size.
• The Nyström low-rank approximation is designed to reduce this computation.
• We propose the spectrum decomposition condition, with a theoretical justification.
• Asymptotic error bounds on eigenvalues and eigenvectors are derived.
• Numerical experiments are provided for covariance …