We introduce a new formalism for computing expectations of functionals of arbitrary random vectors, by using generalised integration by parts formulae. In doing so we extend recent representation formulae for the score function introduced in [19] and also provide a new proof of a central identity first discovered in [7]. We derive a representation for the …
Let X1, X2, . . . , Xn be independent random variables uniformly distributed on [0, 1]. We observe these sequentially and have to stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation? What is the value of the expected rank (as a function of n) and what is …
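A tractable variant of this expected-rank problem restricts the stopping rule to depend only on the relative ranks of the observations (the memoryless setting of Chow, Moriguti, Robbins and Samuels); the full-information version, where the observed values themselves may be used, is substantially harder. As a sketch only, and not the rule studied in the paper above, the relative-rank value can be computed by backward induction: at stage i, stopping on an observation of relative rank r yields expected final rank r(n+1)/(i+1), and one stops whenever this is no larger than the expected cost of continuing.

```python
def expected_rank_value(n):
    """Optimal expected rank for the relative-rank stopping problem
    with n i.i.d. observations, computed by backward induction."""
    # At the last stage stopping is forced; the relative rank there
    # equals the absolute rank, uniform on {1,...,n}, mean (n+1)/2.
    c = (n + 1) / 2
    for i in range(n - 1, 0, -1):
        # At stage i the relative rank r is uniform on {1,...,i};
        # stopping then gives expected absolute rank r*(n+1)/(i+1).
        stop_values = (r * (n + 1) / (i + 1) for r in range(1, i + 1))
        # Take the better of stopping and continuing, averaged over r.
        c = sum(min(s, c) for s in stop_values) / i
    return c
```

For instance, `expected_rank_value(2)` gives 1.5 and `expected_rank_value(3)` gives 5/3; as n grows the value increases towards the known limit ≈ 3.8695.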
In the first part of the paper we use a new Fourier technique to obtain Stein characterizations for random variables in the second Wiener chaos. We provide the connection between this result and similar conclusions that can be derived using Malliavin calculus. We also introduce a new form of discrepancy which we use, in the second part of the paper, to …
We propose a new general version of Stein’s method for univariate distributions. In particular we propose a canonical definition of the Stein operator of a probability distribution which is based on a linear difference or differential-type operator. The resulting Stein identity highlights the unifying theme behind the literature on Stein’s method (both for …)
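For orientation, the best-known instance of such an operator (not necessarily the canonical construction proposed above) is the classical Stein operator of the standard Gaussian,
\[
\mathcal{A}f(x) = f'(x) - x\, f(x),
\]
which satisfies the characterizing Stein identity
\[
\mathbb{E}\left[f'(Z) - Z f(Z)\right] = 0 \ \text{ for all suitable } f
\quad \Longleftrightarrow \quad Z \sim \mathcal{N}(0,1).
\]
Discrete distributions admit analogous identities with the derivative replaced by a difference operator, e.g. the Poisson($\lambda$) operator $\mathcal{A}f(k) = \lambda f(k+1) - k f(k)$.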
Classical estimation techniques for linear models either are inconsistent, or perform rather poorly, under α-stable error densities; most of them are not even rate-optimal. In this paper, we propose an original one-step R-estimation method and investigate its asymptotic performance under stable densities. Contrary to traditional least squares, the proposed …
Pinsker's inequality states that the relative entropy between two random variables X and Y dominates the square of the total variation distance between X and Y. In this paper, we introduce generalized Fisher information distances and prove that these also dominate the square of the total variation distance. To this end, we introduce a general discrete Stein …
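The classical inequality referred to above reads $D(P\,\|\,Q) \ge 2\,d_{\mathrm{TV}}(P,Q)^2$. It can be checked numerically on discrete distributions; the helper names below are illustrative, not taken from the paper.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in nats for discrete distributions
    given as lists of probabilities over a common support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance: half the L1 distance between the pmfs."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Pinsker's inequality: D(p || q) >= 2 * TV(p, q)^2.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
```

Here `total_variation(p, q)` is 0.1, so the right-hand side is 0.02, while `kl_divergence(p, q)` ≈ 0.0253, consistent with the inequality.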
Gauss’ principle states that the maximum likelihood estimator of the parameter in a location family is the sample mean for all samples of all sample sizes if and only if the family is Gaussian. There exist many extensions of this result in diverse directions. In this paper we propose a unified treatment of this literature. In doing so we define the …
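The "if" direction of Gauss' principle is a one-line computation: for a Gaussian location family the log-likelihood of a sample $x_1,\dots,x_n$ is
\[
\ell(\theta) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\theta)^2,
\]
so the likelihood equation $\ell'(\theta) = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\theta) = 0$ yields $\hat\theta = \bar{x}$ for every sample and every $n$. The converse, that no other location family has this property, is the substantive half of the characterization.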