
- V. G. SPOKOINY
- 1996

Let a function f be observed with noise. We wish to test the null hypothesis that the function is identically zero against a composite nonparametric alternative: functions from the alternative set are separated away from zero in an integral (e.g., L2) norm and also possess some smoothness properties. The minimax rate of testing for this problem was…
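To make the testing problem concrete, here is a toy illustration (not the paper's adaptive minimax procedure): with observations y_i = f(x_i) + σ·ε_i, a simple chi-square test rejects H0: f ≡ 0 when the normalized squared L2 norm of the data is large. The function name and the normal approximation to the critical value are my own choices.

```python
import numpy as np

def l2_test(y, sigma=1.0):
    """Toy chi-square test of H0: f == 0 given y_i = f(x_i) + sigma*eps_i.

    Under H0, sum((y/sigma)**2) is chi-square with n degrees of freedom;
    we reject at (roughly) the 5% level using a normal approximation to
    the chi-square quantile. Illustration only -- not the adaptive,
    minimax-rate-optimal procedure studied in the paper.
    """
    n = len(y)
    stat = np.sum((y / sigma) ** 2)
    crit = n + 1.645 * np.sqrt(2 * n)  # approximate 95% chi-square quantile
    return stat > crit

rng = np.random.default_rng(1)
reject_null = l2_test(np.zeros(500))                  # no signal at all
reject_alt = l2_test(1.0 + rng.standard_normal(500))  # f == 1, separated in L2
```

A signal separated from zero in L2 is detected easily here; the hard part, which the paper addresses, is doing so at the minimax rate without knowing the smoothness of the alternative.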

We develop a new test of a parametric model of a conditional mean function against a nonparametric alternative. The test adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric model converges to zero at the fastest possible rate. This rate is slower than n^{-1/2}. Some…

Suppose that one observes a process Y on the unit interval, where dY_t = n^{1/2} f(t) dt + dW_t with an unknown function parameter f, given scale parameter n ≥ 1 ("sample size") and standard Brownian motion W. We propose two classes of tests of qualitative nonparametric hypotheses about f, such as monotonicity or concavity. These tests are asymptotically optimal…
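The Gaussian white-noise model above can be simulated by discretizing the interval: over a grid of step dt, the increments are dY_i = sqrt(n)·f(t_i)·dt + sqrt(dt)·Z_i with standard normal Z_i. The sketch below is an assumed discretization for illustration; the function name is my own.

```python
import numpy as np

def simulate_white_noise_model(f, n, num_steps=1000, seed=0):
    """Simulate increments of dY_t = n^{1/2} f(t) dt + dW_t on [0, 1].

    Euler discretization: dY_i = sqrt(n) * f(t_i) * dt + sqrt(dt) * Z_i,
    where Z_i are i.i.d. standard normal (Brownian increments).
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / num_steps
    t = np.arange(num_steps) * dt
    dW = np.sqrt(dt) * rng.standard_normal(num_steps)
    dY = np.sqrt(n) * f(t) * dt + dW
    return t, dY

# Example: a monotone signal f(t) = t; larger n makes the drift dominate the noise.
t, dY = simulate_white_noise_model(lambda t: t, n=10_000)
path = np.cumsum(dY)  # the observed process Y_t on the grid
```

For n = 10,000 the terminal value of the path is close to sqrt(n)·∫f(t)dt = 50, with noise of unit order, which is why qualitative features of f such as monotonicity become testable as n grows.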

- Gilles Blanchard, Motoaki Kawanabe, Masashi Sugiyama, Vladimir G. Spokoiny, Klaus-Robert Müller
- Journal of Machine Learning Research
- 2006

Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (non-Gaussian component analysis), is based on a linear…

- Marian Hristache, Anatoli Juditsky, Jörg Polzehl, Vladimir Spokoiny
- 2001

We propose a new method of effective dimension reduction for a multi-index model which is based on iterative improvement of the family of average derivative estimates. The procedure is computationally straightforward and does not require any prior information about the structure of the underlying model. We show that in the case when the effective dimension…

- Yurii Nesterov, Vladimir G. Spokoiny
- Foundations of Computational Mathematics
- 2017

In this paper, we prove the complexity bounds for methods of Convex Optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the…
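The general idea — descending along random Gaussian directions using only function values — can be sketched as follows. The step size, smoothing parameter, and iteration count here are ad hoc choices for illustration, not the schemes with proven complexity bounds from the paper.

```python
import numpy as np

def random_search_minimize(f, x0, step=0.01, mu=1e-4, iters=2000, seed=0):
    """Gradient-free minimization using random Gaussian search directions.

    At each step, sample u ~ N(0, I) and form the finite difference
    (f(x + mu*u) - f(x)) / mu, an estimate of the directional derivative
    <grad f(x), u>. Move against that estimate along u. No gradients of
    f are ever computed -- only function values.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = rng.standard_normal(x.shape)
        g = (f(x + mu * u) - f(x)) / mu  # directional derivative estimate
        x = x - step * g * u
    return x

# Toy quadratic in dimension 5: the minimum is at the origin.
x_min = random_search_minimize(lambda x: np.sum(x**2), x0=np.ones(5))
```

On this smooth convex toy problem the iterate contracts toward the origin; the paper's result quantifies the price of being gradient-free as roughly a factor of the dimension n in the iteration count.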

- V G Spokoiny
- 1998

The paper is concerned with the problem of image denoising. We consider the case of black-and-white images consisting of a finite number of regions with smooth boundaries, where the image value is assumed to be piecewise constant within each region. A new method of image denoising is proposed that is adaptive (assumption-free) with respect to the number of regions and…

This paper is concerned with testing the hypothesis that a conditional median function is linear against a nonparametric alternative with unknown smoothness. We develop a test that is uniformly consistent against alternatives whose distance from the linear model converges to zero at the fastest possible rate. The test accommodates conditional…

- Karsten Tabelow, Jörg Polzehl, Vladimir G. Spokoiny, Henning U. Voss
- NeuroImage
- 2008

Diffusion Tensor Imaging (DTI) data is characterized by a high noise level. Thus, estimation errors of quantities like anisotropy indices or the main diffusion direction used for fiber tracking are relatively large and may significantly confound the accuracy of DTI in clinical or neuroscience applications. Besides pulse sequence optimization, noise…

- V G Spokoiny
- 1996

We propose a method of adaptive estimation of a regression function that is near optimal in the classical sense of the mean integrated error. At the same time, the estimator is shown to be very sensitive to discontinuities or change-points of the underlying function f or its derivatives. For instance, in the case of a jump of a regression function,…