Alexander V. Nazin

We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under the ℓ1-constraint. It is defined by a stochastic version of the mirror descent algorithm (i.e., of the method which performs gradient descent in the dual space) …
The problem of finding the eigenvector corresponding to the largest eigenvalue of a stochastic matrix has numerous applications in ranking search results, multi-agent consensus, networked control and data mining. The well-known power method is a typical tool for its solution. However, randomized methods can be competitive with standard ones; they require …
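The classical power method referred to above can be sketched as follows. This is a minimal illustration, not the randomized variant the abstract contrasts it with; the matrix `P` is a made-up 3-state example, and for a column-stochastic matrix the dominant eigenvalue is 1, so the iteration converges to the stationary distribution.

```python
import numpy as np

# Hypothetical 3-state column-stochastic matrix (each column sums to 1).
P = np.array([
    [0.5, 0.2, 0.3],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

def power_method(P, tol=1e-10, max_iter=1000):
    """Iterate x <- P x, renormalized in l1, until a fixed point:
    the Perron eigenvector of the stochastic matrix P."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)      # uniform start on the simplex
    for _ in range(max_iter):
        x_new = P @ x
        x_new /= x_new.sum()     # renormalize onto the simplex
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

pi = power_method(P)             # satisfies P @ pi ≈ pi
```

Each iteration costs one matrix-vector product; randomized schemes trade exact products for cheaper stochastic updates, which is what makes them competitive at scale.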
The direct weight optimization (DWO) approach is a method for finding optimal function estimates via convex optimization, applicable to nonlinear system identification. In this paper, an extended version of the DWO approach is introduced. A general function class description — which includes several important special cases — is presented, and different …
A general framework for estimating nonlinear functions and systems is described and analyzed in this paper. Identification of a system is seen as estimation of a predictor function. The considered predictor function estimate at a particular point is defined to be affine in the observed outputs, and the estimate is defined by the weights in this expression. …
We consider the problem of constructing an aggregated estimator from a finite class of base functions which approximately minimizes a convex risk functional under the ℓ1-constraint. For this purpose, we propose a stochastic procedure, the mirror descent, which performs gradient descent in the dual space. The generated estimates are additionally averaged in …
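The dual-space gradient descent with averaging described above can be sketched in a standard entropic realization, where the ℓ1-constrained set is the probability simplex, the dual variable accumulates stochastic gradients, and the primal iterate is recovered by a softmax. This is an illustrative sketch on a toy quadratic risk, not the paper's exact procedure; the step size and target `c` are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_mirror_descent(stoch_grad, dim, n_steps, step=0.1):
    """Entropic stochastic mirror descent on the probability simplex:
    dual variable z takes gradient steps, the primal iterate w is its
    softmax image, and the returned estimate averages the iterates."""
    z = np.zeros(dim)                 # dual (mirror) variable
    avg = np.zeros(dim)
    for t in range(1, n_steps + 1):
        w = np.exp(z - z.max())
        w /= w.sum()                  # primal iterate on the simplex
        z -= step * stoch_grad(w)     # gradient step in the dual space
        avg += (w - avg) / t          # running average of the iterates
    return avg

# Toy risk E||w - c||^2 with noisy gradients; minimizer c lies on the simplex.
c = np.array([0.6, 0.3, 0.1])
g = lambda w: 2 * (w - c) + 0.01 * rng.standard_normal(3)
w_hat = stochastic_mirror_descent(g, dim=3, n_steps=5000)
```

The averaging step is what the truncated sentence refers to: averaged iterates enjoy the usual convergence guarantees of stochastic mirror descent under noisy gradients.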
The concept of Just-in-Time models has been introduced for models that are not estimated until they are really needed. The prediction is taken as a weighted average of neighboring points in the regressor space, such that an optimal bias/variance trade-off is achieved. The asymptotic properties of the method are investigated, and are compared to the …
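The weighted-average prediction described above can be sketched as follows. This is a minimal local-estimation illustration using Gaussian proximity weights, which is one common choice; the cited approach selects the weights for an optimal bias/variance trade-off, which this sketch does not attempt. The data and bandwidth are made up.

```python
import numpy as np

def jit_predict(x_query, X, y, bandwidth=0.1):
    """Just-in-Time style local prediction: weight stored observations
    by the proximity of their regressors to the query point and return
    the weighted average of the corresponding outputs."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian proximity weights
    w /= w.sum()
    return w @ y

# Toy data: noiseless y = sin(x) sampled on a 1-D regressor grid.
X = np.linspace(0, np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()
y_hat = jit_predict(np.array([np.pi / 2]), X, y)
# y_hat is close to sin(pi/2) = 1
```

Nothing is fitted until the query arrives, which is the defining trait of Just-in-Time models: the model is formed locally, on demand, from the stored data.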