Statistical learning is emerging as a promising field in which a number of algorithms from machine learning are interpreted as statistical methods and vice versa. Owing to its good practical performance, boosting is one of the most studied machine learning techniques. We propose algorithms for multivariate density estimation and classification. They are generated by …
In this paper we propose a simple multistep regression smoother, constructed iteratively by applying L2 boosting to the Nadaraya-Watson estimator. We find, in both theoretical analysis and simulation experiments, that the bias converges exponentially fast while the variance diverges exponentially slowly. The first boosting step is …
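To make the iteration concrete, the following is a minimal sketch of such a multistep smoother under common assumptions: a Nadaraya-Watson base learner with a Gaussian kernel and fixed bandwidth h, repeatedly fitted to the current residuals and added to the running fit (plain L2 boosting). The function names and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def nw_smoother(x_train, y, x_eval, h):
    """Nadaraya-Watson estimate of y at x_eval, Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def l2_boost_nw(x, y, h, n_steps):
    """L2 boosting: smooth the current residuals and add the result to the fit."""
    fit = np.zeros_like(y, dtype=float)
    for _ in range(n_steps):
        residual = y - fit
        fit = fit + nw_smoother(x, residual, x, h)
    return fit

# Toy usage: the first step is the ordinary NW fit; later steps reduce its bias.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
yhat = l2_boost_nw(x, y, h=0.1, n_steps=5)
```

Each additional step lowers the bias of the fit, while too many steps eventually start to interpolate the noise, which is the bias/variance behaviour the abstract describes.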
We consider data of the form (x_i, y_i) in which both x and y lie on the circle or sphere, and we seek to model a relationship in which y can be predicted from x. Examples of such data are the modelling of tectonic plate movement and the development of an electromagnetic motion-tracking system which can be used to track the orientation and position of a sensor moving …
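As a concrete illustration of one standard nonparametric route for circle-to-circle data (not necessarily the estimator developed in the paper, whose description is truncated above), one can smooth the sine and cosine components of y with a von Mises kernel in x and recover the predicted angle with atan2; the concentration kappa acts as an inverse bandwidth, and all names here are hypothetical.

```python
import numpy as np

def circular_nw_regression(x_data, y_data, x_eval, kappa):
    """Predict an angle y from an angle x: von Mises-weighted averages of
    sin(y) and cos(y), combined via atan2 so the prediction stays on the circle."""
    w = np.exp(kappa * np.cos(x_eval[:, None] - x_data[None, :]))
    s = (w @ np.sin(y_data)) / w.sum(axis=1)
    c = (w @ np.cos(y_data)) / w.sum(axis=1)
    return np.arctan2(s, c)

# Toy usage: y is a noisy rotation of x by pi/4.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 300)
y = x + np.pi / 4 + rng.vonmises(0.0, 20.0, size=x.size)
grid = np.linspace(-np.pi, np.pi, 100)
yhat = circular_nw_regression(x, y, grid, kappa=10.0)
```

Working with the sine and cosine components avoids the wrap-around problem that an ordinary smoother applied directly to the raw angles would suffer from.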
Confidence intervals for densities built on the basis of standard nonparametric theory are doomed to have poor coverage rates due to bias. Studies on coverage improvement exist, but reasonably behaved interval estimators are needed. We explore the use of small-bias kernel-based methods to construct confidence intervals, in particular using a geometric …
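For orientation, the sketch below shows only the standard, bias-ignoring pointwise interval whose poor coverage the passage refers to; the small-bias construction itself is cut off above and is not reproduced here. The Gaussian kernel, the fixed bandwidth and the function names are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def kde_gauss(x_eval, data, h):
    """Gaussian-kernel density estimate at the points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return norm.pdf(u).mean(axis=1) / h

def naive_pointwise_ci(x_eval, data, h, level=0.95):
    """Standard interval fhat +/- z * sqrt(fhat * R(K) / (n h)), with
    R(K) = 1 / (2 sqrt(pi)) for the Gaussian kernel; it ignores the
    smoothing bias, which is what hurts its coverage."""
    fhat = kde_gauss(x_eval, data, h)
    rk = 1.0 / (2.0 * np.sqrt(np.pi))
    se = np.sqrt(fhat * rk / (len(data) * h))
    z = norm.ppf(0.5 + level / 2.0)
    return fhat - z * se, fhat + z * se

# Toy usage
rng = np.random.default_rng(0)
data = rng.normal(size=400)
grid = np.linspace(-3, 3, 121)
lower, upper = naive_pointwise_ci(grid, data, h=0.3)
```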
Measuring the quality of determined protein structures is a very important problem in bioinformatics. Kernel density estimation is a well-known nonparametric method which is often used for exploratory data analysis. Recent advances, which have extended previous linear methods to multi-dimensional circular data, give a sound basis for the analysis of …
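In the univariate case, a circular kernel density estimate can be written down directly with a von Mises kernel; the sketch below shows that one-dimensional version only, not the multi-dimensional (toroidal) extensions the abstract alludes to, and the choice of kappa as the smoothing parameter is an assumption of this illustration.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def vonmises_kde(theta_eval, theta_data, kappa):
    """Circular KDE: average of von Mises densities centred at the data,
    with concentration kappa acting as an inverse bandwidth."""
    k = np.exp(kappa * np.cos(theta_eval[:, None] - theta_data[None, :]))
    return k.mean(axis=1) / (2.0 * np.pi * i0(kappa))

# Toy usage: angles (e.g. a dihedral angle) concentrated around pi/2.
rng = np.random.default_rng(1)
angles = rng.vonmises(np.pi / 2, 4.0, size=500)
grid = np.linspace(-np.pi, np.pi, 256)
density = vonmises_kde(grid, angles, kappa=8.0)
```

The estimate integrates to one over the circle and respects the periodicity that a Euclidean kernel estimate applied to raw angles would violate.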