The best understanding of what one can see comes from theories of what one can't see. This thought has been expressed in a number of ways by different scientists, and evidence for it abounds, from gravity to economic equilibrium. For learning theory, we see its expression in the focus on the regression function defined by an unknown …
Preamble. I first met René at the well-known 1956 meeting on topology in Mexico City. He then came to the University of Chicago, where I was starting my job as an instructor in the fall of 1956. He, Suzanne, Clara and I became good friends and saw much of each other for many decades, especially at IHES in Paris. Thom's encouragement and support were important …
We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis is given for the convergence of the gradient estimated by the algorithm to the true gradient. The utility of the algorithm for variable selection, as well as for determining variable covariance, is illustrated on simulated data as …
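The abstract does not spell out the estimator itself; as a rough illustration of learning gradients from samples, the sketch below uses kernel-weighted local linear regression as a simplified stand-in for the paper's RKHS-regularized estimator. All function names, the Gaussian weighting, and the parameter choices are our assumptions.

```python
import numpy as np

def estimate_gradients(X, y, bandwidth=0.3, ridge=1e-6):
    """Estimate the gradient of the regression function at each sample
    by kernel-weighted local linear regression (a simplified stand-in
    for an RKHS-regularized gradient learner)."""
    n, d = X.shape
    grads = np.zeros((n, d))
    for i in range(n):
        diff = X - X[i]                          # displacements x_j - x_i
        w = np.exp(-np.sum(diff**2, axis=1) / (2 * bandwidth**2))
        # Weighted least squares for the local model y_j ~ y_i + g . (x_j - x_i)
        A = diff * w[:, None]
        G = diff.T @ A + ridge * np.eye(d)       # regularized normal equations
        grads[i] = np.linalg.solve(G, A.T @ (y - y[i]))
    return grads

# Variable selection: rank coordinates by average squared partial derivative;
# the gradient outer-product matrix plays the role of a variable covariance.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)
G = estimate_gradients(X, y)
relevance = (G**2).mean(axis=0)        # large entries flag active variables
gradient_cov = (G.T @ G) / len(G)      # covariation between variables
print(np.round(relevance, 3))
```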
This paper considers the regularized learning algorithm associated with the least-squares loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented which yields satisfactory learning rates. The rates depend on the approximation property and the …
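For concreteness, the regularized least-squares algorithm in question is, in its standard form, kernel ridge regression: minimize the empirical squared error plus λ‖f‖²_K over the RKHS. A minimal sketch follows, with the Gaussian kernel and an untuned λ as our assumptions (the paper's analysis concerns how λ should depend on the sample):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_rls(X, y, lam=1e-2, sigma=0.5):
    """Regularized least squares in the RKHS of a Gaussian kernel.
    By the representer theorem, f_z = sum_i c_i K(., x_i) with the
    coefficients solving (K + lam * m * I) c = y."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda T: gaussian_kernel(T, X, sigma) @ c

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 1))
y = np.cos(4 * X[:, 0]) + 0.1 * rng.standard_normal(100)
f = fit_rls(X, y)
T = np.linspace(0, 1, 5)[:, None]
print(np.round(f(T), 3))     # predictions of the regularized solution
```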
The covering number of a ball of a reproducing kernel Hilbert space as a subset of the continuous function space plays an important role in Learning Theory. We give estimates for this covering number by means of the regularity of the Mercer kernel K. For convolution-type kernels K(x, t) = k(x − t) on [0, 1]^n, we provide estimates depending on the decay of …
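The abstract states estimates rather than a computation; as a loose empirical illustration of the quantity involved, the sketch below draws random functions from a kernel ball on [0, 1] and greedily builds an η-net in the sup norm, giving a crude lower estimate of the covering number. The kernel, radius, grid, and sample count are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 200)          # discretize [0, 1] for the sup norm
k = lambda u: np.exp(-(u**2) / 0.02)   # a convolution kernel k(x - t)

def random_ball_function(R=1.0, n_centers=8):
    """Sample f = sum_i c_i k(. - x_i) and rescale so that ||f||_K = R,
    using ||f||_K^2 = c^T G c with G the kernel Gram matrix."""
    x = rng.uniform(0, 1, n_centers)
    c = rng.standard_normal(n_centers)
    G = k(x[:, None] - x[None, :])
    c *= R / np.sqrt(c @ G @ c)
    return k(grid[:, None] - x[None, :]) @ c

def empirical_covering(eta=0.2, n_samples=2000):
    """Greedy eta-separated set in C([0,1]): a crude lower estimate of
    the covering number of the kernel ball at scale eta."""
    net = []
    for _ in range(n_samples):
        f = random_ball_function()
        if all(np.max(np.abs(f - g)) > eta for g in net):
            net.append(f)
    return len(net)

print(empirical_covering())
```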
Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for large-scale data. The linear programming SVM classifier is especially efficient for very large sample sizes. But little is known about its convergence, compared with the well …
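As a sketch of the formulation being analyzed: the 1-norm soft margin SVM can be written as a linear program over the kernel expansion coefficients. The variable names, Gaussian kernel, and use of scipy below are our assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

def lp_svm(X, y, C=1.0, sigma=0.5):
    """1-norm soft margin SVM as a linear program:
        min  sum_i a_i + C * sum_i xi_i
        s.t. y_i (sum_j a_j y_j K(x_i, x_j) + b) >= 1 - xi_i,  a, xi >= 0.
    Variables: [a (m), b_plus, b_minus, xi (m)] with b = b_plus - b_minus."""
    m = len(X)
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    c = np.concatenate([np.ones(m), [0.0, 0.0], C * np.ones(m)])
    # Margin constraints rewritten in the form A_ub @ z <= -1:
    A = np.hstack([-(y[:, None] * K * y[None, :]),   # -y_i K_ij y_j a_j
                   -y[:, None], y[:, None],          # -y_i b
                   -np.eye(m)])                      # -xi_i
    res = linprog(c, A_ub=A, b_ub=-np.ones(m), bounds=(0, None))
    a, b = res.x[:m], res.x[m] - res.x[m + 1]
    def predict(T):
        KT = np.exp(-((T[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                    / (2 * sigma**2))
        return np.sign(KT @ (a * y) + b)
    return predict

rng = np.random.default_rng(3)
X = rng.standard_normal((60, 2))
y = np.sign(X[:, 0] + X[:, 1])
clf = lp_svm(X, y)
print((clf(X) == y).mean())   # training accuracy of the LP solution
```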
The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. The analysis consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known about the regularization error and the corresponding …
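In the standard notation of this literature (our rendering, not necessarily the paper's exact statement), the two-part decomposition reads as follows, where f_z is the output of the algorithm, f_ρ the Bayes-optimal target, and f_λ the noise-free regularized minimizer:

```latex
% Standard error decomposition for a regularized classifier f_z
% (our notation; the paper's statement may differ in detail):
\[
  \mathcal{E}(f_z) - \mathcal{E}(f_\rho)
  \;=\;
  \underbrace{\mathcal{E}(f_z) - \mathcal{E}(f_\lambda)}_{\text{sample error}}
  \;+\;
  \underbrace{\mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho)}_{\text{regularization error}},
  \qquad
  f_\lambda = \arg\min_{f \in \mathcal{H}_K}
  \Bigl\{ \mathcal{E}(f) + \lambda \|f\|_K^2 \Bigr\}.
\]
```

The sample error measures the gap between the empirical minimizer and f_λ and is handled by concentration techniques; the regularization error measures how well the hypothesis space approximates f_ρ as λ varies, which is the part the abstract flags as less understood.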