Ding-Xuan Zhou

A main theme of this report is the relationship of approximation to learning and the primary role of sampling (inductive inference). We try to emphasize relations of the theory of learning to the mainstream of mathematics. In particular, there are large roles for probability theory, for algorithms such as least squares, and for tools and ideas from …
This report on learning theory is written in the spirit of: “The best understanding of what one can see comes from theories of what one can’t see.” This thought has been expressed in a number of ways by different scientists, and is supported everywhere. Obvious choices vary from gravity to economic equilibrium. For learning theory we see its expression in the …
Let $M \in \mathbb{Z}^{s\times s}$ be a dilation matrix and $D \subset \mathbb{Z}^s$ be a complete set of representatives of distinct cosets of $\mathbb{Z}^s / M\mathbb{Z}^s$. The self-similar tiling associated with $M$ and $D$ is the subset of $\mathbb{R}^s$ given by $T(M,D) = \{\sum_{j=1}^{\infty} M^{-j}\alpha_j : \alpha_j \in D\}$. The purpose of this paper is to characterize self-similar lattice tilings, i.e., tilings $T(M,D)$ which have Lebesgue measure one. In …
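A concrete one-dimensional instance (an illustration added here, not drawn from the paper) shows what such a lattice tiling looks like:

```latex
% s = 1, M = (2), D = {0, 1}: every point of [0,1] has a binary expansion, so
T(M,D) \;=\; \Bigl\{ \sum_{j=1}^{\infty} 2^{-j}\alpha_j \;:\; \alpha_j \in \{0,1\} \Bigr\} \;=\; [0,1],
% which has Lebesgue measure one and tiles \mathbb{R} by its integer translates.
```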
We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis is given for the convergence of the gradient estimated by the algorithm to the true gradient. The utility of the algorithm for the problem of variable selection as well as determining variable covariance is illustrated on simulated data as …
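A minimal sketch of the general idea, not the paper's algorithm: estimate gradients by kernel-weighted local linear regression and rank variables by the average magnitude of the estimated partial derivatives. The function names, the bandwidth, and the Gaussian weighting below are illustrative assumptions.

```python
import numpy as np

def estimate_gradients(X, y, bandwidth=0.5):
    """Gradient estimates at each sample via kernel-weighted local linear
    regression (illustrative sketch, not the algorithm analyzed in the paper)."""
    m, d = X.shape
    grads = np.zeros((m, d))
    for i in range(m):
        diff = X - X[i]                                  # displacements from x_i, shape (m, d)
        w = np.exp(-np.sum(diff**2, axis=1) / (2 * bandwidth**2))
        A = np.hstack([np.ones((m, 1)), diff])           # local model: y ~ a + g . (x - x_i)
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(sw * A, sw.ravel() * y, rcond=None)
        grads[i] = coef[1:]                              # linear part = gradient estimate at x_i
    return grads

# Variable selection: keep coordinates with large average |partial derivative|.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)  # only coordinates 0 and 1 matter
print(np.mean(np.abs(estimate_gradients(X, y)), axis=0))
```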
I first met René at the well-known 1956 meeting on topology in Mexico City. He then came to the University of Chicago, where I was starting my job as instructor for the fall of 1956. He, Suzanne, Clara and I became good friends and saw much of each other for many decades, especially at IHES in Paris. Thom’s encouragement and support were important for me, …
Learning theory studies the learning of objects from random samples. The main question is: how many samples do we need to ensure an error bound with a certain confidence? To answer this question, the covering numbers or entropy numbers play an essential role, as shown by Vapnik, Poggio, Cucker-Smale, and many others. For kernel machine learning such as the Support …
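As a rough indication of how covering numbers enter (a standard-form bound with an unspecified constant, not a statement quoted from this work): for a uniformly bounded hypothesis class $\mathcal{F}$ and $m$ i.i.d. samples $\mathbf z$, one has probability inequalities of the shape

```latex
\mathrm{Prob}\Bigl\{ \sup_{f \in \mathcal{F}} \bigl| \mathcal{E}(f) - \mathcal{E}_{\mathbf z}(f) \bigr| \ge \varepsilon \Bigr\}
\;\le\; 2\, \mathcal{N}\!\Bigl(\mathcal{F}, \frac{\varepsilon}{8}\Bigr)
\exp\Bigl( -\frac{m \varepsilon^{2}}{C} \Bigr),
```

where $\mathcal{N}(\mathcal{F},\eta)$ is the covering number of $\mathcal{F}$ at radius $\eta$ and $C$ depends only on the uniform bound; solving such an inequality for $m$ answers the sample-size question above.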
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The goal is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and the …
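The least-square regularization scheme in a reproducing kernel Hilbert space has an explicit finite-dimensional solution by the representer theorem. The sketch below is a generic illustration of that scheme; the Gaussian kernel and the choice of the regularization parameter are assumptions of the example, not the settings analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-d2 / (2.0 * sigma**2))

def rls_fit(X, y, lam=1e-2, sigma=1.0):
    """Minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS.
    Representer theorem: f(x) = sum_i alpha_i K(x, x_i), alpha = (K + lam*m*I)^{-1} y."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def rls_predict(alpha, X_train, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy regression run.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=100)
alpha = rls_fit(X, y)
print(rls_predict(alpha, X, X[:5]))
```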
Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for large-data settings. The linear programming SVM classifier is especially efficient for very large sample sizes. But little is known about its convergence, compared with the well …
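One common way to pose a linear programming soft margin SVM in kernel form (a schematic variant; the precise formulation studied here may differ) is the 1-norm regularized problem

```latex
\min_{\alpha,\, b,\, \xi}\;\; \sum_{i=1}^{m} \alpha_i + C \sum_{i=1}^{m} \xi_i
\quad \text{s.t.} \quad
y_i \Bigl( \sum_{j=1}^{m} \alpha_j K(x_i, x_j) + b \Bigr) \ge 1 - \xi_i,
\qquad \alpha_i \ge 0, \;\; \xi_i \ge 0,
```

in which both the objective and the constraints are linear in $(\alpha, b, \xi)$, so off-the-shelf LP solvers handle very large sample sizes.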
The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. The analysis consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known about the regularization error and the corresponding …
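The two-part split mentioned here is usually realized by comparing the empirical minimizer $f_{\mathbf z}$ with the minimizer $f_\lambda$ of the regularized expected risk; a schematic version of the decomposition (notation mine) reads

```latex
\mathcal{E}(f_{\mathbf z}) - \mathcal{E}(f_\rho)
\;\le\;
\underbrace{\bigl\{\mathcal{E}(f_{\mathbf z}) - \mathcal{E}_{\mathbf z}(f_{\mathbf z})\bigr\}
+ \bigl\{\mathcal{E}_{\mathbf z}(f_\lambda) - \mathcal{E}(f_\lambda)\bigr\}}_{\text{sample error}}
\;+\;
\underbrace{\mathcal{E}(f_\lambda) + \lambda\,\Omega(f_\lambda) - \mathcal{E}(f_\rho)}_{\text{regularization error}},
```

where $\Omega$ is the regularizer (e.g. the q-norm penalty) and $f_\rho$ is the target function; the first brace is controlled by concentration estimates and the second by approximation-theoretic estimates.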
A family of classification algorithms generated from Tikhonov regularization schemes is considered. They involve multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers. The error analysis consists of two parts: …
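A Tikhonov regularization scheme of the type described, written here for a single kernel space $\mathcal{H}_K$ and a convex loss $\phi$ (a schematic single-kernel form; the paper's multi-kernel setting lets the kernel itself be chosen from a family), is

```latex
f_{\mathbf z} \;=\; \arg\min_{f \in \mathcal{H}_K}\;
\frac{1}{m} \sum_{i=1}^{m} \phi\bigl( y_i f(x_i) \bigr) + \lambda \| f \|_K^{2},
\qquad \phi \text{ convex, e.g. the hinge loss } \phi(t) = (1 - t)_+,
```

and the induced classifier is $\operatorname{sgn}(f_{\mathbf z})$; its excess misclassification error is what the two-part analysis bounds.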