
The covering number of a ball of a reproducing kernel Hilbert space, as a subset of the space of continuous functions, plays an important role in learning theory. We give estimates for this covering number by means of the regularity of the Mercer kernel K. For convolution-type kernels K(x, t) = k(x − t) on [0, 1]^n, we provide estimates depending on the decay of…

Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for large-data settings. The linear programming SVM classifier is especially efficient for very large sample sizes, but little is known about its convergence compared with the well…
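The linear programming SVM mentioned above is commonly obtained by replacing the 2-norm penalty with a 1-norm penalty, which turns the soft-margin problem into a linear program. The sketch below shows this standard 1-norm formulation (not necessarily the exact algorithm analyzed in the paper); the toy data and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lp_svm(X, y, C=1.0):
    """1-norm soft-margin SVM as a linear program:
        min  sum(u) + sum(v) + C * sum(xi)
        s.t. y_i ((u - v) . x_i + b) >= 1 - xi_i,   u, v, xi >= 0,
    where the weight vector is split as w = u - v to keep all
    LP variables nonnegative (b is free)."""
    n, d = X.shape
    # variable layout: [u (d), v (d), b (1), xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # margin constraints rewritten in A_ub @ z <= b_ub form
    A_ub = np.hstack([-y[:, None] * X, y[:, None] * X,
                      -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:d] - z[d:2 * d], z[2 * d]  # (w, b)

# toy linearly separable data (illustrative)
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = lp_svm(X, y)
pred = np.sign(X @ w + b)
```

Because the objective and constraints are all linear, off-the-shelf LP solvers scale to very large sample sizes, which is the efficiency the abstract refers to.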

We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis is given for the convergence of the gradient estimated by the algorithm to the true gradient. The utility of the algorithm for variable selection, as well as for determining variable covariance, is illustrated on simulated data as…
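To make the idea of learning gradients from samples concrete, the sketch below estimates the gradient of a regression function at a point by locally weighted linear least squares. This is a simple stand-in for the paper's kernel-based algorithm, under assumed names and data; only the general technique, not the paper's method, is shown.

```python
import numpy as np

def local_gradient(X, Y, x0, bandwidth=0.5):
    """Estimate the gradient of the regression function at x0:
    fit a weighted linear model a + s . (x - x0) to the samples,
    with Gaussian weights centered at x0; the slope s is the
    gradient estimate."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.hstack([np.ones((len(X), 1)), X - x0])  # intercept + slopes
    sw = np.sqrt(w)
    # weighted least squares via a square-root reweighting
    coef = np.linalg.lstsq(sw[:, None] * A, sw * Y, rcond=None)[0]
    return coef[1:]  # slopes approximate the gradient at x0

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
Y = 3.0 * X[:, 0] - 2.0 * X[:, 1]   # true gradient is (3, -2) everywhere
g = local_gradient(X, Y, np.zeros(2))
```

Coordinates of the estimated gradient with consistently small magnitude indicate variables the function barely depends on, which is how gradient estimates support variable selection.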

The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. The analysis consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known about the regularization error and the corresponding…

This paper considers the regularized learning algorithm associated with the least-squares loss and reproducing kernel Hilbert spaces. The goal is an error analysis for the regression problem in learning theory. A novel regularization approach is presented that yields satisfactory learning rates. The rates depend on the approximation property and the…
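The regularized least-squares algorithm in an RKHS has a closed form via the representer theorem, which the following sketch implements (kernel ridge regression with an assumed Gaussian kernel; the regularization parameter and data are illustrative, not taken from the paper).

```python
import numpy as np

def krr_fit(X, y, lam=1e-4, gamma=10.0):
    """Regularized least squares in an RKHS:
        minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem f = sum_i alpha_i K(x_i, .), and the
    minimizer satisfies alpha = (K + lam * n * I)^{-1} y."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                      # Gaussian Gram matrix
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha, K

X = np.linspace(0, 3, 60)[:, None]
y = np.sin(2 * X[:, 0])
alpha, K = krr_fit(X, y)
fitted = K @ alpha   # in-sample predictions f(x_i)
```

The learning rates studied in the paper govern how the error of such estimators decays as the sample size grows and lam is tuned accordingly.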

A family of classification algorithms generated from Tikhonov regularization schemes is considered. These algorithms involve multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers. The error analysis consists of two parts:…

Let M ∈ ℤ^{s×s} be a dilation matrix and D ⊂ ℤ^s a complete set of representatives of the distinct cosets of ℤ^s / Mℤ^s. The self-similar tiling associated with M and D is the subset of ℝ^s given by T(M, D) = { ∑_{j=1}^∞ M^{−j} α_j : α_j ∈ D }. The purpose of this paper is to characterize self-similar lattice tilings, i.e., tilings T(M, D) which have…
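The set T(M, D) can be approximated numerically by truncating the series defining its points at a finite depth. The sketch below does this for the classical "twin dragon" choice of M and D (an illustrative example, not one taken from the paper).

```python
import numpy as np
from itertools import product

def tiling_points(M, D, depth):
    """Finite approximation of T(M, D): truncate the series
    sum_{j=1}^inf M^{-j} a_j at j = depth, enumerating all digit
    sequences (a_1, ..., a_depth) with each a_j drawn from D."""
    Minv = np.linalg.inv(M)
    pts = []
    for digits in product(D, repeat=depth):
        p = np.zeros(len(M))
        P = np.eye(len(M))
        for a in digits:
            P = P @ Minv                 # P = M^{-j} at step j
            p = p + P @ np.array(a, dtype=float)
        pts.append(p)
    return np.array(pts)

# twin dragon: |det M| = 2, so D holds one representative per coset
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
D = [(0, 0), (1, 0)]
pts = tiling_points(M, D, depth=8)   # |D|^depth = 256 points
```

Because M is expanding (all eigenvalues exceed 1 in modulus), the powers M^{−j} contract, so the truncated sums stay in a bounded region and converge to points of the tile as the depth grows.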