In this contribution we describe a novel classification approach for on-line handwriting recognition. The technique combines dynamic time warping (DTW) and support vector machines (SVMs) by establishing a new SVM kernel. We call this kernel the Gaussian DTW (GDTW) kernel. This kernel approach has a main advantage over common HMM techniques: it does not assume a …
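As a concrete illustration of the construction described above, here is a minimal sketch (not the authors' implementation): the squared Euclidean distance inside a Gaussian RBF kernel is replaced by a DTW distance between variable-length sequences. The local cost and the bandwidth gamma are illustrative assumptions.

import numpy as np

def dtw_distance(a, b):
    # Dynamic-programming DTW between sequences of feature vectors
    # a (n x d) and b (m x d), with squared Euclidean local cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((a[i - 1] - b[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gdtw_kernel(a, b, gamma=0.1):
    # Gaussian DTW kernel: an RBF kernel whose distance is DTW.
    return np.exp(-gamma * dtw_distance(a, b))

Note that a kernel built this way is not guaranteed to be positive definite, which connects directly to the indefinite-kernel question raised in the next entry.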
Kernel methods are becoming increasingly popular for various kinds of machine learning tasks, the most famous being the support vector machine (SVM) for classification. The SVM is well understood when using conditionally positive definite (cpd) kernel functions. However, in practice, non-cpd kernels arise and demand application in SVMs. The procedure of …
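The issue can be observed numerically. The sketch below uses the sigmoid (tanh) kernel, a standard example of a kernel that is not cpd for general parameter choices, and checks the spectrum of its Gram matrix; the data and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))

# Sigmoid kernel Gram matrix; for many parameter settings this
# kernel is indefinite, i.e. not (conditionally) positive definite.
K = np.tanh(0.5 * X @ X.T - 1.0)

# Negative eigenvalues mean K is not the Gram matrix of any inner
# product, so the standard SVM duality arguments no longer apply.
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())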
Kernel methods are a class of well-established and successful algorithms for pattern analysis, thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed so far based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel …
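The kernel trick underlying these extensions can be stated in a few lines: any quantity that depends on the data only through inner products can be evaluated in feature space via kernel calls alone. A minimal sketch, with an RBF kernel as an illustrative choice:

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_distance_sq(x, y, k=rbf):
    # Kernel trick: ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y),
    # so distances in the (possibly infinite-dimensional) feature space
    # need no explicit feature map.
    return k(x, x) - 2.0 * k(x, y) + k(y, y)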
The model order reduction methodology of reduced basis (RB) techniques offers efficient treatment of parametrized partial differential equations (P²DEs) by providing both approximate solution procedures and efficient error estimates. RB methods have so far mainly been applied to finite element schemes for elliptic and parabolic problems. In the current …
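The offline/online splitting behind RB methods can be sketched on a toy affinely parametrized linear system standing in for a discretized P²DE; all matrices, the training parameters, and the residual-based error indicator below are illustrative assumptions, not the schemes analyzed in the paper.

import numpy as np

# Toy stand-in: A(mu) u = f with A(mu) = A0 + mu * A1.
n = 200
A0 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
      + np.diag(-np.ones(n - 1), -1))
A1 = np.eye(n)
f = np.ones(n)

# Offline: solve the full system for a few training parameters and
# orthonormalize the snapshots into a reduced basis V.
snapshots = np.column_stack(
    [np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 1.0, 10.0)])
V, _ = np.linalg.qr(snapshots)

# Online: Galerkin projection yields a tiny system for any new mu.
mu = 3.0
u_rb = V @ np.linalg.solve(V.T @ (A0 + mu * A1) @ V, V.T @ f)

# Cheap residual-based error indicator for the reduced solution.
print("residual norm:", np.linalg.norm(f - (A0 + mu * A1) @ u_rb))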
In many learning problems, prior knowledge about pattern variations can be formalized and beneficially incorporated into the analysis system. The corresponding notion of invariance is commonly used in conceptually different ways. We propose a more distinguishing treatment, in particular in the active field of kernel methods for machine learning and pattern …
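One common construction in this spirit, given here only as a hedged sketch, builds an invariant kernel by averaging a base kernel over a finite set of pattern transformations; the sign-flip transformation set below is purely illustrative.

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def invariant_kernel(x, y, transforms, k=rbf):
    # Average the base kernel over all transformation pairs; if the
    # transformations form a group, the result is invariant:
    # k_inv(g(x), y) == k_inv(x, y) for every g in the set.
    return np.mean([k(g(x), h(y)) for g in transforms for h in transforms])

transforms = [lambda v: v, lambda v: -v]   # invariance to sign flips
x, y = np.array([1.0, -2.0]), np.array([-1.0, 2.0])
print(invariant_kernel(x, y, transforms))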
In this work an efficient approach to a-posteriori error estimation for POD-DEIM reduced nonlinear dynamical systems is introduced. The considered nonlinear systems may also include time- and parameter-affine linear terms as well as parametrically dependent inputs and outputs. The reduction process involves a Galerkin projection of the full system and …
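The DEIM ingredient of such a reduction admits a compact sketch: given a POD basis U of snapshots of the nonlinear term, a greedy loop picks the interpolation indices at which the nonlinearity is evaluated online. This is standard DEIM index selection, not the error estimators of the paper.

import numpy as np

def deim_indices(U):
    # Greedy DEIM point selection for a basis U (n x m): each step
    # adds the row where the current interpolation residual is largest.
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        P = U[idx, :j]                     # selected rows of used basis
        c = np.linalg.solve(P, U[idx, j])  # interpolation coefficients
        r = U[:, j] - U[:, :j] @ c         # residual of the next column
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)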
Within the framework of kernel methods, linear data analysis methods have almost completely been extended to their nonlinear counterparts. In this paper, we focus on nonlinear kernel techniques based on the Mahalanobis distance. Two approaches are distinguished here: the first assumes an invertible covariance operator, while the second uses a regularized …
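For the regularized variant, a minimal sketch under the assumption of a Gaussian kernel: the squared Mahalanobis distance of phi(x) to the feature-space mean under C + lam*I reduces, via the Woodbury identity, to pure Gram-matrix algebra; gamma and lam are illustrative parameter choices.

import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    sq = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def kernel_mahalanobis_sq(X, x, gamma=1.0, lam=1e-2):
    # Squared distance psi(x)^T (C + lam I)^{-1} psi(x), where psi is
    # the centered feature map and C the empirical feature covariance,
    # computed from kernel evaluations only.
    n = X.shape[0]
    K = rbf_gram(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = H @ K @ H                             # centered Gram matrix
    kx = rbf_gram(X, x[None, :], gamma).ravel()
    kxc = H @ (kx - K.mean(axis=1))            # centered kernel vector
    kxx = 1.0 - 2.0 * kx.mean() + K.mean()     # centered k(x, x)
    beta = np.linalg.solve(Kc + n * lam * np.eye(n), kxc)
    return (kxx - kxc @ beta) / lam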