While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear …
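As an illustration of the basic idea, the sketch below forms a conic (nonnegative-weighted) combination of several kernel matrices and hands the result to a standard SVM. The weights are fixed by hand here; in the cited work they are learned jointly with the classifier, which this sketch does not attempt.

```python
# Hypothetical sketch: a fixed conic combination of kernel matrices,
# in the spirit of multiple kernel learning. The weights beta are
# assumed values for illustration, not learned as in Lanckriet et al.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # toy nonlinear labels

# Base kernel matrices on the training data.
kernels = [linear_kernel(X), rbf_kernel(X, gamma=0.5), rbf_kernel(X, gamma=2.0)]
beta = np.array([0.2, 0.5, 0.3])  # conic weights: nonnegative, fixed

K = sum(b * Km for b, Km in zip(beta, kernels))  # combined kernel matrix

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))  # training accuracy with the combined kernel
```

Because the weights are nonnegative, the combined matrix K remains a valid (positive semidefinite) kernel, which is what makes the conic parameterization attractive.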
Main idea: search for anomalies in the data without training on clean data. Advantages: no need for training, no need for an extensive amount of clean data. Reproduce the state-of-the-art results on the KDD Cup (DARPA '98) dataset (with the main focus on the one-class SVM). Investigate the methods from the machine learning point of view. Investigate the …
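A minimal sketch of the one-class SVM mentioned above, using scikit-learn on synthetic two-dimensional data; the toy points stand in for network-connection features like those in the KDD Cup set, and the `nu` value is an assumed choice.

```python
# Minimal anomaly-detection sketch with a one-class SVM (scikit-learn).
# Synthetic data: a Gaussian bulk of "normal" points plus a small
# far-away cluster standing in for attacks. All values are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # bulk of traffic
attacks = rng.normal(loc=6.0, scale=0.5, size=(10, 2))   # distant outliers
X = np.vstack([normal, attacks])

# nu upper-bounds the fraction of training points treated as outliers.
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X)
pred = oc.predict(X)  # +1 = inlier, -1 = anomaly
print(int((pred == -1).sum()))  # number of points flagged as anomalous
```

Note that the model is fit on the mixed data, matching the abstract's point: no separate clean training set is required.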
Brain-computer interfaces (BCIs) involve two coupled adapting systems--the human subject and the computer. In developing our BCI, our goal was to minimize the need for subject training and to impose the major learning load on the computer. To this end, we use behavioral paradigms that exploit single-trial EEG potentials preceding voluntary finger movements. …
We investigate synchronization between cardiovascular and respiratory systems in healthy humans under free-running conditions. For this aim we analyze nonstationary irregular bivariate data, namely, electrocardiograms and measurements of respiratory flow. We briefly discuss a statistical approach to synchronization in noisy and chaotic systems and …
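One standard way to quantify such synchronization is phase analysis: extract instantaneous phases with the Hilbert transform and measure n:m phase locking through the mean resultant length of the generalized phase difference. The sketch below does this on synthetic signals standing in for the cardiac and respiratory series; the 4:1 ratio is an assumed example.

```python
# Hypothetical phase-locking sketch: instantaneous phases via the
# Hilbert transform, then an n:m synchronization index. Synthetic
# sinusoids stand in for ECG-derived and respiratory signals.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 60, 6000)
resp = np.sin(2 * np.pi * 0.25 * t)                    # ~15 breaths/min
heart = np.sin(2 * np.pi * 1.0 * t + 0.1 * np.sin(t))  # ~60 beats/min

phi_r = np.unwrap(np.angle(hilbert(resp)))   # respiratory phase
phi_h = np.unwrap(np.angle(hilbert(heart)))  # cardiac phase

# 4:1 locking in this toy example: four heartbeats per breath.
dphi = phi_h - 4 * phi_r
index = abs(np.mean(np.exp(1j * dphi)))  # 1 = perfect locking, 0 = none
print(round(index, 3))
```

For real, nonstationary recordings the index would be computed in sliding windows and compared against surrogate data, which this toy example omits.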
Application and development of specialized machine learning techniques is gaining increasing attention in the intrusion detection community. A variety of learning techniques proposed for different intrusion detection problems can be roughly classified into two broad categories: supervised (classification) and unsupervised (anomaly detection and clustering). …
We introduce a new algorithm that builds an optimal dyadic decision tree (ODT). The method combines guaranteed performance in the learning-theoretic sense and optimal search from the algorithmic point of view. Furthermore, it inherits the explanatory power of tree approaches, while improving performance over classical approaches such as CART/C4.5, as shown on …
This paper introduces a new method using dyadic decision trees for estimating a classification or a regression function in a multi-class classification problem. The estimator is based on model selection by penalized empirical loss minimization. Our work consists of two complementary parts: first, a theoretical analysis of the method leads to deriving …
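The selection principle itself is easy to sketch: fit candidate models of increasing complexity and pick the one minimizing empirical loss plus a complexity penalty. The example below uses ordinary CART trees from scikit-learn rather than the paper's dyadic trees, and an assumed per-leaf penalty weight, purely to illustrate the criterion.

```python
# Illustrative model selection by penalized empirical loss minimization,
# using CART trees as a stand-in for the paper's dyadic trees. The
# penalty weight alpha is an assumed value, not a calibrated one.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 2))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)  # XOR-style classes

alpha = 0.01  # penalty per leaf (hypothetical)
best = None
for depth in range(1, 8):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    loss = 1.0 - tree.score(X, y)          # empirical 0-1 loss
    penalty = alpha * tree.get_n_leaves()  # complexity penalty
    crit = loss + penalty
    if best is None or crit < best[0]:
        best = (crit, depth)
print(best[1])  # depth selected by the penalized criterion
```

The penalty discourages large trees that merely memorize the sample, which is the trade-off the theoretical analysis makes precise.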