Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present here a new…
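The stepwise, distance-based merging described above can be illustrated with a toy single-linkage agglomeration (an illustrative sketch only; the graph-based method the abstract introduces is not reproduced here):

```python
# Toy single-linkage agglomerative clustering: repeatedly merge the two
# clusters whose closest pair of points is nearest (illustrative only).
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, n_clusters):
    # Start with each point in its own cluster.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: minimum inter-point distance between clusters.
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge the closest pair of clusters
        del clusters[j]
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
print(single_linkage(pts, 2))  # groups the two nearby pairs together
```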
Objective: To document clinical and magnetic resonance imaging (MRI) characteristics of a large cohort of primary and transitional progressive multiple sclerosis (PP and TP MS) patients over one year. Introduction: Patients with PP or TP MS have been shown to have low brain T2 and T1 lesion loads and slow rates of new lesion formation with minimal gadolinium…
Recent years have witnessed increasing attention to entropy-based criteria in adaptive systems. Several principles have been proposed based on the maximization or minimization of entropic cost functions. We propose a new type of neural network classifier with multilayer perceptron (MLP) architecture, but where the usual mean square error minimization…
Transfer learning is a machine learning paradigm in which a target problem is solved by reusing, with minor modifications, what was learned on a different but related source problem. In this paper we propose a novel feature transference approach, particularly suited to the case where the source and target problems are drawn from different distributions. We use deep neural networks to…
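As a minimal illustration of the reuse idea (not the paper's deep feature-transference method), the sketch below freezes a linear feature map assumed to come from a source task and re-fits only a decision threshold on target data; all weights and data here are made up:

```python
# Frozen weights, assumed to have been learned on a source task.
source_w = [0.8, -0.5]

def feature(x):
    # Transferred feature map: reused as-is, never retrained.
    return sum(w * xi for w, xi in zip(source_w, x))

def fit_threshold(xs, ys):
    # Target-side adaptation: place a threshold midway between
    # the class means of the transferred feature.
    f0 = [feature(x) for x, y in zip(xs, ys) if y == 0]
    f1 = [feature(x) for x, y in zip(xs, ys) if y == 1]
    return (sum(f0) / len(f0) + sum(f1) / len(f1)) / 2

xs = [(1, 0), (2, 1), (-1, 1), (-2, 0)]   # toy target data
ys = [1, 1, 0, 0]
thr = fit_threshold(xs, ys)
preds = [int(feature(x) > thr) for x in xs]
print(preds)  # matches ys on this toy target set
```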
The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some common error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E_Exp, inspired by the Z-EDM algorithm that we have…
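For concreteness, here are two of the usual error functions such a review covers, mean square error and cross-entropy, evaluated on a toy prediction (the abstract's new error function itself is not reproduced here):

```python
import math

def mse(y, t):
    # Mean square error: average squared deviation between output and target.
    return sum((yi - ti) ** 2 for yi, ti in zip(y, t)) / len(y)

def cross_entropy(y, t):
    # Cross-entropy for probabilistic outputs yi in (0, 1) and binary targets.
    return -sum(ti * math.log(yi) + (1 - ti) * math.log(1 - yi)
                for yi, ti in zip(y, t)) / len(y)

y = [0.9, 0.2, 0.8]   # predicted outputs
t = [1.0, 0.0, 1.0]   # observed targets
print(round(mse(y, t), 4))
print(round(cross_entropy(y, t), 4))
```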
Entropy-based cost functions are attracting growing interest in unsupervised and supervised classification tasks. Better performance in terms of both error rate and speed of convergence has been reported. In this letter, we study the principle of error entropy minimization (EEM) from a theoretical point of view. We use Shannon's entropy and study…
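A rough sketch of the quantity EEM works with: a plug-in (histogram) estimate of the Shannon entropy of the error signal e = t - y, which EEM drives down so that errors concentrate around a single value. The bin width and data below are arbitrary illustrative choices:

```python
import math
from collections import Counter

def shannon_entropy(errors, bin_width=0.1):
    # Plug-in estimate: bin the errors, then apply the discrete
    # Shannon entropy formula to the bin frequencies.
    bins = Counter(round(e / bin_width) for e in errors)
    n = len(errors)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

concentrated = [0.01, 0.02, 0.01, 0.03, 0.02]   # errors clustered together
spread = [-0.9, -0.3, 0.1, 0.5, 0.9]            # errors widely dispersed
# Concentrated errors have lower entropy, the state EEM seeks.
print(shannon_entropy(concentrated) < shannon_entropy(spread))  # True
```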
The use of monolithic neural networks (such as a multilayer perceptron) has some drawbacks, e.g. slow learning, weight coupling, and the black-box effect. These can be alleviated by the use of a modular neural network (MNN). The creation of an MNN has three steps: task decomposition, module creation, and decision integration. In this paper we propose the use of an…
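Of the three steps, decision integration is the simplest to sketch: below, the modules' individual class decisions are combined by majority vote (one possible integration scheme, not necessarily the one the paper proposes):

```python
from collections import Counter

def integrate(decisions):
    # Decision integration by majority vote: each module contributes
    # one predicted class label; the most frequent label wins.
    return Counter(decisions).most_common(1)[0][0]

# Three modules vote on the same input.
print(integrate(["cat", "dog", "cat"]))  # prints "cat"
```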
Transfer learning is a process that allows a learning machine trained on one problem to be reused to solve a new problem. Transfer learning studies on shallow architectures show low performance, as they are generally based on hand-crafted features obtained from experts. It is therefore interesting to study transference on deep architectures, known to directly…
Deep neural networks comprise several hidden layers of units, which can be pre-trained one at a time via an unsupervised greedy approach. The whole network can then be trained (fine-tuned) in a supervised fashion. One possible pre-training strategy is to regard each hidden layer in the network as the input layer of an auto-encoder. Since auto-encoders aim to…
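The greedy layer-wise idea can be sketched with linear auto-encoders in NumPy; the training loop, layer sizes, and learning rate below are illustrative choices, not the abstract's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, epochs=100, lr=0.05):
    # One linear auto-encoder layer trained by gradient descent on the
    # reconstruction error (a simplified stand-in for the unsupervised
    # pre-training step described in the abstract).
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden))   # encoder weights
    V = rng.normal(0, 0.1, (hidden, d))   # decoder weights
    losses = []
    for _ in range(epochs):
        H = X @ W                  # encode
        E = H @ V - X              # reconstruction error
        losses.append(np.mean(E ** 2))
        gV = H.T @ E / n           # decoder gradient
        gW = X.T @ (E @ V.T) / n   # encoder gradient
        V -= lr * gV
        W -= lr * gW
    return W, losses

# Greedy layer-wise stacking: the second auto-encoder is trained on
# the codes produced by the first, i.e. each hidden layer acts as the
# input layer of the next auto-encoder.
X = rng.normal(size=(50, 8))
W1, l1 = train_autoencoder(X, 4)
W2, l2 = train_autoencoder(X @ W1, 2)
```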
This letter focuses on whether risk functionals derived from information-theoretic principles, such as Shannon's or Rényi's entropies, are able to cope with the data classification problem, both in the sense of attaining the risk-functional minimum and in the sense of implying the minimum probability of error allowed by the family of functions implemented by the…