Roberto D'Ambrosio

Interest is rising in the autoimmune contribution to neuropsychiatry. We evaluated autoantibodies against the dopamine transporter (DAT aAbs) in 61 children (46 with ADHD who met DSM-IV-TR criteria, 15 healthy controls). METHODS: ADHD patients were assigned, according to severity, either to a non-pharmacological therapy (NPT, N=32) or to a …
It is well known that mainstream boosting algorithms such as AdaBoost do not perform well at estimating class-conditional probabilities. In this paper, we analyze, in the light of this problem, a recent algorithm, UNN, which leverages nearest neighbors while minimizing a convex loss. Our contribution is threefold. First, we show that there exists a …
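
For background, a standard way to read class-conditional probabilities off a boosted margin is the logistic link of Friedman, Hastie and Tibshirani, p(y = 1 | x) ≈ 1 / (1 + e^(-2F(x))). The sketch below illustrates that link with scikit-learn's AdaBoostClassifier on synthetic data; it is a generic illustration of the calibration problem mentioned above, not the UNN algorithm studied in the paper.

```python
# Sketch: probability estimates from an AdaBoost margin via the logistic link
# p(y=1|x) ~ 1 / (1 + exp(-2 F(x))), where F(x) is the boosted score.
# Generic illustration only, not the UNN algorithm discussed in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

F = ada.decision_function(X)             # signed margin, positive favors class 1
p_pos = 1.0 / (1.0 + np.exp(-2.0 * F))   # logistic link applied to the margin
print(p_pos[:5])
print(ada.predict_proba(X)[:5, 1])       # compare with sklearn's own estimate
```
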
Research in automatic facial expression recognition has permitted the development of systems that discriminate between the six prototypical expressions, i.e. anger, disgust, fear, happiness, sadness and surprise, in frontal video sequences. Achieving a high recognition rate often implies high computational costs that are not compatible with real-time …
Universal Nearest Neighbours (UNN) is a recently proposed classifier that can also effectively estimate the posterior probability of each classification act. The algorithm is intrinsically binary and requires a decomposition method to cope with multiclass problems, reducing them to less complex binary subtasks. Then, a …
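
To make the decomposition step concrete, here is a minimal one-vs-all sketch: the multiclass problem is split into binary "class c vs rest" subtasks, each solved by a binary learner that outputs a posterior estimate, and the per-class posteriors are renormalized. Logistic regression is used only as a stand-in for a binary posterior estimator such as UNN, which is not assumed to be available as a packaged implementation.

```python
# Sketch: one-vs-all decomposition of a multiclass problem into binary subtasks,
# recombining the per-class posterior estimates. The binary learner is a
# stand-in (logistic regression), not the UNN classifier itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_vs_all_fit(X, y, make_binary_learner):
    classes = np.unique(y)
    models = {c: make_binary_learner().fit(X, (y == c).astype(int)) for c in classes}
    return classes, models

def one_vs_all_predict_proba(X, classes, models):
    # Stack each binary "class c vs rest" posterior, then renormalize.
    scores = np.column_stack([models[c].predict_proba(X)[:, 1] for c in classes])
    return scores / scores.sum(axis=1, keepdims=True)

# usage:
# classes, models = one_vs_all_fit(X_train, y_train, lambda: LogisticRegression(max_iter=1000))
# P = one_vs_all_predict_proba(X_test, classes, models)
```
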
Efficient Big Data classification requires low-cost learning methods. A standard approach uses the Stochastic Gradient Descent (SGD) algorithm to minimize the hinge loss in the primal space. Although the complexity of SGD is linear in the number of samples, the method suffers from slow convergence. In order to cope with …
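
As a reference point for the approach described above, the following is a minimal sketch of SGD on the L2-regularized hinge loss in the primal. The Pegasos-style step size eta_t = 1/(lambda*t) and the regularization constant are illustrative assumptions, not values taken from the paper.

```python
# Sketch: SGD on the L2-regularized hinge loss in the primal,
#   min_w  (lambda/2)||w||^2 + (1/n) * sum_i max(0, 1 - y_i w.x_i),  y_i in {-1, +1}.
# The step-size schedule eta_t = 1/(lambda*t) is an assumed (Pegasos-style) choice.
import numpy as np

def sgd_hinge(X, y, lam=0.01, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * X[i].dot(w)
            w *= (1.0 - eta * lam)        # gradient step on the regularizer
            if margin < 1.0:              # hinge active: subgradient is -y_i x_i
                w += eta * y[i] * X[i]
    return w

# usage: w = sgd_hinge(X_train, np.where(y_train == 1, 1, -1))
```
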
This paper introduces two feature selection methods to deal with heterogeneous data that include continuous and categorical variables. We propose to plug a dedicated kernel that handles both kinds of variables into a Recursive Feature Elimination procedure using either a non-linear SVM or Multiple Kernel Learning. These methods are shown to offer …
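
To illustrate one way a kernel can handle both kinds of variables, the sketch below combines an RBF kernel on the continuous columns with a simple overlap (matching) kernel on the categorical ones and feeds the result to an SVM with a precomputed kernel. The specific combination, the column-index arguments and the equal weighting are assumptions for illustration; they are not the paper's dedicated kernel or its elimination criterion.

```python
# Sketch: a kernel for mixed continuous/categorical data: RBF on the continuous
# part, overlap (matching) kernel on the categorical part, equally weighted.
# Illustrative assumption only, not the dedicated kernel proposed in the paper.
import numpy as np
from sklearn.svm import SVC

def mixed_kernel(A, B, cont_idx, cat_idx, gamma=1.0):
    Ac = A[:, cont_idx].astype(float)
    Bc = B[:, cont_idx].astype(float)
    sq = ((Ac[:, None, :] - Bc[None, :, :]) ** 2).sum(-1)
    k_cont = np.exp(-gamma * sq)                                          # RBF on continuous features
    k_cat = (A[:, cat_idx][:, None, :] == B[:, cat_idx][None, :, :]).mean(-1)  # fraction of matching categories
    return 0.5 * (k_cont + k_cat)                                         # assumed convex combination

# usage with a precomputed kernel:
# K_train = mixed_kernel(X_train, X_train, cont_idx, cat_idx)
# clf = SVC(kernel="precomputed").fit(K_train, y_train)
# acc = clf.score(mixed_kernel(X_test, X_train, cont_idx, cat_idx), y_test)
```
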
Class imbalance limits the performance of most learning algorithms, since they cannot cope with large differences between the number of samples in each class, resulting in low predictive accuracy on the minority classes. Several algorithms achieving more balanced performance in the binary case have been proposed, while little research exists in the case …
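
For context, the most common generic remedy is to re-weight each class inversely to its frequency, so that minority classes contribute more to the training loss. The sketch below shows this baseline with scikit-learn utilities; it is a standard counter-measure to imbalance, not the algorithm proposed in the paper.

```python
# Sketch: multiclass imbalance handled by inverse-frequency class weights,
# w_c = n / (k * n_c). Baseline illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

def balanced_fit(X, y):
    classes = np.unique(y)
    weights = compute_class_weight("balanced", classes=classes, y=y)
    clf = LogisticRegression(max_iter=1000,
                             class_weight=dict(zip(classes, weights)))
    return clf.fit(X, y)

# Equivalent shortcut: LogisticRegression(class_weight="balanced").fit(X, y)
```
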
Tailoring nearest neighbor algorithms to boosting is an important problem. Recent papers study an approach, UNN, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the UNN algorithm, at the possible expense of its convergence and performance. …
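
The following is a rough schematic of a leveraged k-nearest-neighbor rule in the spirit of UNN: each prototype j carries a leveraging coefficient alpha_j, and a query is classified by the sign of the weighted vote of its k nearest prototypes. The one-pass, AdaBoost-style closed form used here to set alpha is a placeholder assumption and does not reproduce the UNN update analyzed in the papers cited above.

```python
# Sketch: leveraged k-NN voting. Each prototype j has a coefficient alpha_j;
# a query is labeled by sign( sum over its k nearest prototypes of alpha_j * y_j ).
# The closed-form alpha below (agreements vs disagreements among the examples
# that have j as a neighbor) is an assumed placeholder, not the UNN update.
import numpy as np

def fit_leveraged_knn(X, y, k=5, eps=1e-12):
    # y in {-1, +1}
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]              # k nearest prototypes of each example
    alpha = np.zeros(n)
    for j in range(n):
        voters = np.where((nn == j).any(axis=1))[0]  # examples having j as a neighbor
        if voters.size == 0:
            continue
        agree = (y[voters] == y[j]).sum()
        alpha[j] = 0.5 * np.log((agree + eps) / (voters.size - agree + eps))
    return alpha

def predict_leveraged_knn(Xq, X, y, alpha, k=5):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    votes = (alpha[nn] * y[nn]).sum(axis=1)          # leveraged vote of the k nearest prototypes
    return np.where(votes >= 0, 1, -1)

# usage:
# alpha = fit_leveraged_knn(X_train, y_train)
# y_pred = predict_leveraged_knn(X_test, X_train, y_train, alpha)
```
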