Stefano Melacci

In the last few years, due to the growing ubiquity of unlabeled data, the machine learning community has spent much effort on better understanding and improving classifiers that exploit unlabeled data. Following the manifold regularization approach, Laplacian Support Vector Machines (LapSVMs) have shown state-of-the-art performance in semi-supervised classification…
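As a rough illustration of the manifold regularization idea behind LapSVMs (not the paper's training procedure), the sketch below builds a k-nearest-neighbour graph over labeled and unlabeled points and adds the intrinsic penalty f'Lf to a regularized hinge loss; the linear model and all parameter names are illustrative assumptions.

    import numpy as np

    def knn_graph(X, k=5):
        """Symmetric k-NN adjacency matrix over labeled + unlabeled points."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        W = np.zeros_like(d)
        for i in range(len(X)):
            for j in np.argsort(d[i])[1:k + 1]:
                W[i, j] = W[j, i] = 1.0
        return W

    def lapsvm_objective(w, X_lab, y, X_all, gamma_A=1e-2, gamma_I=1e-1, k=5):
        """Hinge loss + ambient norm + intrinsic (graph Laplacian) penalty."""
        W = knn_graph(X_all, k)
        L = np.diag(W.sum(axis=1)) - W     # graph Laplacian over all points
        f_all = X_all @ w                  # linear model as a stand-in for the kernel expansion
        hinge = np.maximum(0.0, 1.0 - y * (X_lab @ w)).mean()
        intrinsic = f_all @ L @ f_all / len(X_all) ** 2
        return hinge + gamma_A * np.dot(w, w) + gamma_I * intrinsic

    # toy usage: 4 labeled and 20 unlabeled 2-D points
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(4, 2)); y = np.array([1, 1, -1, -1])
    X_all = np.vstack([X_lab, rng.normal(size=(20, 2))])
    print(lapsvm_objective(rng.normal(size=2), X_lab, y, X_all))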
Following basic principles of information-theoretic learning, in this paper we propose a novel approach to data clustering, referred to as minimal entropy encoding (MEE), which is based on a set of functions (features) projecting each input onto a minimum entropy configuration (code). Inspired by traditional parsimony principles, we seek solutions in…
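The exact MEE functional is not reproduced here; the sketch below only conveys the general flavour under stated assumptions: a set of linear features produces a soft code for each input via a softmax, and the objective combines the average entropy of the codes with an L2 parsimony term on the feature weights.

    import numpy as np

    def soft_codes(X, W):
        """Map inputs to probabilistic codes via a softmax over linear features."""
        scores = X @ W
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(scores)
        return e / e.sum(axis=1, keepdims=True)

    def mee_like_objective(W, X, lam=1e-2, eps=1e-12):
        """Average code entropy + L2 parsimony term (illustrative, not the paper's exact functional)."""
        P = soft_codes(X, W)
        entropy = -(P * np.log(P + eps)).sum(axis=1).mean()
        return entropy + lam * np.sum(W ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))   # toy data
    W = rng.normal(size=(5, 3))     # 3 features / code components
    print(mee_like_objective(W, X))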
This paper presents Visual ENhancement of USers (VENUS), a system able to automatically enhance male and female frontal facial images by exploiting a database of celebrities as reference patterns for attractiveness. Each face is represented by a set of landmark points that can be manually selected or automatically localized using active shape models. The faces…
In this paper we present Similarity Neural Networks (SNNs), a neural network model able to learn a similarity measure for pairs of patterns, exploiting binary supervision on their similarity/dissimilarity relationships. Pairwise relationships, also referred to as pairwise constraints, generally carry less information than class labels but, in some…
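A minimal sketch of learning a similarity measure from pairwise constraints, given as a simplified stand-in rather than the SNN architecture of the paper: a logistic model scores each pair through a symmetric representation of the two patterns and is fit to binary similar/dissimilar labels. All function and parameter names are hypothetical.

    import numpy as np

    def pair_features(x1, x2):
        """Symmetric representation of a pair: |x1 - x2| concatenated with x1 * x2."""
        return np.concatenate([np.abs(x1 - x2), x1 * x2])

    def train_pairwise(pairs, labels, lr=0.1, epochs=200):
        """Logistic regression on pair features; labels are 1 (similar) / 0 (dissimilar)."""
        dim = len(pair_features(*pairs[0]))
        w, b = np.zeros(dim), 0.0
        for _ in range(epochs):
            for (x1, x2), y in zip(pairs, labels):
                z = pair_features(x1, x2)
                p = 1.0 / (1.0 + np.exp(-(w @ z + b)))
                g = p - y                  # gradient of the cross-entropy loss
                w -= lr * g * z
                b -= lr * g
        return w, b

    # toy usage: pairs drawn around two prototype points
    rng = np.random.default_rng(0)
    a, c = rng.normal(0, 1, 3), rng.normal(5, 1, 3)
    pairs = [(a, a + 0.1), (c, c + 0.1), (a, c), (a + 0.2, c)]
    labels = [1, 1, 0, 0]
    w, bias = train_pairwise(pairs, labels)
    print(1.0 / (1.0 + np.exp(-(w @ pair_features(a, a + 0.05) + bias))))   # predicted similarity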
Building on a recently proposed framework for learning from constraints with kernel-based representations, in this brief we naturally extend its application to the case of inference on new constraints. We give examples for polynomials and first-order logic, showing how new constraints can be checked on the basis of given premises and data samples…
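As a toy illustration of checking a new constraint against premises on data samples (the brief's kernel-based inference machinery is not reproduced), a candidate constraint is accepted when its maximum violation over the samples stays below a tolerance, given functions that already satisfy the premise on those samples. All names and the specific constraints are assumptions.

    import numpy as np

    def max_violation(constraint, fs, X):
        """Largest violation of `constraint(f1(x), ..., fn(x)) <= 0` over the samples."""
        values = [constraint(*[f(x) for f in fs]) for x in X]
        return max(0.0, max(values))

    # two hypothetical learned functions that satisfy the premise f1 + f2 = 1
    f1 = lambda x: 1.0 / (1.0 + np.exp(-x.sum()))
    f2 = lambda x: 1.0 - f1(x)

    X = np.random.default_rng(0).normal(size=(50, 2))

    premise   = lambda a, b: abs(a + b - 1.0)   # |f1 + f2 - 1| <= 0, holds by construction
    candidate = lambda a, b: a - 1.0            # f1 <= 1, entailed by the premise

    tol = 1e-6
    assert max_violation(premise, [f1, f2], X) <= tol
    print("new constraint holds on samples:", max_violation(candidate, [f1, f2], X) <= tol)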
Supervised examples and prior knowledge on regions of the input space have been profitably integrated into kernel machines to improve classifier performance in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have mostly been based on specific optimization schemes in which, as usual, the…
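One crude way to combine supervision on points with supervision on regions, shown only as a sketch (the unified optimization schemes discussed in these works are not reproduced): each labeled region is approximated by sampling points inside it, and the samples enter the training set alongside the ordinary labeled examples. Region shapes, names, and parameters are illustrative assumptions.

    import numpy as np

    def sample_box(low, high, n, rng):
        """Draw n points uniformly from an axis-aligned labeled region [low, high]."""
        return rng.uniform(low, high, size=(n, len(low)))

    def augment_with_regions(X, y, regions, n_per_region=20, seed=0):
        """Append points sampled from each region (with the region's label) to the labeled set."""
        rng = np.random.default_rng(seed)
        Xs, ys = [X], [y]
        for low, high, label in regions:
            pts = sample_box(np.asarray(low), np.asarray(high), n_per_region, rng)
            Xs.append(pts)
            ys.append(np.full(n_per_region, label))
        return np.vstack(Xs), np.concatenate(ys)

    # toy usage: two labeled points plus one positive and one negative labeled region
    X = np.array([[0.0, 0.0], [3.0, 3.0]]); y = np.array([-1, 1])
    regions = [([2.5, 2.5], [4.0, 4.0], 1), ([-1.0, -1.0], [0.5, 0.5], -1)]
    X_aug, y_aug = augment_with_regions(X, y, regions)
    print(X_aug.shape, y_aug.shape)   # (42, 2) (42,)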
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered on the concept of constraint, which represents the interactions with the environment, and on the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the…
Supervised learning is investigated when the data are represented not only by labeled points but also by labeled regions of the input space. In the limit case, such regions degenerate to single points and the proposed approach reduces to the classical learning setting. The adopted framework entails the minimization of a functional obtained by introducing…