In the last few years, due to the growing ubiquity of unlabeled data, much effort has been spent by the machine learning community on better understanding and improving the quality of classifiers that exploit unlabeled data. Following the manifold regularization approach, Laplacian Support Vector Machines (LapSVMs) have shown state-of-the-art …
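As a reference point, the standard manifold regularization objective underlying LapSVMs (a well-known formulation from the literature, not necessarily the exact variant used in this paper) combines a loss V on the l labeled points with an ambient RKHS penalty and an intrinsic penalty built from the graph Laplacian L over all l + u points:

\[
\min_{f \in \mathcal{H}_K} \; \frac{1}{l} \sum_{i=1}^{l} V\big(x_i, y_i, f\big) \;+\; \gamma_A \, \|f\|_K^2 \;+\; \frac{\gamma_I}{(l+u)^2} \, \mathbf{f}^\top L \, \mathbf{f},
\]

where \(\mathbf{f}\) collects the evaluations of f on the labeled and unlabeled samples, and \(\gamma_A, \gamma_I\) weight the ambient and intrinsic regularizers.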
Following basic principles of information-theoretic learning, in this paper, we propose a novel approach to data clustering, referred to as minimal entropy encoding (MEE), which is based on a set of functions (features) projecting each input onto a minimum entropy configuration (code). Inspired by traditional parsimony principles, we seek solutions in …
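As a rough sketch of the information-theoretic idea (the notation below is illustrative and not the paper's exact objective), minimizing the Shannon entropy of the code distribution favors encodings concentrated on a few configurations, while a parsimony term \(\Omega\) keeps the encoding functions simple:

\[
\min_{f} \; H\big(c(f)\big) + \lambda \, \Omega(f), \qquad H(c) = -\sum_{k} p(c_k) \log p(c_k).
\]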
Based on a recently proposed framework of learning from constraints using kernel-based representations, in this brief, we naturally extend its application to the case of inferences on new constraints. We give examples for polynomials and first-order logic by showing how new constraints can be checked on the basis of given premises and data samples.
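A hypothetical first-order-logic illustration of this kind of check (not an example taken from the paper): if the learned predicates are found to satisfy two implication premises on the data samples, the derived constraint follows and can itself be verified:

\[
\forall x \; a(x) \Rightarrow b(x), \qquad \forall x \; b(x) \Rightarrow c(x) \quad \vdash \quad \forall x \; a(x) \Rightarrow c(x).
\]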
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the …
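A generic way to write such region supervision (a knowledge-based formulation assumed here for illustration, not the paper's own scheme) is to require a margin over an entire labeled region R of the input space, instead of only at isolated labeled points \(x_i\):

\[
y_R \, f(x) \ge 1 \quad \forall x \in R, \qquad \text{as opposed to} \qquad y_i \, f(x_i) \ge 1.
\]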
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the …
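In a generic penalized form (the notation is ours and only sketches the idea), the extension amounts to trading the usual parsimony term in the RKHS against penalties measuring how much each constraint \(\phi_j\) is violated over the input domain:

\[
\min_{f \in \mathcal{H}_K} \; \|f\|_K^2 \;+\; \sum_{j} \lambda_j \int_{\mathcal{X}} \phi_j\big(x, f(x)\big) \, dx.
\]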
Keywords: supervised learning, kernel machines, propositional rules, variational calculus, infinite-dimensional optimization, representer theorems. Abstract: Supervised learning is investigated when the data are represented not only by labeled points but also by labeled regions of the input space. In the limit case, such regions degenerate to single points …
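For comparison, the classical pointwise representer theorem that such results generalize states that the optimizer lies in the span of kernel sections centered on the supervised examples:

\[
f^\star(x) = \sum_{i=1}^{n} \alpha_i \, K(x, x_i);
\]

how this expansion changes when the points \(x_i\) are replaced by labeled regions is precisely what the representer theorems mentioned in the abstract address, and is not reproduced here.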
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of …
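For instance, in the two-view setting mentioned above, a hard pointwise coherence constraint can be written schematically (with notation assumed here for illustration) as requiring the view-specific classifiers to agree exactly on each of the m constrained examples:

\[
f_1\big(v_1(x_i)\big) = f_2\big(v_2(x_i)\big), \qquad i = 1, \dots, m.
\]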