Francesca Odone

In the statistical learning framework, the use of appropriate kernels may be the key to substantial improvement in solving a given problem. In essence, a kernel is a similarity measure between input points that satisfies certain mathematical requirements and possibly captures domain knowledge. We focus on kernels for images: we represent the image…
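As a hedged illustration of the Mercer requirement mentioned above (not drawn from the paper itself), the short Python sketch below builds a Gram matrix with a Gaussian kernel on synthetic feature vectors and checks that it is symmetric and positive semi-definite; the kernel, data, and sigma value are all assumptions made for the example.

# Sketch: a kernel is a similarity function k(x, y) whose Gram matrix on any
# finite set of points is symmetric positive semi-definite (Mercer condition).
# The Gaussian kernel and toy data below are illustrative assumptions.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Similarity between two feature vectors (e.g., flattened image descriptors)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.random((5, 16))                       # 5 toy "image" feature vectors
G = np.array([[gaussian_kernel(a, b) for b in X] for a in X])  # Gram matrix

eigvals = np.linalg.eigvalsh(G)
print("symmetric:", np.allclose(G, G.T))
print("PSD (min eigenvalue >= 0):", eigvals.min() >= -1e-10)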
In this paper we address the problem of classifying images, by exploiting global features that describe color and illumination properties, and by using the statistical learning paradigm. The contribution of this paper is twofold. First, we show that histogram intersection has the required mathematical properties to be used as a kernel function for Support…
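The following sketch, assuming scikit-learn and synthetic histograms in place of real image data, shows how a histogram-intersection Gram matrix can be plugged into an SVM as a precomputed kernel; it illustrates the idea rather than reproducing the paper's experimental setup.

# Sketch: histogram intersection as a precomputed kernel for an SVM.
# Synthetic histograms stand in for image color histograms; hyperparameters are arbitrary.
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(H1, H2):
    """Pairwise intersection kernel: K[i, j] = sum_b min(H1[i, b], H2[j, b])."""
    return np.sum(np.minimum(H1[:, None, :], H2[None, :, :]), axis=2)

rng = np.random.default_rng(0)
X = rng.random((40, 32))
X /= X.sum(axis=1, keepdims=True)              # normalize to unit-mass histograms
y = (X[:, :16].sum(axis=1) > 0.5).astype(int)  # toy labels

K_train = histogram_intersection(X, X)
clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))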
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy between certain learning algorithms and regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and…
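A minimal sketch of Tikhonov regularization used as a learning algorithm (regularized least squares with a linear kernel) follows; the data, kernel choice, and regularization parameter are illustrative assumptions, not the paper's.

# Sketch of Tikhonov regularization in the learning setting: solve
# (K + n*lambda*I) c = y for the coefficients of the regularized estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam = 1e-2
K = X @ X.T                                      # linear-kernel Gram matrix
c = np.linalg.solve(K + n * lam * np.eye(n), y)  # Tikhonov solution
f = lambda x_new: (x_new @ X.T) @ c              # learned function

print("train RMSE:", np.sqrt(np.mean((K @ c - y) ** 2)))
print("prediction at a new point:", float(f(rng.standard_normal(d))))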
The paper tackles the problem of feature point matching between pairs of images of the same scene. This is a key problem in computer vision. The method we discuss here is a version of the SVD-matching proposed by Scott and Longuet-Higgins and later modified by Pilu, which we elaborate in order to cope with large scale variations. To this end we add to the…
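For readers unfamiliar with SVD-matching, the sketch below follows the Scott and Longuet-Higgins recipe in its basic form (Gaussian proximity matrix, SVD, singular values flattened to one, mutual row/column maxima as matches); the point sets and sigma are toy assumptions, and the scale-handling extension discussed in the paper is not included.

# Sketch of basic SVD-matching in the spirit of Scott & Longuet-Higgins:
# build a Gaussian proximity matrix between the two point sets, take its SVD,
# replace the singular values with ones, and read matches off the result.
import numpy as np

def svd_matching(P1, P2, sigma=10.0):
    d2 = np.sum((P1[:, None, :] - P2[None, :, :]) ** 2, axis=2)
    G = np.exp(-d2 / (2.0 * sigma ** 2))       # proximity matrix
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                  # singular values flattened to one
    return [(i, j) for i in range(P.shape[0]) for j in range(P.shape[1])
            if P[i, j] == P[i, :].max() and P[i, j] == P[:, j].max()]

pts1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pts2 = pts1 + np.array([1.0, 1.0])              # translated copy of the same points
print(svd_matching(pts1, pts2))                 # corresponding point index pairs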
We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same…
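As one concrete member of the spectral regularization family, the sketch below applies a spectral cut-off filter to the eigendecomposition of a kernel Gram matrix; the data, kernel, and cut-off threshold are illustrative assumptions.

# Sketch of a spectral regularization scheme (spectral cut-off): keep only the
# eigencomponents of the Gram matrix above a threshold when inverting it.
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)     # Gaussian Gram matrix
evals, evecs = np.linalg.eigh(K)

threshold = 1e-3 * evals.max()
filt = np.where(evals > threshold, 1.0 / evals, 0.0)  # spectral filter g(sigma)
c = evecs @ (filt * (evecs.T @ y))                    # regularized coefficients

print("fit RMSE:", np.sqrt(np.mean((K @ c - y) ** 2)))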
This paper proposes a general framework for selecting features in the computer vision domain—i.e., learning descriptions from data—where the prior knowledge related to the application is confined to the early stages. The main building block is a regularization algorithm based on a penalty term enforcing sparsity. The overall strategy we propose is also…
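A hedged sketch of the sparsity idea follows, using an l1-penalized regression (scikit-learn's Lasso) on synthetic data to select a handful of informative features; this stands in for, and is not identical to, the regularization algorithm proposed in the paper.

# Sketch of feature selection via a sparsity-enforcing l1 penalty: most
# coefficients are driven to zero and the survivors index the selected features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 100, 200                                # more candidate features than samples
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 2.0                               # only 5 informative features
y = X @ w_true + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
print("selected feature indices:", np.flatnonzero(model.coef_))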
This paper presents a motion segmentation method useful for efficiently representing a video shot as a static mosaic of the background plus sequences of the objects moving in the foreground. This generates an MPEG-4 compliant, layered representation useful for video coding, editing and indexing. First, a mosaic of the static background is computed by…
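Purely as an illustration of the background/foreground layering idea, and not the paper's mosaicking method, the sketch below estimates a static background as the per-pixel temporal median over registered frames and masks the pixels that deviate from it; all data and thresholds are synthetic assumptions.

# Sketch (not the paper's method): background layer as a temporal median of
# registered frames, foreground masks as large deviations from that background.
import numpy as np

rng = np.random.default_rng(0)
frames = np.full((10, 48, 64), 100.0)                 # synthetic, already-registered frames
frames += rng.normal(0, 2, frames.shape)              # sensor noise
frames[4:7, 20:30, 30:40] = 220.0                     # a bright object present only in frames 4-6

background = np.median(frames, axis=0)                # static background estimate
foreground_masks = np.abs(frames - background) > 25   # per-frame moving-object masks

print("background mean:", round(background.mean(), 1))
print("foreground pixels per frame:", foreground_masks.sum(axis=(1, 2)))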
In this paper, we propose a new trainable system for selecting face features from over-complete dictionaries of image measurements. The starting point is an iterative thresholding algorithm which provides sparse solutions to linear systems of equations. Although the proposed methodology is quite general and could be applied to various image classification…
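The iterative thresholding idea can be sketched as an ISTA-style soft-thresholding loop on a synthetic underdetermined system; the step size, penalty, and data below are assumptions made for the example, not the paper's configuration.

# Sketch of iterative soft-thresholding (ISTA-style) recovering a sparse
# solution of an underdetermined linear system A x = b.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 50, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2          # gradient step size
lam = 0.01                                      # sparsity penalty
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.sort(np.flatnonzero(x_true)))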
Sparsity has been shown to be one of the most important properties for visual recognition purposes. In this paper we show that sparse representation plays a fundamental role in achieving one-shot learning and real-time recognition of actions. We start from RGBD images, combine motion and appearance cues, and extract state-of-the-art features in a…
The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features extracted from camera images. The images must come from huge visual datasets in order to circumvent all problems related to changing…