# Infinite Latent Feature Selection: A Probabilistic Latent Graph-Based Ranking Approach

```bibtex
@article{Roffo2017InfiniteLF,
  title={Infinite Latent Feature Selection: A Probabilistic Latent Graph-Based Ranking Approach},
  author={Giorgio Roffo and Simone Melzi and Umberto Castellani and Alessandro Vinciarelli},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={1407--1415}
}
```
• Published 24 July 2017
• Computer Science
• 2017 IEEE International Conference on Computer Vision (ICCV)
Feature selection is playing an increasingly significant role in many computer vision applications, spanning from object recognition to visual object tracking. However, most recent feature selection solutions are not robust across different and heterogeneous sets of data. In this paper, we address this issue by proposing a robust probabilistic latent graph-based feature selection algorithm that performs the ranking step while considering all the possible subsets of features…
155 Citations

## Citations

Infinite Feature Selection: A Graph-based Feature Filtering Approach
• Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence
• 2021
The results show that Inf-FS behaves better in almost any situation, that is, both when the number of features to keep is fixed a priori and when deciding the subset cardinality is part of the process.
Joint feature and instance selection using manifold data criteria: application to image classification
• F. Dornaika
• Computer Science
Artificial Intelligence Review
• 2020
This paper targets joint feature and instance selection by adopting feature subset relevance and sparse modeling representative selection, and evaluates the performance of the proposed schemes on image classification, using the nearest neighbor and support vector machine classifiers.
The Effect of Evidence Transfer on Latent Feature Relevance for Clustering
• Computer Science
Informatics
• 2019
The effects of evidence transfer on the latent representation of an autoencoder are interpreted by comparing the method to the information bottleneck method, using the relevance metric to compare the overall mutual information between the latent representations and the ground-truth labels before and after their incremental manipulation.
Covariance-free Partial Least Squares: An Incremental Dimensionality Reduction Method
• Computer Science
2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
• 2021
A novel incremental PLS, named Covariance-free Incremental Partial Least Squares (CIPLS), which learns a low-dimensional representation of the data using a single sample at a time, and is validated on face verification and image classification tasks, where it outperforms several other incremental dimensionality reduction techniques.
A k-Skyband Approach for Feature Selection
• Computer Science
SISAP
• 2019
This study examines the filtering problem from another perspective, in which multiple filters are aggregated according to classifiers’ constraints by relying on the concept of $\mathcal{F}$-dominance for weighted and monotone linear functions.
Feature and Instance Selection Through Discriminant Analysis Criteria
• Computer Science
• 2021
This paper presents three approaches for joint feature and instance selection using scores derived from discriminant analysis theory, and compares the performance of the proposed methods with several state-of-the-art methods.
Inverse Feature Learning: Feature Learning Based on Representation Learning of Error
• Computer Science
IEEE Access
• 2020
Inverse feature learning (IFL) is proposed as a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach to help with generalization and reduce the chance of over-fitting.
Investigating the Robustness and Stability to Noisy Data of a Dynamic Feature Selection Method
• Computer Science
2019 8th Brazilian Conference on Intelligent Systems (BRACIS)
• 2019
One such successful model for selecting features while considering the particularities of the data, called Pareto-front-based dynamic feature selection (PF-DFS), is evaluated to test its stability and robustness on noisy data.

## References

Showing 1–10 of 43 references
Infinite Feature Selection
• Computer Science
2015 IEEE International Conference on Computer Vision (ICCV)
• 2015
A feature selection method exploiting the convergence properties of power series of matrices and introducing the concept of infinite feature selection (Inf-FS), which permits the investigation of the importance (relevance and redundancy) of a feature when injected into an arbitrary set of cues.
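The "power series of matrices" idea behind Inf-FS can be illustrated in a few lines: on a feature-feature adjacency matrix $A$, paths of length $l$ contribute $A^l$, and the geometric series $\sum_{l\ge 1}(\alpha A)^l = (I-\alpha A)^{-1} - I$ converges when $\alpha$ times the spectral radius of $A$ is below 1. A minimal numpy sketch (the toy adjacency, `alpha` value, and rescaling safeguard are illustrative, not the paper's exact procedure):

```python
import numpy as np

def inf_fs_scores(A, alpha=0.5):
    """Sketch of the Inf-FS idea: score each feature by summing the
    contributions of paths of every length on a feature-feature graph A."""
    n = A.shape[0]
    # The series sum_{l>=1} (alpha*A)^l converges iff alpha * rho(A) < 1.
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    if alpha * rho >= 1:
        alpha = 0.9 / rho  # hypothetical safeguard, not from the paper
    # Closed form of the convergent geometric series of matrices.
    S = np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)
    return S.sum(axis=1)  # higher score = more relevant

# Toy adjacency: feature 0 is the most strongly connected.
A = np.array([[0.0, 0.9, 0.8],
              [0.9, 0.0, 0.1],
              [0.8, 0.1, 0.0]])
scores = inf_fs_scores(A)
ranking = np.argsort(-scores)  # feature 0 ranks first here
```

The closed form replaces an infinite sum over path lengths with a single matrix inverse, which is what makes the "infinite" formulation tractable.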
Unsupervised feature selection for multi-cluster data
• Computer Science
KDD
• 2010
Inspired by recent developments in manifold learning and L1-regularized models for subset selection, a new approach called Multi-Cluster Feature Selection (MCFS) is proposed for unsupervised feature selection, which selects features such that the multi-cluster structure of the data is best preserved.
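The MCFS recipe as described (spectral embedding of the data graph, then an L1-regularized regression per eigenvector) can be roughly sketched with numpy; the coordinate-descent Lasso, the toy graph, and all parameter values below are illustrative, not the paper's implementation:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=100):
    """Plain coordinate-descent Lasso with soft-thresholding (a sketch)."""
    w = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]      # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

def mcfs_scores(X, S, k=1, lam=0.1):
    """MCFS sketch: embed samples via the bottom nontrivial eigenvectors of
    the normalized graph Laplacian, regress each eigenvector on the features
    with the Lasso, and score feature j by its largest |coefficient|."""
    d = S.sum(axis=1)
    L = np.diag(d) - S
    Dm = np.diag(1.0 / np.sqrt(d))
    _, vecs = np.linalg.eigh(Dm @ L @ Dm)       # ascending eigenvalues
    Y = vecs[:, 1:k + 1]                        # skip the trivial eigenvector
    W = np.column_stack([lasso_cd(X, Y[:, c], lam) for c in range(k)])
    return np.abs(W).max(axis=1)

# Toy data: feature 0 separates the two clusters, feature 1 is noise.
X = np.array([[0.0,  1.0], [0.1, -1.0], [5.0,  1.0], [5.1, -1.0]])
S = np.array([[0.0, 1.0, 0.1, 0.0],     # strong intra-cluster edges,
              [1.0, 0.0, 0.0, 0.1],     # weak inter-cluster edges
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
scores = mcfs_scores(X, S, k=1, lam=0.1)  # feature 0 scores higher
```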
Laplacian Score for Feature Selection
• Computer Science
NIPS
• 2005
This paper proposes a "filter" method for feature selection which is independent of any learning algorithm, based on the observation that, in many real world classification problems, data from the same class are often close to each other.
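The Laplacian Score admits a compact formulation: given a sample-similarity matrix $S$ with degree matrix $D$ and Laplacian $L = D - S$, each feature $f_r$ is mean-removed and scored as $\tilde f_r^\top L \tilde f_r / \tilde f_r^\top D \tilde f_r$, with smaller scores indicating better locality preservation. A minimal numpy sketch (toy data and graph are mine, not from the paper):

```python
import numpy as np

def laplacian_scores(X, S):
    """Laplacian Score: features that vary smoothly over the data graph
    (i.e. preserve local structure) receive SMALLER scores."""
    n = X.shape[0]
    d = S.sum(axis=1)
    D = np.diag(d)
    L = D - S                                   # graph Laplacian
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_tilde = f - (f @ d) / d.sum()         # remove the weighted mean
        num = f_tilde @ L @ f_tilde             # local variation
        den = f_tilde @ D @ f_tilde             # global variance
        scores.append(num / den if den > 0 else np.inf)
    return np.array(scores)

# Toy data: feature 0 is constant within each graph neighborhood,
# feature 1 flips sign between neighbors (noise w.r.t. the graph).
X = np.array([[0.0,  1.0], [0.1, -1.0], [5.0,  1.0], [5.1, -1.0]])
S = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
scores = laplacian_scores(X, S)  # feature 0 gets the smaller score
```

Being a filter method, this scoring uses only the data graph, never a classifier, which is exactly the independence property the abstract highlights.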
Generalized Fisher Score for Feature Selection
• Computer Science
UAI
• 2011
Experiments indicate that the proposed generalized Fisher score to jointly select features outperforms Fisher score as well as many other state-of-the-art feature selection methods.
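For reference, the classical (per-feature) Fisher score that this work generalizes is the ratio of between-class to within-class variance; a minimal numpy sketch of that baseline (toy data is illustrative; the paper's contribution is the *joint* selection, which this univariate score does not capture):

```python
import numpy as np

def fisher_scores(X, y):
    """Univariate Fisher score per feature:
    between-class scatter / within-class scatter (higher = more discriminative)."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        nc = Xc.shape[0]
        num += nc * (Xc.mean(axis=0) - mu) ** 2   # between-class
        den += nc * Xc.var(axis=0)                # within-class
    return num / den

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
X = np.array([[0.0,  5.0], [0.1, -5.0], [1.0,  5.0], [1.1, -5.0]])
y = np.array([0, 0, 1, 1])
scores = fisher_scores(X, y)  # feature 0 dominates
```

Because each feature is scored in isolation, two individually weak but jointly informative features both score low, which is the limitation the generalized (joint) formulation addresses.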
Gradient Feature Selection for Online Boosting
• Computer Science
2007 IEEE 11th International Conference on Computer Vision
• 2007
This paper proposes a gradient-based feature selection approach that iteratively updates each feature using gradient descent, minimizing the weighted least-squares error between the estimated feature response and the true label.
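The weighted least-squares objective mentioned above, $\sum_i w_i (a\,\phi_i + b - y_i)^2$, can be minimized by plain gradient descent; a generic sketch of that step (variable names and learning rate are mine, and this is the textbook update, not the paper's exact boosting-specific rule):

```python
import numpy as np

def weighted_lsq_gd(phi, y, w, lr=0.1, steps=200):
    """Fit a scalar response a*phi + b to labels y by gradient descent
    on the weighted least-squares error sum_i w_i (a*phi_i + b - y_i)^2."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        r = a * phi + b - y                         # weighted residuals
        a -= lr * 2.0 * np.sum(w * r * phi) / w.sum()
        b -= lr * 2.0 * np.sum(w * r) / w.sum()
    return a, b

# Toy feature response already aligned with the labels: optimum is a=1, b=0.
phi = np.array([-1.0, 0.0, 1.0])
y = np.array([-1.0, 0.0, 1.0])
w = np.ones(3)                                      # uniform sample weights
a, b = weighted_lsq_gd(phi, y, w)
```

In a boosting setting the weights `w` would come from the current sample distribution, so features are re-scored as the ensemble reweights hard examples.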
Feature Selection and Kernel Learning for Local Learning-Based Clustering
• Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence
• 2011
The aim of this paper is to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) method, which can outperform global learning-based methods when dealing with high-dimensional data lying on a manifold.
Feature selection from huge feature sets
• Computer Science
Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001
• 2001
This work addresses the feature selection problem by proposing a three-step algorithm that uses a variation of the well-known Relief algorithm to remove irrelevance, and which is shown to be more effective than standard feature selection algorithms for large data sets with many irrelevant and redundant features.
Computational Methods of Feature Selection
• Computer Science
• 2007
This book discusses supervised, unsupervised, and semi-supervised feature selection, along with the key contributions and organization of the book and a look at directions ahead.
Online Feature Selection for Visual Tracking
• Computer Science
BMVC
• 2016
This paper presents a collection of several modern feature selection approaches selected among filter, embedded, and wrapper methods, and shows how feature selection mechanisms can be successfully employed for ranking the features used by a tracking system, maintaining high frame rates.
Probabilistic Latent Semantic Analysis
This work proposes a widely applicable generalization of maximum likelihood model fitting by tempered EM, based on a mixture decomposition derived from a latent class model, resulting in a more principled approach with a solid foundation in statistics.