Everton Alvares Cherman

Feature selection is an important task in machine learning that can effectively reduce dataset dimensionality by removing irrelevant and/or redundant features. Although a large body of research deals with feature selection in single-label data, for which measures have been proposed to filter out irrelevant features, this is not the case for multi-label …
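A minimal sketch of the kind of filter feature selection described above for single-label data, using mutual information as the relevance measure. The synthetic dataset and the choice of scikit-learn utilities are assumptions for illustration, not the measures proposed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic single-label dataset: a few informative features among many irrelevant ones.
X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           n_redundant=5, random_state=0)

# Keep the 10 features with the highest mutual information with the class.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print("original dimensionality:", X.shape[1])
print("reduced dimensionality:", X_reduced.shape[1])
```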
Traditional classification algorithms consider learning problems that contain only one label, i.e., each example is associated with a single nominal target variable characterizing its property. However, the number of practical applications involving data with multiple target variables has increased. To learn from this sort of data, multi-label …
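A minimal sketch of learning from data in which each example carries several binary target variables. Binary relevance (one classifier per label) via scikit-learn's OneVsRestClassifier is used here purely as an illustrative baseline on a synthetic dataset; it is not presented as the approach of the paper.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label dataset: each row of Y is a binary label set.
X, Y = make_multilabel_classification(n_samples=400, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# One binary classifier per label; predictions are combined into a label set.
model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X_tr, Y_tr)
print("Hamming loss:", hamming_loss(Y_te, model.predict(X_te)))
```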
The feature selection process aims to select a subset of relevant features to be used in model construction, reducing data dimensionality by removing irrelevant and redundant features. Although effective feature selection methods to support single-label learning abound, this is not the case for multi-label learning. Furthermore, most of the multi-label …
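One common workaround for multi-label feature selection is sketched below: score each feature against each label separately (a binary-relevance style transformation) and aggregate the per-label scores by averaging. This is only an illustrative strategy under assumed scikit-learn tooling, not necessarily the method studied in the paper.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.feature_selection import mutual_info_classif

X, Y = make_multilabel_classification(n_samples=400, n_features=30,
                                      n_classes=4, random_state=0)

# Average each feature's mutual information across all labels.
scores = np.mean(
    [mutual_info_classif(X, Y[:, j], random_state=0) for j in range(Y.shape[1])],
    axis=0,
)

# Keep the 10 highest-scoring features.
top = np.sort(np.argsort(scores)[::-1][:10])
X_reduced = X[:, top]
print("selected feature indices:", top)
```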
Defining the attributes in terms of fuzzy sets is an essential part of designing a fuzzy system. The main tasks involved in defining the fuzzy data base include deciding the type of fuzzy set (triangular, trapezoidal, etc.), the number of fuzzy sets for each attribute, and their distribution over each attribute domain. In the absence of an expert, these …
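A minimal sketch of covering an attribute domain with fuzzy sets: three triangular membership functions evenly distributed over an assumed [0, 10] domain. The number of sets and their placement are exactly the design decisions the abstract refers to; the values here are illustrative only.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

# Three evenly spaced fuzzy sets over the attribute domain [0, 10].
sets = {"low": (-5, 0, 5), "medium": (0, 5, 10), "high": (5, 10, 15)}

value = 3.0
memberships = {name: float(triangular(value, *params)) for name, params in sets.items()}
print(memberships)  # e.g. {'low': 0.4, 'medium': 0.6, 'high': 0.0}
```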
In supervised learning, simple baseline classifiers can be constructed by looking only at the class, i.e., ignoring any other information in the dataset. The single-label learning community frequently uses as a reference the classifier that always predicts the majority class. Although a classifier might perform worse than this simple baseline classifier, this …
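A minimal sketch of the majority-class baseline mentioned above: it ignores every feature and always predicts the most frequent class. scikit-learn's DummyClassifier and the Iris dataset are used here only as an assumed, convenient illustration.

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Only the class distribution of y_tr is used; the features are ignored.
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(X_tr, y_tr)
print("majority-class baseline accuracy:", baseline.score(X_te, y_te))
```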