Cost-sensitive Selection of Variables by Ensemble of Model Sequences

@article{Yan2021CostsensitiveSO,
  title={Cost-sensitive Selection of Variables by Ensemble of Model Sequences},
  author={Donghui Yan and Zhiwei Qin and Songxiang Gu and Haiping Xu and Ming Shao},
  journal={Knowl. Inf. Syst.},
  year={2021},
  volume={63},
  pages={1069-1092}
}
Many applications require collecting data on different variables, or measurements over many system performance metrics; we refer to these broadly as measures or variables. Data collection along each measure often incurs a cost, so it is desirable to account for the cost of measures in modeling. This is a fairly new class of problems in the area of cost-sensitive learning. A few attempts have been made to incorporate costs in combining and selecting measures; however, existing studies either do…
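
To make the problem setting concrete, here is a minimal, hypothetical Python sketch (not the paper's ensemble-of-model-sequences method): greedy forward selection in which each candidate variable is scored by its validation-accuracy gain per unit acquisition cost, stopping once a collection budget is exhausted. The data, per-measure costs, and budget are all synthetic assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                   # 10 candidate measures
y = (X[:, 0] + 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)
cost = rng.uniform(1.0, 5.0, size=10)            # hypothetical per-measure cost
budget = 6.0                                     # hypothetical collection budget

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def accuracy(features):
    # Validation accuracy of a model fit on the chosen measures only.
    if not features:
        return 0.0
    model = LogisticRegression().fit(X_tr[:, features], y_tr)
    return model.score(X_va[:, features], y_va)

selected, spent = [], 0.0
while True:
    base = accuracy(selected)
    affordable = [j for j in range(X.shape[1])
                  if j not in selected and spent + cost[j] <= budget]
    if not affordable:
        break
    # Rank remaining candidates by accuracy gain per unit acquisition cost.
    gains = [(accuracy(selected + [j]) - base) / cost[j] for j in affordable]
    idx = int(np.argmax(gains))
    if gains[idx] <= 0:
        break
    selected.append(affordable[idx])
    spent += cost[affordable[idx]]

print("selected measures:", selected, "total cost:", round(spent, 2))

The gain-per-cost ranking is one simple heuristic for trading predictive value against acquisition cost; the paper studies this trade-off with a more principled ensemble approach.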

A Deep Neural Network Based Approach to Building Budget-Constrained Models for Big Data Analysis

TLDR
This paper introduces an approach that uses Deep Neural Networks (DNNs) to eliminate less important features for big data analysis, removing some input features to bring the model cost within a given budget.
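
A hedged sketch of the budget idea described above (not the cited paper's exact method): train a small neural network, estimate each input feature's importance via permutation, then drop the least important features until the total feature cost fits the budget. The costs and budget are hypothetical values for illustration.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))
y = (X[:, 1] - X[:, 4] > 0).astype(int)
cost = rng.uniform(1.0, 3.0, size=8)   # hypothetical per-feature cost
budget = 10.0                          # hypothetical total cost budget

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=1).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=5,
                             random_state=1).importances_mean

# Drop the least important features first until the kept set fits the budget.
keep = list(range(X.shape[1]))
for j in np.argsort(imp):
    if cost[keep].sum() <= budget:
        break
    keep.remove(j)

print("kept features:", sorted(keep),
      "total cost:", round(float(cost[keep].sum()), 2))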
