• Corpus ID: 212628314

Tighter Bound Estimation of Sensitivity Analysis for Incremental and Decremental Data Modification

@article{Zhou2020TighterBE,
  title={Tighter Bound Estimation of Sensitivity Analysis for Incremental and Decremental Data Modification},
  author={Rui Zhou},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.03351}
}
  • Rui Zhou
  • Published 6 March 2020
  • Computer Science
  • ArXiv
In large-scale classification problems, the data set may be subject to frequent updates, e.g., a small fraction of the data is added to or removed from the original data set. In this case, incremental learning, which updates an existing classifier by explicitly modeling the data modification, is more efficient than retraining a new classifier from scratch. Conventional incremental learning algorithms try to solve the problem exactly. However, for some tasks, we are only interested in the lower and…
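The bounding idea can be sketched concretely. The snippet below is a minimal illustration, not the paper's algorithm: it uses the standard strong-convexity argument that, for a λ-strongly convex objective, the updated optimum lies within ‖∇F_new(w_old)‖/λ of the old solution, which yields an interval for any linear quantity of the unknown updated classifier without retraining. All data and names are synthetic and hypothetical.

```python
# Minimal sketch (not the paper's algorithm) of sensitivity-analysis bounding:
# bound a quantity of the unknown updated classifier instead of retraining.
# Relies on the standard strong-convexity bound
#   ||w_new* - w_old|| <= ||grad F_new(w_old)|| / lam
# for a lam-strongly-convex objective F_new.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, d, lam = 500, 10, 1.0

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

def objective(w, X, y):
    # L2-regularized logistic loss; lam-strongly convex in w.
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * lam * w @ w

def gradient(w, X, y):
    margins = y * (X @ w)
    p = expit(-margins)                         # sigmoid(-margin), overflow-safe
    return -(X.T @ (y * p)) / len(y) + lam * w

# Train once on the original data.
w_old = minimize(objective, np.zeros(d), args=(X, y), jac=gradient).x

# Simulate a small modification: drop the last 10 points, add 10 new ones.
X_new = np.vstack([X[:-10], rng.normal(size=(10, d))])
y_new = np.concatenate([y[:-10], rng.choice([-1.0, 1.0], size=10)])

# Radius of a ball guaranteed to contain the unknown updated solution w_new*.
r = np.linalg.norm(gradient(w_old, X_new, y_new)) / lam

# Bound the decision value x^T w_new* for a test point without retraining.
x_test = rng.normal(size=d)
center = x_test @ w_old
lo, hi = center - r * np.linalg.norm(x_test), center + r * np.linalg.norm(x_test)
print(f"x^T w_new* lies in [{lo:.4f}, {hi:.4f}]")
if lo > 0 or hi < 0:
    print("sign of the updated prediction is already determined; no retraining needed")
```

When the interval excludes zero, the updated prediction's sign is certified without solving the new problem, which is exactly the kind of inference the abstract refers to.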


References

SHOWING 1-10 OF 30 REFERENCES
Quick Sensitivity Analysis for Incremental Data Modification and Its Application to Leave-one-out CV in Linear Classification Problems
TLDR
This paper introduces a novel sensitivity analysis framework that can quickly provide lower and upper bounds of a quantity on the unknown updated classifier, and demonstrates that the bounds provided by the framework are often sufficiently tight for making desired inferences.
Consistency of support vector machines and other regularized kernel classifiers
  • Ingo Steinwart
  • Computer Science
    IEEE Transactions on Information Theory
  • 2005
It is shown that various classifiers that are based on minimization of a regularized risk are universally consistent, i.e., they can asymptotically learn in every classification task. The role of the…
New Incremental Learning Algorithm With Support Vector Machines
TLDR
The experimental results indicate that the MR-ISVM algorithm not only achieves lower misclassification rates and sparser classifiers, but also requires less total sampling and training time than ISVM based on randomly independent sampling.
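For context, a generic incremental linear-SVM update (not the MR-ISVM algorithm summarized above) can be illustrated with stochastic gradient descent, here via sklearn's SGDClassifier with hinge loss; the data is synthetic.

```python
# Generic incremental linear-SVM training: update an existing model on a new
# batch with partial_fit instead of retraining from scratch. This is only an
# illustration of the incremental-learning setting, not MR-ISVM itself.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X0 = rng.normal(size=(1000, 5))
y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)

clf = SGDClassifier(loss="hinge", random_state=0)
clf.partial_fit(X0, y0, classes=np.array([0, 1]))   # initial model

# Later, a small batch of new data arrives; update the model in place.
X1 = rng.normal(size=(50, 5))
y1 = (X1[:, 0] + X1[:, 1] > 0).astype(int)
clf.partial_fit(X1, y1)                              # incremental update

print("accuracy on new batch:", clf.score(X1, y1))
```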
Screening Tests for Lasso Problems
TLDR
Using a geometrically intuitive framework, this paper provides basic insights for understanding useful lasso screening tests and their limitations, and provides illustrative numerical studies on several datasets.
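As a concrete instance of the screening idea surveyed above, the snippet below implements the classic SAFE rule of El Ghaoui et al. for the lasso min_β ½‖y − Xβ‖² + λ‖β‖₁: feature j can be provably discarded when |x_jᵀy| < λ − ‖x_j‖‖y‖(λ_max − λ)/λ_max. The data is synthetic.

```python
# Basic safe screening test for the lasso (the SAFE rule): cheaply discard
# features guaranteed to be zero in the solution before running the solver.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 1000
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.normal(size=n)

corr = np.abs(X.T @ y)                      # |x_j^T y| for every feature
lam_max = corr.max()                        # smallest lam giving an all-zero solution
lam = 0.5 * lam_max

col_norms = np.linalg.norm(X, axis=0)
threshold = lam - col_norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
keep = corr >= threshold                    # features surviving the test
print(f"screened out {np.sum(~keep)} of {d} features before solving")
```

Tighter tests in the paper's geometric framework shrink the feasible region further and screen out more features, but follow the same pattern: a cheap per-feature check whose rejection is provably safe.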
Incremental Learning of Random Forests for Large-Scale Image Classification
TLDR
It is shown that RFs initially trained with just 10 classes can be extended to 1,000 classes with an acceptable loss of accuracy compared to training from the full data and with great computational savings compared to retraining for each new batch of classes.
Incremental Support Vector Learning for Ordinal Regression
TLDR
Numerical experiments on several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms.
Resting-State Whole-Brain Functional Connectivity Networks for MCI Classification Using L2-Regularized Logistic Regression
TLDR
The statistical results show that the L2-regularized logistic regression method is statistically significantly better than the other three algorithms, suggesting it could help physicians make efficient diagnoses in "real-world" situations.
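A minimal example of the classifier compared above, L2-regularized logistic regression, follows; synthetic data stands in for the connectivity features used in the paper.

```python
# L2-regularized logistic regression (sklearn's default penalty) evaluated
# with 5-fold cross-validation on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```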
Unsupervised learning of Dirichlet process mixture models with missing data
TLDR
This study extends a finite mixture model to the infinite case by considering Dirichlet process mixtures and computes the posterior distributions using the variational Bayesian expectation maximization algorithm, which optimizes the evidence lower bound on the complete-data log marginal likelihood.
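A standard (complete-data) version of this model can be fit with sklearn's truncated Dirichlet process Gaussian mixture, which also uses variational inference; unlike the paper's method, this off-the-shelf implementation does not handle missing entries.

```python
# Dirichlet process Gaussian mixture fit by variational inference, using
# sklearn's truncated-DP BayesianGaussianMixture (complete data only).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(100, 2)) for m in (-4.0, 0.0, 4.0)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Components retaining non-negligible weight approximate the inferred cluster count.
print("effective components:", np.sum(dpgmm.weights_ > 0.01))
```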
A Safe Screening Rule for Sparse Logistic Regression
TLDR
A fast and effective sparse logistic regression screening rule (Slores) to identify the "0" components in the solution vector, which may lead to a substantial reduction in the number of features entered into the optimization.
Incremental and decremental training for linear classification
TLDR
This paper focuses on linear classifiers, including logistic regression and linear SVM, because of their simplicity compared with kernel and other methods, and concludes that a warm-start setting on a high-order optimization method for primal formulations is the most suitable approach for incremental and decremental learning of linear classification.
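The warm-start setting described above can be illustrated with sklearn's LogisticRegression (a primal formulation solved by L-BFGS): after a small data modification, refitting resumes from the previous solution instead of starting from scratch. The data here is synthetic.

```python
# Warm-start refitting after an incremental data modification: reuse the
# current solution as the optimizer's starting point rather than retraining
# from zero. Illustration only, not the paper's exact experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

clf = LogisticRegression(warm_start=True, max_iter=1000)
clf.fit(X, y)
print("initial fit iterations:", clf.n_iter_[0])

# Incremental modification: append a small batch of new points and refit.
X_new = np.vstack([X, X[:100] + 0.01])
y_new = np.concatenate([y, y[:100]])
clf.fit(X_new, y_new)          # warm start: resumes from the previous solution
print("warm-started refit iterations:", clf.n_iter_[0])
```

Because the modified problem's optimum is close to the previous one, the warm-started refit typically converges in far fewer iterations than a cold start.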