Shrinkage Degree in $L_{2}$-Rescale Boosting for Regression

@article{Lin2017ShrinkageDI,
  title={Shrinkage Degree in $L_{2}$-Rescale Boosting for Regression},
  author={Shaobo Lin and Yao Wang and Zongben Xu},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2017},
  volume={28},
  pages={1851-1864}
}
  • Shaobo Lin, Yao Wang, Zongben Xu
  • Published 2017
  • Mathematics, Computer Science, Medicine
  • IEEE Transactions on Neural Networks and Learning Systems
$L_{2}$-rescale boosting ($L_{2}$-RBoosting) is a variant of $L_{2}$-Boosting, which can essentially improve the generalization performance of $L_{2}$-Boosting. The key feature of $L_{2}$-…
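The page truncates the abstract, but the rescaling idea it names can be illustrated directly: at each boosting step the current ensemble is shrunk by a factor governed by the shrinkage degree before a new weak learner is fitted to the residuals. The Python sketch below is only an illustration under assumed choices, namely componentwise linear weak learners and the schedule alpha_k = 2/(k+2); neither choice is taken from the paper, which studies exactly how this degree should be set.

import numpy as np

def l2_rboost(X, y, n_iter=200, shrink=lambda k: 2.0 / (k + 2)):
    # Minimal sketch of L2-rescale boosting with componentwise linear weak
    # learners.  The shrinkage schedule alpha_k = 2/(k+2) is an illustrative
    # assumption, not the paper's prescription; shrink=lambda k: 0.0 recovers
    # plain L2-Boosting.
    n, d = X.shape
    coef = np.zeros(d)                      # the ensemble, kept as one linear model
    for k in range(n_iter):
        coef *= 1.0 - shrink(k)             # rescale (shrink) the current ensemble
        resid = y - X @ coef                # residuals under the squared loss
        # componentwise least squares: pick the coordinate whose fit most reduces the loss
        den = (X ** 2).sum(axis=0)
        betas = (X.T @ resid) / den
        losses = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = int(np.argmin(losses))
        coef[j] += betas[j]                 # greedy update along the chosen coordinate
    return coef

# Toy usage: approximately recover a sparse linear signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.1 * rng.standard_normal(200)
print(l2_rboost(X, y)[[3, 17]])

With shrink fixed at zero the loop reduces to ordinary componentwise $L_{2}$-Boosting, the baseline analysed in several of the references below.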

Citations

Kernel-based $L_{2}$-Boosting with Structure Constraints
Theoretically, it is proved that KReBooT can achieve the almost optimal numerical convergence rate for nonlinear approximation, and, using the recently developed integral operator approach and a variant of Talagrand's concentration inequality, this paper provides fast learning rates for KReBooT, a new record for boosting-type algorithms.
Discriminative Face Recognition Methods with Structure and Label Information via $l_{2}$-Norm Regularization
Three discriminative sparse representation classification methods with structure and label information, based on $l_{2}$-norm regularization, are proposed for robust face recognition; they achieve better recognition results than most existing state-of-the-art sparse representation methods.
Re-scale boosting for regression and classification
A new boosting strategy, called re-scale boosting (RBoosting), is developed to accelerate the numerical convergence rate and improve the learning performance of boosting; RBoosting is shown to outperform boosting in terms of generalization.
Generalization-error-bound-based discriminative dictionary learning
A novel method called GEBDDL is proposed, which explicitly incorporates the radius-margin bound, directly related to the upper bound of the leave-one-out error of SVM, into its objective function to guide learning the dictionary and the coding vectors, and building the SVM classifier.
Dual sparse learning via data augmentation for robust facial image classification
The proposed method can produce a higher classification accuracy than many state-of-the-art algorithms, and it can be considered a promising option for image-based face recognition.
Adaptive discriminant analysis for semi-supervised feature selection
Instead of computing a similarity matrix first, SADA simultaneously learns an adaptive similarity matrix S and a projection matrix W through an iterative process, and introduces the $l_{2,p}$ norm to control the sparsity of S by adjusting p.
Re-scale AdaBoost for attack detection in collaborative filtering recommender systems
This paper applies a variant of AdaBoost, called re-scale AdaBoost (RAdaBoost), as a detection method based on extracted features; RAdaBoost is comparable to the optimal boosting-type algorithm and can effectively improve performance in some hard scenarios.
Random-filtering based sparse representation parallel face recognition
A novel two-phase representation-based face recognition (FR) approach, called the random-filtering based sparse representation (RFSR) scheme, is proposed; it improves FR accuracy by using a simple way to obtain more training samples, along with higher time efficiency.
A fast rank mutual information based decision tree and its implementation via Map-Reduce
Experimental analysis on six other data sets shows that the proposed MR-FRMIDT is feasible and has good parallel performance in reducing execution time and avoiding memory restrictions, and a comparison with seven popular splitting-measure-based monotonic decision trees on several data sets illustrates its effectiveness in monotonic classification.
Sparse Representation Feature for Facial Expression Recognition
Experimental results show that the sparse representation feature is suitable for facial expression recognition.

References

SHOWING 1-10 OF 60 REFERENCES
Characterizing $L_{2}$Boosting
We consider $L_{2}$Boosting, a special case of Friedman's generic boosting algorithm applied to linear regression under $L_{2}$-loss. We study $L_{2}$Boosting for an arbitrary regularization parameter and…
An $L_{2}$-Boosting Algorithm for Estimation of a Regression Function
An $L_{2}$-boosting algorithm for estimation of a regression function from random design is presented, which consists of repeatedly fitting a function from a fixed nonlinear function space to the…
Boosting With the L2 Loss
This article investigates a computationally simple variant of boosting, L2Boost, which is constructed from a functional gradient descent algorithm with the L2-loss function. Like other boosting…
Concentration estimates for learning with ℓ1-regularizer and data dependent hypothesis spaces
We consider the regression problem by learning with a regularization scheme in a data dependent hypothesis space and ℓ1-regularizer. The data dependence nature of the kernel-based hypothesis space…
Special Invited Paper - Additive logistic regression: A statistical view of boosting
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data…
Boosting with early stopping: Convergence and consistency
Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a…
Stagewise Lasso
The BLasso algorithm is proposed, which ties the FSF (e-Boosting) algorithm to the Lasso method that minimizes the L1-penalized L2 loss, and provides a class of simple and easy-to-implement algorithms for tracing the regularization or solution paths of penalized minimization problems.
Re-scale boosting for regression and classification
A new boosting strategy, called re-scale boosting (RBoosting), is developed to accelerate the numerical convergence rate and improve the learning performance of boosting; RBoosting is shown to outperform boosting in terms of generalization.
Experiments with a New Boosting Algorithm
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
Prediction-based Termination Rule for Greedy Learning with Massive Data
This paper proposes a new termination rule for OGA via investigating its predictive performance and shows that the proposed method is strongly consistent with an [Formula: see text] convergence rate to the oracle prediction.