Online Gradient Boosting

@inproceedings{Beygelzimer2015OnlineGB,
  title={Online Gradient Boosting},
  author={Alina Beygelzimer and Elad Hazan and Satyen Kale and Haipeng Luo},
  booktitle={NIPS},
  year={2015}
}
Online Gradient Boosting

Input: sequence of examples (x_t, y_t) for t = 1, ..., T; step-size η; number of weak learners N.
Initialize σ_i = 0 for all i.
For t = 1, 2, ..., T:
    Obtain predictions of the weak learners ŷ_t^i for all i.
    Set ŷ_t = 0.
    For i = 1, 2, ..., N:
        ŷ_t ← (1 − σ_i) ŷ_t + η ŷ_t^i.
    Predict ŷ_t and see the true label y_t.
    Set pseudo-label ỹ_t = y_t.
    For i = 1, 2, ..., N:
        Pass (x_t, ỹ_t) to weak learner i with step-size η.
        Compute new …
Obtain final prediction ŷ_t = ∑_{i=1}^{N} η ŷ_t^i for all t.
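
For concreteness, a minimal Python sketch of the loop above under simplifying assumptions: squared loss (so the pseudo-label passed to learner i is the residual of the partial prediction built from learners 1, ..., i−1), σ_i = 0 throughout, and simple online linear regressors as weak learners. The names OnlineLinearLearner and OnlineGradientBooster are illustrative, not from the paper, and the pseudo-label rule here is a squared-loss instance of the general gradient-based rule.

import numpy as np


class OnlineLinearLearner:
    # Hypothetical weak online learner: a linear model trained by online
    # gradient descent on squared loss against the pseudo-labels it receives.
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, pseudo_label):
        # Gradient of 0.5 * (w.x - pseudo_label)^2 with respect to w.
        grad = (self.w @ x - pseudo_label) * x
        self.w -= self.lr * grad


class OnlineGradientBooster:
    # Sketch of the boosting loop summarized above, with sigma_i = 0, so the
    # final prediction reduces to sum_i eta * y_hat_i.
    def __init__(self, dim, n_learners=10, eta=0.1):
        self.learners = [OnlineLinearLearner(dim) for _ in range(n_learners)]
        self.eta = eta

    def predict(self, x):
        # y_hat_t = sum_{i=1}^N eta * y_hat_t^i
        return sum(self.eta * h.predict(x) for h in self.learners)

    def update(self, x, y):
        # Pseudo-label for learner i: squared-loss residual of the partial
        # prediction built from learners 1..i-1 (predictions taken before
        # any update on this example, matching the predict-then-update order).
        preds = [h.predict(x) for h in self.learners]
        partial = 0.0
        for h, p in zip(self.learners, preds):
            residual = y - partial
            h.update(x, residual)
            partial += self.eta * p


# Usage: stream examples one at a time (predict, observe label, update).
rng = np.random.default_rng(0)
booster = OnlineGradientBooster(dim=5, n_learners=20, eta=0.1)
w_true = rng.normal(size=5)
for t in range(1000):
    x_t = rng.normal(size=5)
    y_t = float(w_true @ x_t)
    y_hat = booster.predict(x_t)   # predict y_hat_t
    booster.update(x_t, y_t)       # then see y_t and pass pseudo-labels to the weak learners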