Published: Jan. 24, 2017
Event Description:
Satyen Kale, Google Research, NYC

Online Boosting Algorithms

We initiate the study of boosting in the online setting, where the task is to convert a "weak" online learner into a "strong" online learner. The notions of weak and strong online learners directly generalize the corresponding notions from standard batch boosting. For the classification setting, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority, and we prove that it is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is adaptive and parameter-free, albeit not optimal.
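
To make the online protocol concrete, here is a minimal, hypothetical sketch of boosting-by-vote in the online setting. The `ThresholdStump` weak learner and the simple margin-based weighting are illustrative stand-ins only, not the boost-by-majority algorithm from the talk: on each round the booster predicts before seeing the label, then passes the labeled example to each weak learner with a weight that shrinks when earlier learners already classify it correctly.

```python
class ThresholdStump:
    """Toy online weak learner: a one-feature threshold rule that
    nudges its threshold toward each mistake (hypothetical stand-in
    for any online weak learning algorithm)."""

    def __init__(self, feature, step=0.1):
        self.feature = feature
        self.threshold = 0.0
        self.step = step

    def predict(self, x):
        return 1 if x[self.feature] > self.threshold else -1

    def update(self, x, y, weight=1.0):
        # On a mistake, shift the threshold so this example is
        # classified correctly more often, scaled by the booster weight.
        if self.predict(x) != y:
            self.threshold -= self.step * weight * y


class OnlineMajorityBooster:
    """Combines weak learners by majority vote; each learner's update
    weight shrinks when earlier learners already got the example right
    (a crude stand-in for potential-based boosting weights)."""

    def __init__(self, learners):
        self.learners = learners

    def predict(self, x):
        vote = sum(wl.predict(x) for wl in self.learners)
        return 1 if vote >= 0 else -1

    def update(self, x, y):
        margin = 0  # correct-minus-incorrect votes among earlier learners
        for wl in self.learners:
            weight = 1.0 / (1.0 + max(margin, 0))
            wl.update(x, y, weight)
            margin += 1 if wl.predict(x) == y else -1


# Online protocol: predict on each arriving example, then learn from it.
booster = OnlineMajorityBooster(
    [ThresholdStump(0), ThresholdStump(0), ThresholdStump(1)]
)
for i in range(500):
    x = [(i % 100) / 100.0, ((i * 37) % 100) / 100.0]
    y = 1 if x[0] > 0.5 else -1
    booster.predict(x)   # prediction is made before the label is revealed
    booster.update(x, y)
```

On this stream the feature-0 stumps settle near the true decision boundary at 0.5, so the majority vote tracks the labels even though the feature-1 stump carries no signal.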
For the regression setting, we give an online gradient boosting algorithm which converts a weak online learning algorithm for a base class of regressors into a strong online learning algorithm which works for the linear span of the base class. We also give a simpler boosting algorithm for regression that obtains a strong online learning algorithm which works for the convex hull of the base class, and prove its optimality.
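
For squared loss, the residual-fitting idea behind gradient boosting carries over to the online protocol in a simple form. The sketch below is an illustrative simplification, not the talk's exact algorithm: the ensemble predicts the sum of the weak predictions, and once the label arrives, each weak learner (here a hypothetical one-dimensional linear regressor) is trained on the residual left by the partial ensemble before it, i.e. the negative gradient of the squared loss at the partial prediction.

```python
class OnlineLinearRegressor:
    """Toy weak online regressor: y ~ w * x, trained by online
    gradient descent on squared loss (hypothetical base class)."""

    def __init__(self, lr=0.05):
        self.w = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x

    def update(self, x, target):
        # Gradient step on (prediction - target)^2 / 2.
        self.w -= self.lr * (self.predict(x) - target) * x


class OnlineGradientBooster:
    """The ensemble predicts the sum of the weak predictions (an
    element of the linear span of the base class); each weak learner
    is trained on the residual left by the partial ensemble before
    it -- the negative gradient of the squared loss there."""

    def __init__(self, learners):
        self.learners = learners

    def predict(self, x):
        return sum(wl.predict(x) for wl in self.learners)

    def update(self, x, y):
        partial = 0.0
        for wl in self.learners:
            residual = y - partial      # negative gradient of squared loss
            wl.update(x, residual)
            partial += wl.predict(x)


# Online protocol on a stream with y = 3x.
booster = OnlineGradientBooster([OnlineLinearRegressor() for _ in range(3)])
for i in range(3000):
    x = (i % 100) / 100.0
    booster.predict(x)          # prediction precedes the label
    booster.update(x, 3.0 * x)
```

Here the first weak learner absorbs most of the target, later learners fit the shrinking residual, and the ensemble prediction approaches 3x.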
Location Information:
Main Campus - Engineering Classroom Wing
1111 Engineering Dr.
Boulder, CO 
Room: 257 (Newton Lab)
Contact Information:
Name: Ian Cunningham
Phone: 303-492-4668
Email: amassist@colorado.edu