Machine Learning encompasses a large set of different algorithms, and many of them suffer from high bias or high variance. Ensemble Learning aims to reduce bias and/or variance by combining weak learners into stronger ones, using methods such as bagging, boosting and stacking.
We first revisit the Bias-Variance Tradeoff and use it to motivate how Ensemble Learning addresses both sources of error. We then discuss Bootstrap Aggregating (bagging), its role in reducing variance, and how it is implemented in Random Forests. Moving on to Boosting, we explain how it tackles high bias and discuss implementations such as AdaBoost and Gradient Boosting. We also show how model performance can be improved with Stacking, and when this generally works best. We conclude with an overview of the techniques and their advantages and disadvantages.
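To give a flavour of what these methods look like in practice, here is a minimal sketch that fits a bagging, two boosting and a stacking ensemble on the same data. It assumes scikit-learn and a synthetic dataset; it is an illustration under those assumptions, not part of the training material itself.

```python
# Minimal sketch: bagging (Random Forest), boosting (AdaBoost, Gradient Boosting)
# and stacking on a synthetic classification task. Assumes scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (
    RandomForestClassifier,        # bagging of decision trees
    AdaBoostClassifier,            # boosting: reweights misclassified examples
    GradientBoostingClassifier,    # boosting: sequentially fits residual errors
    StackingClassifier,            # stacking: meta-learner on base-model predictions
)
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "adaboost": AdaBoostClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "stacking": StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```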
After covering the theory, we apply these methods in practice in a lab exercise, deepening your understanding of all three approaches: Bagging, Boosting and Stacking.
The training includes theory, demos and hands-on exercises. After this training you will have gained knowledge about:
- Combining algorithms
- Bias-Variance Tradeoff
- Bagging (bootstrap aggregating)
- Majority Voting (see the sketch after this list)
- Random Forests
- AdaBoost & Gradient Boosting
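As a small taste of the majority-voting topic above, the sketch below combines three different learners and lets the ensemble predict the class that receives the most votes. It again assumes scikit-learn and synthetic data; the specific base models are an illustrative choice, not prescribed by the training.

```python
# Minimal sketch of hard majority voting over three different base learners.
# Assumes scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Each base learner votes for a class; the ensemble returns the majority vote.
voter = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",
)

print("Cross-validated accuracy:", cross_val_score(voter, X, y, cv=5).mean())
```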