Ensemble Learning
Latest revision as of 15:42, 9 December 2019
Boosting
Reference: Foundations of Machine Learning, Chapter 6
Idea: Build a strong learner from a set of weak learners.
AdaBoost
Learn a linear combination of our weak learners.
Given a sample of size m:
for i = 1:m
    D_1(i) = 1/m
for t = 1:T
    h_t <- weak learner with weighted error eps_t under distribution D_t
    alpha_t <- (1/2) log((1 - eps_t)/eps_t)
    Z_t <- 2[eps_t(1 - eps_t)]^(1/2)    (normalization factor)
    for i = 1:m
        D_{t+1}(i) <- D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
g <- sum_t alpha_t h_t
Final classifier: sign(g)
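The loop above can be sketched in NumPy. Decision stumps are used as the weak learners here; the stump search (`best_stump`, `stump_predict`) is an illustrative choice, not something fixed by the text, and normalizing D by its sum plays the role of dividing by Z_t.

```python
import numpy as np

def best_stump(X, y, D):
    # Exhaustive search over (feature, threshold, sign) for the stump
    # minimizing weighted error under the distribution D.
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = D[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best

def stump_predict(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)

def adaboost(X, y, T=20):
    """AdaBoost with decision stumps; y must take values in {-1, +1}.
    Returns the list of (alpha_t, h_t) pairs."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                    # D_1(i) = 1/m
    ensemble = []
    for _ in range(T):
        stump = best_stump(X, y, D)            # weak learner h_t under D_t
        pred = stump_predict(stump, X)
        eps = D[pred != y].sum()               # weighted error eps_t
        eps = min(max(eps, 1e-10), 1 - 1e-10)  # guard against log(0)
        alpha = 0.5 * np.log((1 - eps) / eps)  # alpha_t
        # D_{t+1}(i) = D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
        D = D * np.exp(-alpha * y * pred)
        D /= D.sum()                           # normalizing = dividing by Z_t
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # g(x) = sum_t alpha_t h_t(x); the final label is sign(g(x)).
    g = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(g)
```

On a one-dimensional separable toy sample, a few rounds suffice; on harder data the same loop keeps reweighting the points the current stumps get wrong.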
Bagging
Bagging Predictors (https://link.springer.com/article/10.1023/A:1018054314350)
Bootstrap aggregation
Idea: Given a sample S, bootstrap from the sample to get m samples S_1, ..., S_m.
Then build m classifiers from those samples.
Your new classifier combines those m classifiers: typically a majority vote for classification, or an average for regression.
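A small Python sketch of the procedure above. The base learner `train_stump` (a 1-D threshold classifier) and the majority-vote combination are illustrative stand-ins; bagging works with any base learner plugged into `bag`.

```python
import random
from collections import Counter

def bootstrap(S, rng):
    # Draw |S| points from S with replacement.
    return [rng.choice(S) for _ in S]

def train_stump(sample):
    # Toy base learner: best threshold classifier on 1-D labeled points (x, y).
    best, best_err = None, float("inf")
    for thr in sorted({x for x, _ in sample}):
        for lo, hi in ((-1, 1), (1, -1)):
            err = sum((lo if x <= thr else hi) != y for x, y in sample)
            if err < best_err:
                best_err, best = err, (thr, lo, hi)
    thr, lo, hi = best
    return lambda x: lo if x <= thr else hi

def bag(S, train, m=15, seed=0):
    # Build m classifiers, one per bootstrap sample S_1, ..., S_m.
    rng = random.Random(seed)
    return [train(bootstrap(S, rng)) for _ in range(m)]

def predict(classifiers, x):
    # Combine by majority vote (for regression, average the outputs instead).
    votes = Counter(h(x) for h in classifiers)
    return votes.most_common(1)[0][0]
```

Using an odd m avoids ties in the vote; the fixed seed only makes the bootstrap draws reproducible.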