Boosting
Reference: Foundations of Machine Learning, Chapter 6.
Idea: Build a strong learner from a set of weak learners.
AdaBoost
Learn a linear combination of the weak learners.
Given a sample S = ((x_1, y_1), ..., (x_m, y_m)) with labels y_i in {-1, +1}:
for i = 1:m
    D_1(i) <- 1/m
for t = 1:T
    h_t <- weak classifier with small weighted error eps_t = sum_i D_t(i) 1[h_t(x_i) != y_i]
    alpha_t <- (1/2) log((1 - eps_t) / eps_t)
    Z_t <- 2[eps_t(1 - eps_t)]^(1/2)    (normalizer, so that D_{t+1} sums to 1)
    for i = 1:m
        D_{t+1}(i) <- D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
g <- sgn(sum_t alpha_t h_t)
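As a concrete illustration, here is a minimal NumPy sketch of this loop, assuming threshold stumps as the weak learners; the stump search and the names stump/adaboost are illustrative choices, not part of the notes.

import numpy as np

def stump(X, y, D):
    # exhaustive search for the threshold stump with the lowest weighted error eps_t
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1.0, -1.0):
                pred = sign * np.where(X[:, j] <= thr, 1.0, -1.0)
                eps = D[pred != y].sum()
                if best is None or eps < best[0]:
                    best = (eps, j, thr, sign)
    eps, j, thr, sign = best
    return eps, lambda Z: sign * np.where(Z[:, j] <= thr, 1.0, -1.0)

def adaboost(X, y, T):
    m = len(y)
    D = np.full(m, 1.0 / m)            # D_1(i) = 1/m
    hs, alphas = [], []
    for t in range(T):
        eps, h = stump(X, y, D)
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))   # guard against eps_t = 0
        D = D * np.exp(-alpha * y * h(X))
        D = D / D.sum()                # dividing by the sum is the same as dividing by Z_t
        hs.append(h)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hs)))

Usage: g = adaboost(X_train, y_train, T=50) returns a function, and g(X_test) gives predictions in {-1, +1}.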
Bagging
Reference: Bagging Predictors (Breiman, 1996).
Bagging is short for "bootstrap aggregation."
Idea: Given a sample S, draw B bootstrap samples S_1, ..., S_B by sampling from S with replacement.
Then build B classifiers, one from each bootstrap sample.
The new classifier aggregates them by majority vote (or, for regression, by averaging), as sketched below.
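A minimal sketch of this procedure, assuming scikit-learn decision trees as the base classifiers and labels in {-1, +1}; the base learner choice is an assumption here, since bagging works with any classifier.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging(X, y, B, seed=0):
    # train B trees, each on a bootstrap resample of (X, y)
    rng = np.random.default_rng(seed)
    m = len(y)
    trees = []
    for b in range(B):
        idx = rng.integers(0, m, size=m)   # sample m indices with replacement
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    # aggregate by majority vote, assuming labels in {-1, +1}
    return lambda Z: np.sign(sum(t.predict(Z) for t in trees))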
References
Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. MIT Press.
Breiman, L. "Bagging Predictors." Machine Learning 24, 123-140 (1996).