===Uniform Convergence===
If for every hypothesis <math>h \in H</math>, <math>|L_S(h)-L_D(h)| \leq \epsilon</math>, then the training set <math>S</math> is called <math>\epsilon</math>-representative.<br>
Then, if <math>S</math> is <math>\epsilon/2</math>-representative and <math>h_S</math> is the ERM hypothesis (a minimizer of <math>L_S</math> over <math>H</math>), for any <math>h_D \in H</math>, in particular a minimizer of <math>L_D</math>:<br>
<math>
L_D(h_S)
\leq L_S(h_S) + \epsilon / 2
\leq L_S(h_D) + \epsilon / 2
\leq L_D(h_D) + \epsilon
</math>.<br>
The first and last inequalities use <math>\epsilon/2</math>-representativeness; the middle one holds because <math>h_S</math> minimizes the empirical risk. Hence the ERM hypothesis is within <math>\epsilon</math> of the best hypothesis in <math>H</math>.<br>
A hypothesis class <math>H</math> has the uniform convergence property if there exists a function <math>m^{UC}(\epsilon, \delta)</math> such that for every <math>\epsilon, \delta \in (0,1)</math> and every distribution <math>D</math>, drawing an i.i.d. sample <math>S</math> of size <math>m \geq m^{UC}(\epsilon, \delta)</math> guarantees, with probability at least <math>1-\delta</math>, that <math>S</math> is <math>\epsilon</math>-representative.
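The definition can be checked empirically for a small finite class. The sketch below (an illustration, not part of the source text) uses an assumed setup: a finite class of threshold classifiers <math>h_t(x) = \mathbb{1}[x \geq t]</math>, data with <math>x \sim \text{Uniform}[0,1]</math> and 10% label noise, and 0-1 loss. It estimates <math>L_D</math> on a large held-out sample, then measures how often a training sample of size <math>m</math> is <math>\epsilon</math>-representative, i.e. <math>\max_{h \in H} |L_S(h) - L_D(h)| \leq \epsilon</math>.

```python
import random

random.seed(0)

# Finite hypothesis class H: threshold classifiers h_t(x) = 1[x >= t].
thresholds = [i / 10 for i in range(11)]

def predict(t, x):
    return 1 if x >= t else 0

def draw(n):
    # Assumed distribution D: x ~ Uniform[0,1], true label 1[x >= 0.35],
    # flipped with probability 0.1 (label noise).
    sample = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= 0.35 else 0
        if random.random() < 0.1:
            y = 1 - y
        sample.append((x, y))
    return sample

def empirical_loss(t, data):
    # 0-1 loss of h_t on a dataset.
    return sum(1 for x, y in data if predict(t, x) != y) / len(data)

# Approximate the true risk L_D(h) with a large held-out sample.
big_sample = draw(100_000)
L_D = {t: empirical_loss(t, big_sample) for t in thresholds}

eps, m, trials = 0.1, 200, 200
representative = 0
for _ in range(trials):
    S = draw(m)
    # S is eps-representative iff the worst-case gap over H is at most eps.
    if max(abs(empirical_loss(t, S) - L_D[t]) for t in thresholds) <= eps:
        representative += 1

fraction = representative / trials
print(f"fraction of samples that were {eps}-representative: {fraction:.2f}")
```

For a finite class, Hoeffding's inequality plus a union bound over the 11 hypotheses already guarantees that this fraction approaches 1 as <math>m</math> grows, which is the standard route to showing finite classes have uniform convergence.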