Machine Learning

A sample <math>S</math> is <math>\epsilon</math>-representative if for all <math>h \in H</math>, <math>|L_S(h) - L_D(h)| \leq \epsilon</math>.<br>
A hypothesis class <math>H</math> has the uniform convergence property if there exists a function <math>m^{UC}(\epsilon, \delta)</math> such that for every <math>\epsilon, \delta > 0</math>, if we draw an i.i.d. sample <math>S</math> of size at least <math>m^{UC}(\epsilon, \delta)</math>, then with probability at least <math>1-\delta</math>, <math>S</math> is <math>\epsilon</math>-representative.
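This can be made concrete with a small simulation. The following is a minimal sketch (not from this page), using a finite class of threshold classifiers on <math>[0, 1]</math> with an illustrative data distribution and target concept: as the sample size <math>m</math> grows, the worst-case gap between empirical and true risk over the whole class shrinks, i.e. <math>S</math> becomes <math>\epsilon</math>-representative for ever smaller <math>\epsilon</math>.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative assumptions throughout): uniform convergence
# for the finite class of thresholds h_t(x) = 1[x >= t], t on a grid.
# Data: x ~ Uniform(0, 1), label y = 1[x >= 0.5]. The true risk of h_t is
# L_D(h_t) = |t - 0.5| (the probability mass on which h_t and the target
# disagree), so the gap |L_S(h_t) - L_D(h_t)| can be computed exactly.
import numpy as np

rng = np.random.default_rng(0)
thresholds = np.linspace(0.0, 1.0, 101)  # finite hypothesis class

for m in [10, 100, 1000, 10000]:
    x = rng.uniform(0.0, 1.0, size=m)
    y = (x >= 0.5).astype(int)
    # Empirical risk of every h_t on the sample S, and the exact true risk.
    emp = np.array([np.mean((x >= t).astype(int) != y) for t in thresholds])
    true = np.abs(thresholds - 0.5)
    print(f"m={m:6d}  max_t |L_S - L_D| = {np.max(np.abs(emp - true)):.4f}")
</syntaxhighlight>

The printed gap is the worst case over the whole class, not just a single hypothesis, and it decreases with <math>m</math>, which is exactly what uniform convergence promises.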
===VC dimension===
[https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension Wikipedia Page]
====Shattering====
A model <math>f</math> parameterized by <math>\theta</math> is said to shatter a set of points <math>\{x_1, ..., x_n\}</math> if for ''every'' possible assignment of labels to those points there exists a <math>\theta</math> such that <math>f</math> classifies all of them correctly (makes no errors).
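Shattering can be checked by brute force, since a class shatters a point set exactly when every one of the <math>2^n</math> labelings is realizable. Below is a minimal sketch (not from this page) for 2-D linear classifiers <math>\mathrm{sign}(w \cdot x + b)</math>; realizability of a labeling is tested as a linear-programming feasibility problem.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative, not this page's method): does a 2-D linear
# classifier shatter a given point set? A labeling y is realizable iff some
# (w, b) satisfies y_i * (w . x_i + b) >= 1 for all i (an LP feasibility check).
from itertools import product
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    # Variables [w1, w2, b]; constraints -y_i * (w . x_i + b) <= -1.
    A = np.array([[-y * p[0], -y * p[1], -y] for p, y in zip(points, labels)])
    res = linprog(c=[0.0, 0.0, 0.0], A_ub=A, b_ub=-np.ones(len(points)),
                  bounds=[(None, None)] * 3, method="highs")
    return res.status == 0  # feasible => this labeling is realizable

def shatters(points):
    return all(separable(points, labels)
               for labels in product([-1, 1], repeat=len(points)))

print(shatters([(0, 0), (1, 0), (0, 1)]))          # True: 3 points in general position
print(shatters([(0, 0), (1, 1), (1, 0), (0, 1)]))  # False: the XOR labeling fails
</syntaxhighlight>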
====Definition====
Stolen from Wikipedia:<br>
The VC dimension of a model <math>f</math> is the maximum number of points that can be arranged so that <math>f</math> shatters them.
More formally, it is the maximum cardinality <math>D</math> such that ''some'' set of <math>D</math> data points can be shattered by <math>f</math>; not every set of that size needs to be shattered, and if <math>f</math> shatters arbitrarily large sets its VC dimension is infinite. For example, linear classifiers in the plane have VC dimension 3: three non-collinear points can be shattered, but no set of four points can.
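As a fully worked example, the sketch below (again illustrative, not from this page) computes the VC dimension of threshold classifiers <math>h_t(x) = \mathbb{1}[x \geq t]</math> on the real line by brute force: any single point can be shattered, but no pair can, because no threshold labels the smaller point 1 and the larger point 0. The VC dimension of thresholds is therefore 1.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative): VC dimension of thresholds h_t(x) = 1[x >= t].
# A set is shattered iff every 0/1 labeling is realized by some threshold; it
# suffices to try one candidate threshold per gap between the sorted points.
from itertools import product

def realizable(points, labels):
    xs = sorted(points)
    candidates = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    return any(all((x >= t) == bool(y) for x, y in zip(points, labels))
               for t in candidates)

def shattered(points):
    return all(realizable(points, labels)
               for labels in product([0, 1], repeat=len(points)))

print(shattered([0.3]))       # True:  VC dimension >= 1
print(shattered([0.3, 0.7]))  # False: no pair can be shattered, so VC dim = 1
</syntaxhighlight>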