Revision as of 16:58, 31 October 2019
Machine Learning

==Interesting==

==Hyperparameters==
===Batch Size===
A Medium post empirically evaluating the effect of batch size
===Learning Rate===
==Learning Theory==
===PAC Learning===
Probably Approximately Correct (PAC)<br>
A hypothesis class <math>H</math> is PAC learnable if, given <math>0 < \epsilon, \delta < 1</math>, there is a function <math>m(\epsilon, \delta)</math>, polynomial in <math>1/\epsilon</math> and <math>1/\delta</math>, such that for any sample of size at least <math>m(\epsilon, \delta)</math>, with probability at least <math>1-\delta</math> the learned hypothesis has error less than <math>\epsilon</math>.
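As a concrete illustration of the sample-size function <math>m(\epsilon, \delta)</math>, the following sketch computes a standard bound for a finite hypothesis class in the realizable setting, <math>m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)</math>. This specific bound is an assumption not stated in the article; the function name and parameters are illustrative.

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Sample size sufficient for PAC learning a finite hypothesis
    class of size h_size in the realizable setting:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    (Assumed standard bound; not from the article itself.)"""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# Example: |H| = 1000 hypotheses, target error 0.1, failure probability 0.05
m = pac_sample_bound(1000, 0.1, 0.05)
```

Note that the bound grows only logarithmically in <math>|H|</math> and <math>1/\delta</math>, so it is polynomial in <math>1/\epsilon</math> and <math>1/\delta</math> as the definition requires.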