** The larger the <math>\lambda</math>, the more coefficients become exactly zero (think about '''coefficient path''' plots) and thus the simpler (more '''regularized''') the model; see the sketch after this list.
** If <math>\lambda</math> is zero, the problem reduces to ordinary regression; as <math>\lambda</math> goes to infinity, all coefficients shrink to zero.
** In terms of the bias-variance tradeoff, the larger the <math>\lambda</math>, the higher the bias and the lower the variance of the coefficient estimators.
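The limiting behavior above can be checked empirically. Below is a minimal sketch, assuming an L1 (lasso) penalty (consistent with coefficients becoming exactly zero) and synthetic data; scikit-learn's <code>alpha</code> parameter plays the role of <math>\lambda</math> here, up to scikit-learn's scaling convention. As <math>\lambda</math> grows, more coefficients are driven exactly to zero:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data: 10 features, only 3 of which actually drive the response.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

# Sweep lambda (scikit-learn's `alpha`) over several orders of magnitude and
# count the coefficients that the lasso sets exactly to zero.
for lam in [0.01, 0.1, 1.0, 10.0, 100.0]:
    coef = Lasso(alpha=lam, max_iter=10_000).fit(X, y).coef_
    print(f"lambda = {lam:>6}: {np.sum(coef == 0)} of {coef.size} "
          f"coefficients are exactly zero")
</syntaxhighlight>

The printed counts generally grow with <math>\lambda</math>: near zero the fit approaches ordinary regression with no zeroed coefficients, and for large enough <math>\lambda</math> every coefficient is zeroed, matching the two limits in the second bullet.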