Testmathj
- The larger the \(\lambda\), the more coefficients are driven to zero (think about coefficient path plots) and thus the simpler (more regularized) the model; a short numerical sketch follows this list.
- If \(\lambda\) is zero, the problem reduces to ordinary (unpenalized) regression; as \(\lambda\) goes to infinity, all coefficients shrink to zero.
- In terms of the bias-variance tradeoff, the larger the \(\lambda\), the higher the bias and the lower the variance of the coefficient estimators.
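As a concrete illustration of the first point, here is a minimal sketch (not part of the original page) using scikit-learn's Lasso, where the penalty weight \(\lambda\) is exposed as the `alpha` parameter; the synthetic data set and the particular \(\lambda\) grid are illustrative assumptions.

```python
# Sketch: larger lambda (sklearn's `alpha`) zeroes out more lasso coefficients.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data: 20 features, only 5 of which carry signal.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

for lam in [0.01, 0.1, 1.0, 10.0, 100.0]:
    fit = Lasso(alpha=lam, max_iter=50_000).fit(X, y)
    n_nonzero = int(np.sum(fit.coef_ != 0))
    print(f"lambda={lam:>6}: {n_nonzero} nonzero coefficients")
```

With this setup, the printed count of nonzero coefficients typically shrinks as `alpha` grows, mirroring the coefficient-path behaviour described in the first bullet.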