Testmathj
=== Lasso/glmnet, adaptive lasso and FAQs ===
** The larger the <math>\lambda</math>, the more coefficients are driven to exactly zero (think about '''coefficient path''' plots) and thus the simpler (more '''regularized''') the model.
** Strongly correlated covariates receiving similar regression coefficients is referred to as the '''grouping''' effect. From the Wikipedia page: ''"one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does. In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso)"''.
** If <math>\lambda</math> is zero, the fit reduces to ordinary (unpenalized) regression; as <math>\lambda</math> tends to infinity, all coefficients shrink to zero.
** In terms of the bias-variance tradeoff, the larger the <math>\lambda</math>, the higher the bias and the lower the variance of the coefficient estimators.
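The effect of <math>\lambda</math> described above can be checked numerically. A minimal sketch using scikit-learn's <code>Lasso</code> in place of glmnet (its <code>alpha</code> parameter plays the role of <math>\lambda</math>; the simulated data, true coefficients, and noise level are all illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
# assumed true model: only the first 3 of 10 covariates matter
beta = np.array([3.0, 2.0, -1.5] + [0.0] * (p - 3))
y = X @ beta + rng.normal(scale=0.5, size=n)

# refit at increasing regularization strength; sklearn's alpha ~ lambda
for alpha in [0.01, 0.1, 1.0]:
    fit = Lasso(alpha=alpha).fit(X, y)
    n_zero = int(np.sum(fit.coef_ == 0.0))
    print(f"alpha={alpha}: {n_zero} of {p} coefficients exactly zero, "
          f"max |coef| = {np.abs(fit.coef_).max():.3f}")
```

Increasing <code>alpha</code> should both set more coefficients exactly to zero (sparser model) and shrink the surviving ones toward zero, which is the bias-variance tradeoff in the last bullet: heavier shrinkage biases the estimates but lowers their variance.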

Revision as of 22:42, 7 September 2019
