| = Statisticians =
| |
| * [https://en.wikipedia.org/wiki/Karl_Pearson Karl Pearson] (1857-1936): chi-square, p-value, PCA
| |
| * [https://en.wikipedia.org/wiki/William_Sealy_Gosset William Sealy Gosset] (1876-1937): Student's t
| |
| * [https://en.wikipedia.org/wiki/Ronald_Fisher Ronald Fisher] (1890-1962): ANOVA
| |
| * [https://en.wikipedia.org/wiki/Egon_Pearson Egon Pearson] (1895-1980): son of Karl Pearson
| |
| * [https://en.wikipedia.org/wiki/Jerzy_Neyman Jerzy Neyman] (1894-1981): type 1 error
| |
|
| |
|
| = Statistics for biologists =
| |
| http://www.nature.com/collections/qghhqm
| |
|
| |
| = Transform sample values to their percentiles =
| |
| https://stackoverflow.com/questions/21219447/calculating-percentile-of-dataset-column
| |
| <syntaxhighlight lang='rsplus'>
| |
| set.seed(1234)
| |
| x <- rnorm(10)
| |
| x
| |
| # [1] -1.2070657 0.2774292 1.0844412 -2.3456977 0.4291247 0.5060559
| |
| # [7] -0.5747400 -0.5466319 -0.5644520 -0.8900378
| |
| ecdf(x)(x)
| |
| # [1] 0.2 0.7 1.0 0.1 0.8 0.9 0.4 0.6 0.5 0.3
| |
|
| |
| rank(x)
| |
| # [1] 2 7 10 1 8 9 4 6 5 3
| |
| </syntaxhighlight>
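| |
| The two results are closely related: when there are no ties, '''ecdf(x)(x)''' equals '''rank(x)/length(x)'''. A quick check using the data simulated above:
| <syntaxhighlight lang='rsplus'>
| all.equal(ecdf(x)(x), rank(x)/length(x))
| # [1] TRUE   # holds exactly here because rnorm() produces no tied values
| </syntaxhighlight>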
| |
|
| |
| = Box (box-and-whisker) plot in R =
| |
| See
| |
| * https://en.wikipedia.org/wiki/Box_plot
| |
| * https://owi.usgs.gov/blog/boxplots/ (ggplot2 is used)
| |
| * https://flowingdata.com/2008/02/15/how-to-read-and-use-a-box-and-whisker-plot/
| |
| * [https://en.wikipedia.org/wiki/Quartile Quartile] from Wikipedia. The quartiles returned by R are the same as those given by Method 2 described in Wikipedia.
| |
|
| |
| An example for a graphical explanation.
| |
| <syntaxhighlight lang='rsplus'>
| |
| > x=c(0,4,15, 1, 6, 3, 20, 5, 8, 1, 3)
| |
| > summary(x)
| |
| Min. 1st Qu. Median Mean 3rd Qu. Max.
| |
| 0 2 4 6 7 20
| |
| > sort(x)
| |
| [1] 0 1 1 3 3 4 5 6 8 15 20
| |
| > boxplot(x, col = 'grey')
| |
|
| |
| # https://en.wikipedia.org/wiki/Quartile#Example_1
| |
| > summary(c(6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49))
| |
| Min. 1st Qu. Median Mean 3rd Qu. Max.
| |
| 6.00 25.50 40.00 33.18 42.50 49.00
| |
| </syntaxhighlight>
| |
| [[:File:Boxplot.svg]]
| |
|
| |
| * The lower and upper edges of the box are determined by the first and third '''quartiles''' (2 and 7 in the above example).
| |
| ** 2 = median(c(0, 1, 1, 3, 3, 4)) = (1+3)/2
| |
| ** 7 = median(c(4, 5, 6, 8, 15, 20)) = (6+8)/2
| |
| ** IQR = 7 - 2 = 5
| |
| * The thick dark horizontal line is the '''median''' (4 in the example).
| |
| * '''Outliers''' (the empty circles in the plot) are observations
| |
| ** larger than the 3rd quartile + 1.5 * IQR (7 + 1.5*5 = 14.5), or
| |
| ** smaller than the 1st quartile - 1.5 * IQR (2 - 1.5*5 = -5.5).
| |
| ** Note that ''the cutoffs are not shown in the Box plot''.
| |
| * Whisker (defined using the cutoffs used to define outliers)
| |
| ** '''Upper whisker''' is defined by '''the largest "data" below 3rd quartile + 1.5 * IQR''' (8 in this example), and
| |
| ** '''Lower whisker''' is defined by '''the smallest "data" greater than 1st quartile - 1.5 * IQR''' (0 in this example).
| |
| ** See another example below where we can see the whiskers fall on observations.
| |
|
| |
| Note the [http://en.wikipedia.org/wiki/Box_plot wikipedia] lists several possible definitions of a whisker. R uses the 2nd method (Tukey boxplot) to define whiskers.
| |
|
| |
| == Create boxplots from a list object ==
| |
| Normally we use a vector to create a single boxplot or a formula on a data to create boxplots.
| |
|
| |
| But we can also use [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/split split()] to create a list and then make boxplots.
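| |
| For example, a minimal sketch using the built-in '''chickwts''' data; boxplot() accepts the list returned by split() directly:
| <syntaxhighlight lang='rsplus'>
| # split() turns a vector into a list with one component per group
| wt <- split(chickwts$weight, chickwts$feed)
| boxplot(wt, col = "grey", ylab = "Weight (g)")
| # equivalent to the formula interface
| boxplot(weight ~ feed, data = chickwts, col = "grey", ylab = "Weight (g)")
| </syntaxhighlight>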
| |
|
| |
| == Dot-box plot ==
| |
| * http://civilstat.com/2012/09/the-grammar-of-graphics-notes-on-first-reading/
| |
| * http://www.r-graph-gallery.com/89-box-and-scatter-plot-with-ggplot2/
| |
| * http://www.sthda.com/english/wiki/ggplot2-box-plot-quick-start-guide-r-software-and-data-visualization
| |
| * [https://designdatadecisions.wordpress.com/2015/06/09/graphs-in-r-overlaying-data-summaries-in-dotplots/ Graphs in R – Overlaying Data Summaries in Dotplots]. Note that the boxplot may cover the dots when the plot is saved to an svg or png file, so an alternative is to change the drawing order: <syntaxhighlight lang='rsplus'>
| |
| par(cex.main=0.9,cex.lab=0.8,font.lab=2,cex.axis=0.8,font.axis=2,col.axis="grey50")
| |
| boxplot(weight ~ feed, data = chickwts, range=0, whisklty = 0, staplelty = 0)
| |
| par(new = TRUE)
| |
| stripchart(weight ~ feed, data = chickwts, xlim=c(0.5,6.5), vertical=TRUE, method="stack", offset=0.8, pch=19,
| |
| main = "Chicken weights after six weeks", xlab = "Feed Type", ylab = "Weight (g)")
| |
| </syntaxhighlight>
| |
|
| |
| [[:File:Boxdot.svg]]
| |
|
| |
| == Other boxplots ==
| |
|
| |
| [[:File:Lotsboxplot.png]]
| |
|
| |
| = stem and leaf plot =
| |
| [https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/stem.html stem()]. See [http://www.r-tutor.com/elementary-statistics/quantitative-data/stem-and-leaf-plot R Tutorial].
| |
|
| |
| Note that a stem plot can reveal extreme outliers: in the first example below, a single very large value stands out on the last stem while the remaining observations collapse into the first stem.
| |
| <syntaxhighlight lang='rsplus'>
| |
| > stem(x)
| |
|
| |
| The decimal point is 10 digit(s) to the right of the |
| |
|
| |
| 0 | 00000000000000000000000000000000000000000000000000000000000000000000+419
| |
| 1 |
| |
| 2 |
| |
| 3 |
| |
| 4 |
| |
| 5 |
| |
| 6 |
| |
| 7 |
| |
| 8 |
| |
| 9 |
| |
| 10 |
| |
| 11 |
| |
| 12 | 9
| |
|
| |
| > max(x)
| |
| [1] 129243100275
| |
| > max(x)/1e10
| |
| [1] 12.92431
| |
|
| |
| > stem(y)
| |
|
| |
| The decimal point is at the |
| |
|
| |
| 0 | 014478
| |
| 1 | 0
| |
| 2 | 1
| |
| 3 | 9
| |
| 4 | 8
| |
|
| |
| > y
| |
| [1] 3.8667356428 0.0001762708 0.7993462430 0.4181079732 0.9541728562
| |
| [6] 4.7791262101 0.6899313108 2.1381289177 0.0541736818 0.3868776083
| |
|
| |
| > set.seed(1234)
| |
| > z <- rnorm(10)*10
| |
| > z
| |
| [1] -12.070657 2.774292 10.844412 -23.456977 4.291247 5.060559
| |
| [7] -5.747400 -5.466319 -5.644520 -8.900378
| |
| > stem(z)
| |
|
| |
| The decimal point is 1 digit(s) to the right of the |
| |
|
| |
| -2 | 3
| |
| -1 | 2
| |
| -0 | 9665
| |
| 0 | 345
| |
| 1 | 1
| |
| </syntaxhighlight>
| |
|
| |
| = Box-Cox transformation =
| |
| * [https://en.wikipedia.org/wiki/Power_transform#Box%E2%80%93Cox_transformation Power transformation]
| |
| * [http://denishaine.wordpress.com/2013/03/11/veterinary-epidemiologic-research-linear-regression-part-3-box-cox-and-matrix-representation/ Finding transformation for normal distribution]
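| |
| A minimal sketch with '''MASS::boxcox()''', which profiles the log-likelihood over a grid of lambda values for a fitted lm object (the built-in trees data and the lambda grid are arbitrary choices):
| <syntaxhighlight lang='rsplus'>
| library(MASS)
| fit <- lm(Volume ~ log(Height) + log(Girth), data = trees)
| bc <- boxcox(fit, lambda = seq(-0.5, 1, 0.05))  # draws the profile log-likelihood
| bc$x[which.max(bc$y)]                           # lambda maximizing the likelihood
| # then refit the model with the response transformed by the chosen lambda
| </syntaxhighlight>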
| |
|
| |
| = the Holy Trinity (LRT, Wald, Score tests) =
| |
| * https://en.wikipedia.org/wiki/Likelihood_function which includes '''profile likelihood''' and '''partial likelihood'''
| |
| * [http://data.princeton.edu/wws509/notes/a1.pdf Review of the likelihood theory]
| |
| * [http://www.tandfonline.com/doi/full/10.1080/00031305.2014.955212#abstract?ai=rv&mi=3be122&af=R The “Three Plus One” Likelihood-Based Test Statistics: Unified Geometrical and Graphical Interpretations]
| |
| * [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.
| |
| ** Score test is step-up: it is typically used in forward steps to screen covariates not yet included in a model for their ability to improve the model.
| |
| ** Wald test is step-down: it starts at the full model and evaluates the significance of a variable by comparing the ratio of its estimate to its standard error with an appropriate t distribution (for linear models) or the standard normal distribution (for logistic or Cox regression).
| |
| ** Likelihood ratio tests provide the best control over nuisance parameters by maximizing the likelihood over them both in H0 model and H1 model. In particular, if several coefficients are being tested simultaneously, LRTs for model comparison are preferred over Wald or score tests.
| |
| * R packages
| |
| ** lmtest package, [https://www.rdocumentation.org/packages/lmtest/versions/0.9-37/topics/waldtest waldtest()] and [https://www.rdocumentation.org/packages/lmtest/versions/0.9-37/topics/lrtest lrtest()].
| |
|
| |
| = Don't invert that matrix =
| |
| * http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
| |
| * http://civilstat.com/2015/07/dont-invert-that-matrix-why-and-how/
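| |
| In R terms, the advice is: to solve A x = b, call solve(A, b) (or solve via a factorization) instead of forming solve(A) and multiplying by b. A small sketch:
| <syntaxhighlight lang='rsplus'>
| set.seed(1)
| n <- 500
| A <- crossprod(matrix(rnorm(n * n), n))   # symmetric positive definite
| b <- rnorm(n)
| x1 <- solve(A) %*% b                      # explicit inverse: slower, less accurate
| x2 <- solve(A, b)                         # solve the linear system directly
| R <- chol(A)                              # or reuse a Cholesky factor, A = R'R
| x3 <- backsolve(R, forwardsolve(t(R), b))
| all.equal(c(x1), x2); all.equal(x2, x3)   # all three agree numerically
| </syntaxhighlight>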
| |
|
| |
| == Different matrix decompositions/factorizations ==
| |
|
| |
| * [https://en.wikipedia.org/wiki/QR_decomposition QR decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/qr qr()]
| |
| * [https://en.wikipedia.org/wiki/LU_decomposition LU decomposition], [https://www.rdocumentation.org/packages/Matrix/versions/1.2-14/topics/lu lu()] from the 'Matrix' package
| |
| * [https://en.wikipedia.org/wiki/Cholesky_decomposition Cholesky decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/chol chol()]
| |
| * [https://en.wikipedia.org/wiki/Singular-value_decomposition Singular value decomposition], [https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/svd svd()]
| |
|
| |
| <syntaxhighlight lang='rsplus'>
| |
| set.seed(1234)
| |
| x <- matrix(rnorm(10*2), nr= 10)
| |
| cmat <- cov(x); cmat
| |
| # [,1] [,2]
| |
| # [1,] 0.9915928 -0.1862983
| |
| # [2,] -0.1862983 1.1392095
| |
|
| |
| # cholesky decom
| |
| d1 <- chol(cmat)
| |
| t(d1) %*% d1 # equal to cmat
| |
| d1 # upper triangle
| |
| # [,1] [,2]
| |
| # [1,] 0.9957875 -0.1870864
| |
| # [2,] 0.0000000 1.0508131
| |
|
| |
| # svd
| |
| d2 <- svd(cmat)
| |
| d2$u %*% diag(d2$d) %*% t(d2$v) # equal to cmat
| |
| d2$u %*% diag(sqrt(d2$d))
| |
| # [,1] [,2]
| |
| # [1,] -0.6322816 0.7692937
| |
| # [2,] 0.9305953 0.5226872
| |
| </syntaxhighlight>
| |
|
| |
| = Linear Regression =
| |
| [https://leanpub.com/regmods Regression Models for Data Science in R] by Brian Caffo
| |
|
| |
| Comic https://xkcd.com/1725/
| |
|
| |
| == Different models (in R) ==
| |
| http://www.quantide.com/raccoon-ch-1-introduction-to-linear-models-with-r/
| |
|
| |
| == dummy.coef.lm() in R ==
| |
| Extracts coefficients in terms of the original levels of the coefficients rather than the coded variables.
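| |
| A minimal sketch with the built-in '''chickwts''' data; compare the coded coefficients from coef() with the per-level effects from dummy.coef():
| <syntaxhighlight lang='rsplus'>
| fit <- lm(weight ~ feed, data = chickwts)
| coef(fit)        # intercept plus contrasts relative to the reference level
| dummy.coef(fit)  # intercept plus one effect for every level of feed
| </syntaxhighlight>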
| |
|
| |
| == model.matrix, design matrix ==
| |
| [https://github.com/csoneson/ExploreModelMatrix ExploreModelMatrix]: Explore design matrices interactively with R/Shiny
| |
|
| |
| == Contrasts in linear regression ==
| |
| * Page 147 of Modern Applied Statistics with S (4th ed)
| |
| * https://biologyforfun.wordpress.com/2015/01/13/using-and-interpreting-different-contrasts-in-linear-models-in-r/ This explains the meanings of 'treatment', 'helmert' and 'sum' contrasts.
| |
| * [http://rstudio-pubs-static.s3.amazonaws.com/65059_586f394d8eb84f84b1baaf56ffb6b47f.html A (sort of) Complete Guide to Contrasts in R] by Rose Maier <syntaxhighlight lang='rsplus'>
| |
| mat
| |
|
| |
| ## constant NLvMH NvL MvH
| |
| ## [1,] 1 -0.5 0.5 0.0
| |
| ## [2,] 1 -0.5 -0.5 0.0
| |
| ## [3,] 1 0.5 0.0 0.5
| |
| ## [4,] 1 0.5 0.0 -0.5
| |
| mat <- mat[ , -1]
| |
|
| |
| model7 <- lm(y ~ dose, data=data, contrasts=list(dose=mat) )
| |
| summary(model7)
| |
|
| |
| ## Coefficients:
| |
| ## Estimate Std. Error t value Pr(>|t|)
| |
| ## (Intercept) 118.578 1.076 110.187 < 2e-16 ***
| |
| ## doseNLvMH 3.179 2.152 1.477 0.14215
| |
| ## doseNvL -8.723 3.044 -2.866 0.00489 **
| |
| ## doseMvH 13.232 3.044 4.347 2.84e-05 ***
| |
|
| |
| # double check your contrasts
| |
| attributes(model7$qr$qr)$contrasts
| |
| ## $dose
| |
| ## NLvMH NvL MvH
| |
| ## None -0.5 0.5 0.0
| |
| ## Low -0.5 -0.5 0.0
| |
| ## Med 0.5 0.0 0.5
| |
| ## High 0.5 0.0 -0.5
| |
|
| |
| library(dplyr)
| |
| dose.means <- summarize(group_by(data, dose), y.mean=mean(y))
| |
| dose.means
| |
| ## Source: local data frame [4 x 2]
| |
| ##
| |
| ## dose y.mean
| |
| ## 1 None 112.6267
| |
| ## 2 Low 121.3500
| |
| ## 3 Med 126.7839
| |
| ## 4 High 113.5517
| |
|
| |
| # The coefficient estimate for the first contrast (3.18) equals the average of
| |
| # the last two groups ((126.78 + 113.55)/2 = 120.17) minus the average of
| |
| # the first two groups ((112.63 + 121.35)/2 = 116.99).
| |
| </syntaxhighlight>
| |
|
| |
| == Multicollinearity ==
| |
| * [https://datascienceplus.com/multicollinearity-in-r/ Multicollinearity in R]
| |
| * [https://www.rdocumentation.org/packages/stats/versions/3.5.1/topics/alias alias]: Find Aliases (Dependencies) In A Model
| |
| <syntaxhighlight lang='rsplus'>
| |
| > op <- options(contrasts = c("contr.helmert", "contr.poly"))
| |
| > npk.aov <- aov(yield ~ block + N*P*K, npk)
| |
| > alias(npk.aov)
| |
| Model :
| |
| yield ~ block + N * P * K
| |
|
| |
| Complete :
| |
| (Intercept) block1 block2 block3 block4 block5 N1 P1 K1 N1:P1 N1:K1 P1:K1
| |
| N1:P1:K1 0 1 1/3 1/6 -3/10 -1/5 0 0 0 0 0 0
| |
|
| |
| > options(op)
| |
| </syntaxhighlight>
| |
|
| |
| == Exposure ==
| |
| https://en.mimi.hu/mathematics/exposure_variable.html
| |
|
| |
| Independent variable = predictor = explanatory = exposure variable
| |
|
| |
| == Confounders, confounding ==
| |
| * https://en.wikipedia.org/wiki/Confounding
| |
| ** [https://academic.oup.com/jamia/article/21/2/308/723853 A method for controlling complex confounding effects in the detection of adverse drug reactions using electronic health records]. It provides a rule to identify a confounder.
| |
| * http://anythingbutrbitrary.blogspot.com/2016/01/how-to-create-confounders-with.html (R example)
| |
| * [http://www.cantab.net/users/filimon/cursoFCDEF/will/logistic_confound.pdf Logistic Regression: Confounding and Colinearity]
| |
| * [https://stats.stackexchange.com/questions/192591/identifying-a-confounder?rq=1 Identifying a confounder]
| |
| * [https://stats.stackexchange.com/questions/38326/is-it-possible-to-have-a-variable-that-acts-as-both-an-effect-modifier-and-a-con Is it possible to have a variable that acts as both an effect modifier and a confounder?]
| |
| * [https://stats.stackexchange.com/questions/34644/which-test-to-use-to-check-if-a-possible-confounder-impacts-a-0-1-result Which test to use to check if a possible confounder impacts a 0 / 1 result?]
| |
| * [https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1700-9 Addressing confounding artifacts in reconstruction of gene co-expression networks] Parsana 2019
| |
|
| |
| == Causal inference ==
| |
| * https://en.wikipedia.org/wiki/Causal_inference
| |
| * [http://www.rebeccabarter.com/blog/2017-07-05-confounding/ Confounding in causal inference: what is it, and what to do about it?]
| |
|
| |
| == Confidence interval vs prediction interval ==
| |
| Confidence intervals tell you how well you have determined the mean E(Y|X). Prediction intervals tell you where you can expect to see the next data point sampled. That is, a CI is based on the variance of the estimated mean, while a PI also adds the residual variance Var(e), so it is always wider.
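| |
| A minimal sketch with predict.lm() on the built-in '''cars''' data; at the same x value the prediction interval is always wider than the confidence interval:
| <syntaxhighlight lang='rsplus'>
| fit <- lm(dist ~ speed, data = cars)
| new <- data.frame(speed = 15)
| predict(fit, new, interval = "confidence")  # uncertainty about the mean E(Y|X = 15)
| predict(fit, new, interval = "prediction")  # uncertainty about a new observation at X = 15
| </syntaxhighlight>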
| |
|
| |
| * http://www.graphpad.com/support/faqid/1506/
| |
| * http://en.wikipedia.org/wiki/Prediction_interval
| |
| * http://robjhyndman.com/hyndsight/intervals/
| |
| * https://stat.duke.edu/courses/Fall13/sta101/slides/unit7lec3H.pdf
| |
| * https://datascienceplus.com/prediction-interval-the-wider-sister-of-confidence-interval/
| |
|
| |
| == Heteroskedasticity ==
| |
| [http://www.brodrigues.co/blog/2018-07-08-rob_stderr/ Dealing with heteroskedasticity; regression with robust standard errors using R]
| |
|
| |
| == Linear regression with Map Reduce ==
| |
| https://freakonometrics.hypotheses.org/53269
| |
|
| |
| == Relationship between multiple variables ==
| |
| [https://statisticaloddsandends.wordpress.com/2019/08/24/visualizing-the-relationship-between-multiple-variables/ Visualizing the relationship between multiple variables]
| |
|
| |
| = Quantile regression =
| |
| * https://en.wikipedia.org/wiki/Quantile_regression
| |
| * [https://insightr.wordpress.com/2019/08/13/basic-quantile-regression/ Basic Quantile Regression]
| |
|
| |
| = Non- and semi-parametric regression =
| |
| * [https://mathewanalytics.com/2018/03/05/semiparametric-regression-in-r/ Semiparametric Regression in R]
| |
| * https://socialsciences.mcmaster.ca/jfox/Courses/Oxford-2005/R-nonparametric-regression.html
| |
|
| |
| == Mean squared error ==
| |
| * [https://www.statworx.com/de/blog/simulating-the-bias-variance-tradeoff-in-r/ Simulating the bias-variance tradeoff in R]
| |
| * [https://alemorales.info/post/variance-estimators/ Estimating variance: should I use n or n - 1? The answer is not what you think]
| |
|
| |
| == Splines ==
| |
| * https://en.wikipedia.org/wiki/B-spline
| |
| * [https://www.r-bloggers.com/cubic-and-smoothing-splines-in-r/ Cubic and Smoothing Splines in R]. '''bs()''' is for cubic splines and '''smooth.spline()''' is for smoothing splines. A short sketch follows this list.
| |
| * [https://www.rdatagen.net/post/generating-non-linear-data-using-b-splines/ Can we use B-splines to generate non-linear data?]
| |
| * [https://stats.stackexchange.com/questions/29400/spline-fitting-in-r-how-to-force-passing-two-data-points How to force passing two data points?] ([https://cran.r-project.org/web/packages/cobs/index.html cobs] package)
| |
| * https://www.rdocumentation.org/packages/cobs/versions/1.3-3/topics/cobs
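| |
| A minimal sketch of the two approaches on the built-in '''cars''' data (the df value is an arbitrary choice); bs() is used inside a regression fit, while smooth.spline() chooses its smoothness by (generalized) cross-validation:
| <syntaxhighlight lang='rsplus'>
| library(splines)
| fit.bs <- lm(dist ~ bs(speed, df = 5), data = cars)  # cubic B-spline basis
| fit.ss <- smooth.spline(cars$speed, cars$dist)       # smoothing spline
| plot(dist ~ speed, data = cars)
| lines(cars$speed, fitted(fit.bs), col = "red")
| lines(fit.ss, col = "blue")
| legend("topleft", c("bs() + lm()", "smooth.spline()"), col = c("red", "blue"), lty = 1)
| </syntaxhighlight>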
| |
|
| |
| == k-Nearest neighbor regression ==
| |
| * k-NN regression in practice: boundary problem, discontinuities problem.
| |
| * Weighted k-NN regression: want weight to be small when distance is large. Common choices - weight = kernel(xi, x)
| |
|
| |
| == Kernel regression ==
| |
| * Instead of weighting NN, weight ALL points. Nadaraya-Watson kernel weighted average (a hand-rolled sketch follows this list):
| |
| <math>\hat{y}_q = \sum c_{qi} y_i/\sum c_{qi} = \frac{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))*y_i}{\sum \text{Kernel}_\lambda(\text{distance}(x_i, x_q))} </math>.
| |
| * Choice of bandwidth <math>\lambda</math> for bias, variance trade-off. Small <math>\lambda</math> is over-fitting. Large <math>\lambda</math> can get an over-smoothed fit. '''Cross-validation'''.
| |
| * Kernel regression leads to locally constant fit.
| |
| * Issues with high dimensions, data scarcity and computational complexity.
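| |
| A hand-rolled sketch of the Nadaraya-Watson estimator above with a Gaussian kernel, compared with the built-in ksmooth(); the simulated data and the bandwidth values are arbitrary choices:
| <syntaxhighlight lang='rsplus'>
| set.seed(1)
| x <- sort(runif(100, 0, 2 * pi))
| y <- sin(x) + rnorm(100, sd = 0.3)
| nw <- function(xq, x, y, lambda) {
|   w <- dnorm((x - xq) / lambda)   # kernel weights for a single query point xq
|   sum(w * y) / sum(w)
| }
| xg <- seq(0, 2 * pi, length.out = 200)
| plot(x, y)
| lines(xg, sapply(xg, nw, x = x, y = y, lambda = 0.3), col = "red", lwd = 2)
| lines(ksmooth(x, y, kernel = "normal", bandwidth = 1), col = "blue")
| </syntaxhighlight>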
| |
|
| |
| = Principal component analysis =
| |
| See [[PCA|PCA]].
| |
|
| |
| = Partial Least Squares (PLS) =
| |
| * https://en.wikipedia.org/wiki/Partial_least_squares_regression. The general underlying model of multivariate PLS is
| |
| :<math>X = T P^\mathrm{T} + E</math>
| |
| :<math>Y = U Q^\mathrm{T} + F</math>
| |
| where {{mvar|X}} is an <math>n \times m</math> matrix of predictors, {{mvar|Y}} is an <math>n \times p</math> matrix of responses; {{mvar|T}} and {{mvar|U}} are <math>n \times l</math> matrices that are, respectively, '''projections''' of {{mvar|X}} (the X '''score''', ''component'' or '''factor matrix''') and projections of {{mvar|Y}} (the ''Y scores''); {{mvar|P}} and {{mvar|Q}} are, respectively, <math>m \times l</math> and <math>p \times l</math> orthogonal '''loading matrices'''; and matrices {{mvar|E}} and {{mvar|F}} are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of {{mvar|X}} and {{mvar|Y}} are made so as to maximise the '''covariance''' between {{mvar|T}} and {{mvar|U}} (projection matrices).
| |
| * [https://www.gokhanciflikli.com/post/learning-brexit/ Supervised vs. Unsupervised Learning: Exploring Brexit with PLS and PCA]
| |
| * [https://cran.r-project.org/web/packages/pls/index.html pls] R package
| |
| * [https://cran.r-project.org/web/packages/plsRcox/index.html plsRcox] R package (archived). See [[R#install_a_tar.gz_.28e.g._an_archived_package.29_from_a_local_directory|here]] for the installation.
| |
|
| |
| [https://web.stanford.edu/~hastie/ElemStatLearn//printings/ESLII_print12.pdf#page=101 PLS, PCR (principal components regression) and ridge regression tend to behave similarly]. Ridge regression may be preferred because it shrinks smoothly, rather than in discrete steps.
| |
|
| |
| = High dimension =
| |
| [https://projecteuclid.org/euclid.aos/1547197242 Partial least squares prediction in high-dimensional regression] Cook and Forzani, 2019
| |
|
| |
| == Feature selection ==
| |
| https://en.wikipedia.org/wiki/Feature_selection
| |
|
| |
| = [https://en.wikipedia.org/wiki/Independent_component_analysis Independent component analysis] =
| |
| ICA is another dimensionality reduction method.
| |
|
| |
| == ICA vs PCA ==
| |
|
| |
| == ICS vs FA ==
| |
|
| |
| = [https://en.wikipedia.org/wiki/Correspondence_analysis Correspondence analysis] =
| |
| https://francoishusson.wordpress.com/2017/07/18/multiple-correspondence-analysis-with-factominer/ and the book [https://www.crcpress.com/Exploratory-Multivariate-Analysis-by-Example-Using-R-Second-Edition/Husson-Le-Pages/p/book/9781138196346?tab=rev Exploratory Multivariate Analysis by Example Using R]
| |
|
| |
| = t-SNE =
| |
| t-Distributed Stochastic Neighbor Embedding (t-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.
| |
|
| |
| * https://distill.pub/2016/misread-tsne/
| |
| * https://lvdmaaten.github.io/tsne/
| |
| * Application to [http://amp.pharm.mssm.edu/archs4/data.html ARCHS4]
| |
| * [https://www.codeproject.com/tips/788739/visualization-of-high-dimensional-data-using-t-sne Visualization of High Dimensional Data using t-SNE with R]
| |
| * http://blog.thegrandlocus.com/2018/08/a-tutorial-on-t-sne-1
| |
| * [https://intobioinformatics.wordpress.com/2019/05/30/quick-and-easy-t-sne-analysis-in-r/ Quick and easy t-SNE analysis in R]
| |
|
| |
| = Visualize the random effects =
| |
| http://www.quantumforest.com/2012/11/more-sense-of-random-effects/
| |
|
| |
| = [https://en.wikipedia.org/wiki/Calibration_(statistics) Calibration] =
| |
| * [https://stats.stackexchange.com/questions/43053/how-to-determine-calibration-accuracy-uncertainty-of-a-linear-regression How to determine calibration accuracy/uncertainty of a linear regression?]
| |
| * [https://chem.libretexts.org/Textbook_Maps/Analytical_Chemistry/Book%3A_Analytical_Chemistry_2.0_(Harvey)/05_Standardizing_Analytical_Methods/5.4%3A_Linear_Regression_and_Calibration_Curves Linear Regression and Calibration Curves]
| |
| * [https://www.webdepot.umontreal.ca/Usagers/sauves/MonDepotPublic/CHM%203103/LCGC%20Eur%20Burke%202001%20-%202%20de%204.pdf Regression and calibration] Shaun Burke
| |
| * [https://cran.r-project.org/web/packages/calibrate calibrate] package
| |
| * [https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-018-0029-2 The index of prediction accuracy: an intuitive measure useful for evaluating risk prediction models] by Kattan and Gerds 2018. The following code demonstrates Figure 2. <syntaxhighlight lang='rsplus'>
| |
| # Odds ratio =1 and calibrated model
| |
| set.seed(666)
| |
| x = rnorm(1000)
| |
| z1 = 1 + 0*x
| |
| pr1 = 1/(1+exp(-z1))
| |
| y1 = rbinom(1000,1,pr1)
| |
| mean(y1) # .724, marginal prevalence of the outcome
| |
| dat1 <- data.frame(x=x, y=y1)
| |
| newdat1 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr1))
| |
|
| |
| # Odds ratio =1 and severely miscalibrated model
| |
| set.seed(666)
| |
| x = rnorm(1000)
| |
| z2 = -2 + 0*x
| |
| pr2 = 1/(1+exp(-z2))
| |
| y2 = rbinom(1000,1,pr2)
| |
| mean(y2) # .12
| |
| dat2 <- data.frame(x=x, y=y2)
| |
| newdat2 <- data.frame(x=rnorm(1000), y=rbinom(1000, 1, pr2))
| |
|
| |
| library(riskRegression)
| |
| lrfit1 <- glm(y ~ x, data = dat1, family = 'binomial')
| |
| IPA(lrfit1, newdata = newdat1)
| |
| # Variable Brier IPA IPA.gain
| |
| # 1 Null model 0.1984710 0.000000e+00 -0.003160010
| |
| # 2 Full model 0.1990982 -3.160010e-03 0.000000000
| |
| # 3 x 0.1984800 -4.534668e-05 -0.003114664
| |
| 1 - 0.1990982/0.1984710
| |
| # [1] -0.003160159
| |
|
| |
| lrfit2 <- glm(y ~ x, family = 'binomial')
| |
| IPA(lrfit2, newdata = newdat1)
| |
| # Variable Brier IPA IPA.gain
| |
| # 1 Null model 0.1984710 0.000000 -1.859333763
| |
| # 2 Full model 0.5674948 -1.859334 0.000000000
| |
| # 3 x 0.5669200 -1.856437 -0.002896299
| |
| 1 - 0.5674948/0.1984710
| |
| # [1] -1.859334
| |
| </syntaxhighlight> From the simulated data, we see IPA = -3.16e-3 for a calibrated model and IPA = -1.86 for a severely miscalibrated model.
| |
|
| |
| = ROC curve =
| |
| See [[ROC|ROC]].
| |
|
| |
| = [https://en.wikipedia.org/wiki/Net_reclassification_improvement NRI] (Net reclassification improvement) =
| |
|
| |
| = Maximum likelihood =
| |
| [http://stats.stackexchange.com/questions/622/what-is-the-difference-between-a-partial-likelihood-profile-likelihood-and-marg Difference of partial likelihood, profile likelihood and marginal likelihood]
| |
|
| |
| = Generalized Linear Model =
| |
| Lectures from a course in [http://people.stat.sfu.ca/~raltman/stat851.html Simon Fraser University Statistics].
| |
|
| |
| [https://petolau.github.io/Analyzing-double-seasonal-time-series-with-GAM-in-R/ Doing magic and analyzing seasonal time series with GAM (Generalized Additive Model) in R]
| |
|
| |
| == Link function ==
| |
| [http://www.win-vector.com/blog/2019/07/link-functions-versus-data-transforms/ Link Functions versus Data Transforms]
| |
|
| |
| == Quasi Likelihood ==
| |
| Quasi-likelihood is like log-likelihood. The quasi-score function (first derivative of quasi-likelihood function) is the estimating equation.
| |
|
| |
| * [http://www.stat.uchicago.edu/~pmcc/pubs/paper6.pdf Original paper] by Peter McCullagh.
| |
| * [http://people.stat.sfu.ca/~raltman/stat851/851L20.pdf Lecture 20] from SFU.
| |
| * [http://courses.washington.edu/b571/lectures/notes131-181.pdf U. Washington] and [http://faculty.washington.edu/heagerty/Courses/b571/handouts/OverdispQL.pdf another lecture] focuses on overdispersion.
| |
| * [http://www.maths.usyd.edu.au/u/jchan/GLM/QuasiLikelihood.pdf This lecture] contains a table of quasi likelihood from common distributions.
| |
|
| |
| == Plot ==
| |
| https://strengejacke.wordpress.com/2015/02/05/sjplot-package-and-related-online-manuals-updated-rstats-ggplot/
| |
|
| |
| == [https://en.wikipedia.org/wiki/Deviance_(statistics) Deviance], stats::deviance() and glmnet::deviance.glmnet() from R ==
| |
| * '''It is a generalization of the idea of using the sum of squares of residuals (RSS) in ordinary least squares''' to cases where model-fitting is achieved by maximum likelihood. See [https://stats.stackexchange.com/questions/6581/what-is-deviance-specifically-in-cart-rpart What is Deviance? (specifically in CART/rpart)] to manually compute deviance and compare it with the returned value of the '''deviance()''' function from a linear regression. Summary: deviance() = RSS in linear models.
| |
| * https://www.rdocumentation.org/packages/stats/versions/3.4.3/topics/deviance
| |
| * Likelihood ratio tests and the deviance http://data.princeton.edu/wws509/notes/a2.pdf#page=6
| |
| * Deviance(y,muhat) = 2*(loglik_saturated - loglik_proposed)
| |
| * [https://stats.stackexchange.com/questions/108995/interpreting-residual-and-null-deviance-in-glm-r Interpreting Residual and Null Deviance in GLM R]
| |
| ** Null Deviance = 2(LL(Saturated Model) - LL(Null Model)) on df = df_Sat - df_Null. The '''null deviance''' shows how well the response variable is predicted by a model that includes only the intercept (grand mean).
| |
| ** '''Residual Deviance = 2(LL(Saturated Model) - LL(Proposed Model)) = <math>2(LL(y|y) - LL(\hat{\mu}|y))</math>, df = df_Sat - df_Proposed = n-p'''. This is the value that deviance() returns.
| |
| ** Null deviance > Residual deviance. Null deviance df = n-1. Residual deviance df = n-p.
| |
| <syntaxhighlight lang='rsplus'>
| |
| ## an example with offsets from Venables & Ripley (2002, p.189)
| |
| utils::data(anorexia, package = "MASS")
| |
|
| |
| anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
| |
| family = gaussian, data = anorexia)
| |
| summary(anorex.1)
| |
|
| |
| # Call:
| |
| # glm(formula = Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian,
| |
| # data = anorexia)
| |
| #
| |
| # Deviance Residuals:
| |
| # Min 1Q Median 3Q Max
| |
| # -14.1083 -4.2773 -0.5484 5.4838 15.2922
| |
| #
| |
| # Coefficients:
| |
| # Estimate Std. Error t value Pr(>|t|)
| |
| # (Intercept) 49.7711 13.3910 3.717 0.000410 ***
| |
| # Prewt -0.5655 0.1612 -3.509 0.000803 ***
| |
| # TreatCont -4.0971 1.8935 -2.164 0.033999 *
| |
| # TreatFT 4.5631 2.1333 2.139 0.036035 *
| |
| # ---
| |
| # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
| |
| #
| |
| # (Dispersion parameter for gaussian family taken to be 48.69504)
| |
| #
| |
| # Null deviance: 4525.4 on 71 degrees of freedom
| |
| # Residual deviance: 3311.3 on 68 degrees of freedom
| |
| # AIC: 489.97
| |
| #
| |
| # Number of Fisher Scoring iterations: 2
| |
|
| |
| deviance(anorex.1)
| |
| # [1] 3311.263
| |
| </syntaxhighlight>
| |
| * In glmnet package. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Null deviance is defined to be 2*(loglike_sat -loglike(Null)); The NULL model refers to the intercept model, except for the Cox, where it is the 0 model. Hence dev.ratio=1-deviance/nulldev, and this deviance method returns (1-dev.ratio)*nulldev.
| |
| ** [https://stats.stackexchange.com/questions/134694/what-deviance-is-glmnet-using-to-compare-values-of-lambda What deviance is glmnet using to compare values of λ?]
| |
| <syntaxhighlight lang='rsplus'>
| |
| x=matrix(rnorm(100*2),100,2)
| |
| y=rnorm(100)
| |
| fit1=glmnet(x,y)
| |
| deviance(fit1) # one for each lambda
| |
| # [1] 98.83277 98.53893 98.29499 98.09246 97.92432 97.78472 97.66883
| |
| # [8] 97.57261 97.49273 97.41327 97.29855 97.20332 97.12425 97.05861
| |
| # ...
| |
| # [57] 96.73772 96.73770
| |
| fit2 <- glmnet(x, y, lambda=.1) # fix lambda
| |
| deviance(fit2)
| |
| # [1] 98.10212
| |
| deviance(glm(y ~ x))
| |
| # [1] 96.73762
| |
| sum(residuals(glm(y ~ x))^2)
| |
| # [1] 96.73762
| |
| </syntaxhighlight>
| |
|
| |
| == Saturated model ==
| |
| * The saturated model always has n parameters where n is the sample size.
| |
| * [https://stats.stackexchange.com/questions/114073/logistic-regression-how-to-obtain-a-saturated-model Logistic Regression : How to obtain a saturated model]
| |
|
| |
| = Simulate data =
| |
| == Density plot ==
| |
| <syntaxhighlight lang='rsplus'>
| |
| # plot a Weibull distribution with shape and scale
| |
| func <- function(x) dweibull(x, shape = 1, scale = 3.38)
| |
| curve(func, .1, 10)
| |
|
| |
| func <- function(x) dweibull(x, shape = 1.1, scale = 3.38)
| |
| curve(func, .1, 10)
| |
| </syntaxhighlight>
| |
|
| |
| The shape parameter plays a role in the shape of the density function and the failure rate.
| |
|
| |
| * Shape <=1: density is convex, not a hat shape.
| |
| * Shape =1: failure rate (hazard function) is constant. [https://en.wikipedia.org/wiki/Exponential_distribution Exponential distribution].
| |
| * Shape >1: failure rate increases with time
| |
|
| |
| == Simulate data from a specified density ==
| |
| * http://stackoverflow.com/questions/16134786/simulate-data-from-non-standard-density-function
| |
|
| |
| == Signal to noise ratio ==
| |
| * https://en.wikipedia.org/wiki/Signal-to-noise_ratio
| |
| * https://stats.stackexchange.com/questions/31158/how-to-simulate-signal-noise-ratio
| |
| : <math>\frac{\sigma^2_{signal}}{\sigma^2_{noise}} = \frac{Var(f(X))}{Var(e)} </math> if Y = f(X) + e. A simulation sketch follows this list.
| |
| * Page 401 of ESLII (https://web.stanford.edu/~hastie/ElemStatLearn//) 12th print.
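| |
| A minimal sketch of simulating data with a prescribed signal-to-noise ratio, following the definition above (the linear signal and SNR = 3 are arbitrary choices):
| <syntaxhighlight lang='rsplus'>
| set.seed(1)
| n <- 200; p <- 5; snr <- 3
| X <- matrix(rnorm(n * p), n, p)
| beta <- c(2, -1, 0.5, 0, 0)
| f <- drop(X %*% beta)           # the signal f(X)
| sigma <- sqrt(var(f) / snr)     # pick Var(e) so that Var(f(X)) / Var(e) = snr
| y <- f + rnorm(n, sd = sigma)
| var(f) / sigma^2                # ~ 3 by construction
| </syntaxhighlight>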
| |
|
| |
| Some examples of signal to noise ratio
| |
| * ESLII_print12.pdf: .64, 5, 4
| |
| * Yuan and Lin 2006: 1.8, 3
| |
| * [https://academic.oup.com/biostatistics/article/19/3/263/4093306#123138354 A framework for estimating and testing qualitative interactions with applications to predictive biomarkers] Roth, Biostatistics, 2018
| |
|
| |
| == Effect size, Cohen's d and volcano plot ==
| |
| * https://en.wikipedia.org/wiki/Effect_size (See also the estimation by the [[#Two_sample_test_assuming_equal_variance|pooled sd]])
| |
|
| |
| : <math>\theta = \frac{\mu_1 - \mu_2}{\sigma},</math>
| |
|
| |
| * [https://learningstatisticswithr.com/book/hypothesistesting.html#effectsize Effect size, sample size and power] from Learning statistics with R: A tutorial for psychology students and other beginners.
| |
| * [https://en.wikipedia.org/wiki/Effect_size#t-test_for_mean_difference_between_two_independent_groups t-statistic and Cohen's d] for the case of mean difference between two independent groups
| |
| * [http://www.win-vector.com/blog/2019/06/cohens-d-for-experimental-planning/ Cohen’s D for Experimental Planning]
| |
| * [https://en.wikipedia.org/wiki/Volcano_plot_(statistics) Volcano plot]
| |
| ** Y-axis: -log(p)
| |
| ** X-axis: log2 fold change OR effect size (Cohen's D). [https://twitter.com/biobenkj/status/1072141825568329728 An example] from RNA-Seq data.
| |
|
| |
| = Multiple comparisons =
| |
| * If you perform experiments over and over, you're bound to find something. So the significance level must be adjusted down when performing multiple hypothesis tests.
| |
| * http://www.gs.washington.edu/academics/courses/akey/56008/lecture/lecture10.pdf
| |
| * Book 'Multiple Comparison Using R' by Bretz, Hothorn and Westfall, 2011.
| |
| * [http://varianceexplained.org/statistics/interpreting-pvalue-histogram/ Plot a histogram of p-values], a post from varianceexplained.org. The anti-conservative histogram (tail on the RHS) is what we have typically seen in e.g. microarray gene expression data.
| |
| * [http://statistic-on-air.blogspot.com/2015/01/adjustment-for-multiple-comparison.html Comparison of different ways of multiple-comparison] in R.
| |
|
| |
| Take an example: suppose 550 out of 10,000 genes are significant at the .05 level
| |
| # P-value < .05 ==> Expect .05*10,000=500 false positives
| |
| # False discovery rate < .05 ==> Expect .05*550 =27.5 false positives
| |
| # Family wise error rate < .05 ==> The probability of at least 1 false positive < .05
| |
|
| |
| According to [https://www.cancer.org/cancer/cancer-basics/lifetime-probability-of-developing-or-dying-from-cancer.html Lifetime Risk of Developing or Dying From Cancer], there is a 39.7% lifetime risk of developing cancer for males (in other words, 1 out of every 2.52 men in the US will develop some kind of cancer during his lifetime) and 37.6% for females. So the probability of getting at least one cancer patient in a 3-generation family (say three men and three women) is 1 - .6^3 * .63^3 ≈ 0.95.
| |
|
| |
| == False Discovery Rate ==
| |
| * https://en.wikipedia.org/wiki/False_discovery_rate
| |
| * Paper [http://www.stat.purdue.edu/~doerge/BIOINFORM.D/FALL06/Benjamini%20and%20Y%20FDR.pdf Definition] by Benjamini and Hochberg in JRSS B 1995.
| |
| * A [http://xkcd.com/882/ comic]
| |
| * P-value vs false discovery rate vs family wise error rate. See [http://jtleek.com/talks 10 statistics tip] or [http://www.biostat.jhsph.edu/~jleek/teaching/2011/genomics/mt140688.pdf#page=14 Statistics for Genomics (140.688)] from Jeff Leek. Suppose 550 out of 10,000 genes are significant at .05 level
| |
| ** P-value < .05 implies expecting .05*10000 = 500 false positives
| |
| ** False discovery rate < .05 implies expecting .05*550 = 27.5 false positives
| |
| ** Family wise error rate (P (# of false positives ≥ 1)) < .05. See [https://riffyn.com/riffyn-blog/2017/10/29/family-wise-error-rate Understanding Family-Wise Error Rate]
| |
| * [http://www.pnas.org/content/100/16/9440.full Statistical significance for genomewide studies] by Storey and Tibshirani.
| |
| * [http://www.nicebread.de/whats-the-probability-that-a-significant-p-value-indicates-a-true-effect/ What’s the probability that a significant p-value indicates a true effect?]
| |
| * http://onetipperday.sterding.com/2015/12/my-note-on-multiple-testing.html
| |
| * [https://www.biorxiv.org/content/early/2018/10/31/458786 A practical guide to methods controlling false discoveries in computational biology] by Korthauer, et al 2018, [https://rdcu.be/bFEt2 BMC Genome Biology] 2019
| |
| * [https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btz191/5380770 onlineFDR]: an R package to control the false discovery rate for growing data repositories
| |
|
| |
| Suppose <math>p_1 \leq p_2 \leq ... \leq p_n</math>. Then
| |
| : <math>
| |
| \text{FDR}_i = \text{min}(1, n* p_i/i)
| |
| </math>.
| |
| So if the number of tests (<math>n</math>) is large and/or the original p value (<math>p_i</math>) is large, then FDR can hit the value 1.
| |
|
| |
| However, the simple formula above does not guarantee the monotonicity property of the FDR, so the calculation in R is more involved. See [https://stackoverflow.com/questions/29992944/how-does-r-calculate-the-false-discovery-rate How Does R Calculate the False Discovery Rate].
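| |
| A minimal sketch of that calculation: the BH-adjusted values are the n*p_i/i values made monotone by a cumulative minimum taken from the largest p-value downward, which matches p.adjust():
| <syntaxhighlight lang='rsplus'>
| set.seed(1)
| p <- sort(runif(10)^2)                 # some p-values, sorted for clarity
| n <- length(p)
| raw <- pmin(1, n * p / seq_len(n))     # the simple formula; not necessarily monotone
| bh  <- rev(cummin(rev(raw)))           # enforce monotonicity from the top down
| all.equal(bh, p.adjust(p, method = "BH"))
| # [1] TRUE
| </syntaxhighlight>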
| |
|
| |
| Below are the histograms of p-values and FDRs (BH-adjusted) from a real data set (Pomeroy in BRB-ArrayTools).
| |
|
| |
| [[:File:Hist bh.svg]]
| |
|
| |
| And the next is a scatterplot with histograms on the margins from a null data set.
| |
|
| |
| [[:File:Scatterhist.svg]]
| |
|
| |
| == q-value ==
| |
| q-value is defined as the minimum FDR that can be attained when calling that '''feature''' significant (i.e., expected proportion of false positives incurred when calling that feature significant).
| |
|
| |
| If gene X has a q-value of 0.013, it means that 1.3% of the genes showing p-values at least as small as gene X's are expected to be false positives.
| |
|
| |
| == SAM/Significance Analysis of Microarrays ==
| |
| The percentile option is used to define the number of falsely called genes based on 'B' permutations. If we use the 90-th percentile, the number of significant genes will be less than if we use the 50-th percentile/median.
| |
|
| |
| In BRCA dataset, using the 90-th percentile will get 29 genes vs 183 genes if we use median.
| |
|
| |
| == Multivariate permutation test ==
| |
| In BRCA dataset, using 80% confidence gives 116 genes vs 237 genes if we use 50% confidence (assuming maximum proportion of false discoveries is 10%). The method is published on [http://www.sciencedirect.com/science/article/pii/S0378375803002118 EL Korn, JF Troendle, LM McShane and R Simon, ''Controlling the number of false discoveries: Application to high dimensional genomic data'', Journal of Statistical Planning and Inference, vol 124, 379-398 (2004)].
| |
|
| |
| == String Permutations Algorithm ==
| |
| https://youtu.be/nYFd7VHKyWQ
| |
|
| |
| == Empirical Bayes Normal Means Problem with Correlated Noise ==
| |
| [https://arxiv.org/abs/1812.07488 Solving the Empirical Bayes Normal Means Problem with Correlated Noise] Sun 2018
| |
|
| |
| The package [https://github.com/LSun/cashr cashr] and the [https://github.com/LSun/cashr_paper source code of the paper]
| |
|
| |
| = Bayes =
| |
| == Bayes factor ==
| |
| * http://www.nicebread.de/what-does-a-bayes-factor-feel-like/
| |
|
| |
| == Empirical Bayes method ==
| |
| * http://en.wikipedia.org/wiki/Empirical_Bayes_method
| |
| * [http://varianceexplained.org/r/empirical-bayes-book/ Introduction to Empirical Bayes: Examples from Baseball Statistics]
| |
|
| |
| == Naive Bayes classifier ==
| |
| [http://r-posts.com/understanding-naive-bayes-classifier-using-r/ Understanding Naïve Bayes Classifier Using R]
| |
|
| |
| == MCMC ==
| |
| [https://stablemarkets.wordpress.com/2018/03/16/speeding-up-metropolis-hastings-with-rcpp/ Speeding up Metropolis-Hastings with Rcpp]
| |
|
| |
| = offset() function =
| |
| * An '''offset''' is a term to be added to a linear predictor, such as in a generalised linear model, with known coefficient 1 rather than an estimated coefficient.
| |
| * https://www.rdocumentation.org/packages/stats/versions/3.5.0/topics/offset
| |
|
| |
| == Offset in Poisson regression ==
| |
| * http://rfunction.com/archives/223
| |
| * https://stats.stackexchange.com/questions/11182/when-to-use-an-offset-in-a-poisson-regression
| |
|
| |
| # We need to model '''rates''' instead of '''counts'''
| |
| # More generally, you use offsets because the '''units''' of observation are different in some dimension (different populations, different geographic sizes) and the outcome is proportional to that dimension.
| |
|
| |
| An example from [http://rfunction.com/archives/223 here]
| |
| <syntaxhighlight lang='rsplus'>
| |
| Y <- c(15, 7, 36, 4, 16, 12, 41, 15)
| |
| N <- c(4949, 3534, 12210, 344, 6178, 4883, 11256, 7125)
| |
| x1 <- c(-0.1, 0, 0.2, 0, 1, 1.1, 1.1, 1)
| |
| x2 <- c(2.2, 1.5, 4.5, 7.2, 4.5, 3.2, 9.1, 5.2)
| |
|
| |
| glm(Y ~ offset(log(N)) + (x1 + x2), family=poisson) # two variables
| |
| # Coefficients:
| |
| # (Intercept) x1 x2
| |
| # -6.172 -0.380 0.109
| |
| #
| |
| # Degrees of Freedom: 7 Total (i.e. Null); 5 Residual
| |
| # Null Deviance: 10.56
| |
| # Residual Deviance: 4.559 AIC: 46.69
| |
| glm(Y ~ offset(log(N)) + I(x1+x2), family=poisson) # one variable
| |
| # Coefficients:
| |
| # (Intercept) I(x1 + x2)
| |
| # -6.12652 0.04746
| |
| #
| |
| # Degrees of Freedom: 7 Total (i.e. Null); 6 Residual
| |
| # Null Deviance: 10.56
| |
| # Residual Deviance: 8.001 AIC: 48.13
| |
| </syntaxhighlight>
| |
|
| |
| == Offset in Cox regression ==
| |
| An example from [https://github.com/cran/biospear/blob/master/R/PCAlasso.R biospear::PCAlasso()]
| |
| <syntaxhighlight lang='rsplus'>
| |
| coxph(Surv(time, status) ~ offset(off.All), data = data)
| |
| # Call: coxph(formula = Surv(time, status) ~ offset(off.All), data = data)
| |
| #
| |
| # Null model
| |
| # log likelihood= -2391.736
| |
| # n= 500
| |
|
| |
| # versus without using offset()
| |
| coxph(Surv(time, status) ~ off.All, data = data)
| |
| # Call:
| |
| # coxph(formula = Surv(time, status) ~ off.All, data = data)
| |
| #
| |
| # coef exp(coef) se(coef) z p
| |
| # off.All 0.485 1.624 0.658 0.74 0.46
| |
| #
| |
| # Likelihood ratio test=0.54 on 1 df, p=0.5
| |
| # n= 500, number of events= 438
| |
| coxph(Surv(time, status) ~ off.All, data = data)$loglik
| |
| # [1] -2391.702 -2391.430 # initial coef estimate, final coef
| |
| </syntaxhighlight>
| |
|
| |
| == Offset in linear regression ==
| |
| * https://www.rdocumentation.org/packages/stats/versions/3.5.1/topics/lm
| |
| * https://stackoverflow.com/questions/16920628/use-of-offset-in-lm-regression-r
| |
|
| |
| = Overdispersion =
| |
| https://en.wikipedia.org/wiki/Overdispersion
| |
|
| |
| Var(Y) = phi * E(Y). If phi > 1, then it is overdispersion relative to Poisson. If phi <1, we have under-dispersion (rare).
| |
|
| |
| == Heterogeneity ==
| |
| The Poisson model fit is not good; residual deviance/df >> 1. The lack of fit may be due to missing data, missing covariates, or overdispersion.
| |
|
| |
| Subjects within each covariate combination still differ greatly.
| |
|
| |
| *https://onlinecourses.science.psu.edu/stat504/node/169.
| |
| * https://onlinecourses.science.psu.edu/stat504/node/162
| |
|
| |
| Consider Quasi-Poisson or negative binomial.
| |
|
| |
| == Test of overdispersion or underdispersion in Poisson models ==
| |
| https://stats.stackexchange.com/questions/66586/is-there-a-test-to-determine-whether-glm-overdispersion-is-significant
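| |
| One informal check (a sketch, not the formal tests discussed in the link): compare the Pearson chi-square statistic with its residual degrees of freedom; a ratio well above 1 suggests overdispersion. The built-in InsectSprays data are used only as an example:
| <syntaxhighlight lang='rsplus'>
| fit <- glm(count ~ spray, family = poisson, data = InsectSprays)
| sum(residuals(fit, type = "pearson")^2) / df.residual(fit)  # estimated dispersion phi
| # the same quantity is reported as the dispersion parameter of a quasipoisson fit
| summary(glm(count ~ spray, family = quasipoisson, data = InsectSprays))$dispersion
| </syntaxhighlight>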
| |
|
| |
| == Negative Binomial ==
| |
| The mean of the Poisson distribution can itself be thought of as a random variable drawn from the gamma distribution thereby introducing an additional free parameter.
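| |
| A simulation sketch of that gamma-Poisson mixture (the parameter values are arbitrary): drawing each Poisson mean from a gamma distribution gives counts whose variance exceeds the mean, matching rnbinom() with the same size and mu:
| <syntaxhighlight lang='rsplus'>
| set.seed(1)
| n <- 1e5; mu <- 4; size <- 2
| lambda <- rgamma(n, shape = size, rate = size / mu)  # E(lambda) = mu
| y.mix <- rpois(n, lambda)                            # Poisson with gamma-distributed mean
| y.nb  <- rnbinom(n, size = size, mu = mu)            # negative binomial, simulated directly
| c(mean(y.mix), var(y.mix))   # variance ~ mu + mu^2/size = 12, well above the mean 4
| c(mean(y.nb),  var(y.nb))
| </syntaxhighlight>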
| |
|
| |
| == Binomial ==
| |
| * [https://www.rdatagen.net/post/overdispersed-binomial-data/ Generating and modeling over-dispersed binomial data]
| |
| * [https://cran.r-project.org/web/packages/simstudy/index.html simstudy] package. The final data sets can represent data from '''randomized control trials''', '''repeated measure (longitudinal) designs''', and cluster randomized trials. Missingness can be generated using various mechanisms (MCAR, MAR, NMAR). [https://www.rdatagen.net/post/analyzing-a-binary-outcome-in-a-study-with-within-cluster-pair-matched-randomization/ Analyzing a binary outcome arising out of within-cluster, pair-matched randomization].
| |
|
| |
| = Count data =
| |
| == Zero counts ==
| |
| * [https://doi.org/10.1080/00031305.2018.1444673 A Method to Handle Zero Counts in the Multinomial Model]
| |
|
| |
| == Bias ==
| |
| [https://amstat.tandfonline.com/doi/full/10.1080/00031305.2018.1564699 Bias in Small-Sample Inference With Count-Data Models] Blackburn 2019
| |
|
| |
| = Survival data analysis =
| |
| See [[Survival_data|Survival data analysis]]
| |
|
| |
| = Logistic regression =
| |
|
| |
| == Simulate binary data from the logistic model ==
| |
| https://stats.stackexchange.com/questions/46523/how-to-simulate-artificial-data-for-logistic-regression
| |
| <syntaxhighlight lang='rsplus'>
| |
| set.seed(666)
| |
| x1 = rnorm(1000) # some continuous variables
| |
| x2 = rnorm(1000)
| |
| z = 1 + 2*x1 + 3*x2 # linear combination with a bias
| |
| pr = 1/(1+exp(-z)) # pass through an inv-logit function
| |
| y = rbinom(1000,1,pr) # bernoulli response variable
| |
|
| |
| #now feed it to glm:
| |
| df = data.frame(y=y,x1=x1,x2=x2)
| |
| glm( y~x1+x2,data=df,family="binomial")
| |
| </syntaxhighlight>
| |
|
| |
| == Building a Logistic Regression model from scratch ==
| |
| https://www.analyticsvidhya.com/blog/2015/10/basics-logistic-regression
| |
|
| |
| == Odds ratio ==
| |
| Calculate the odds ratio from the coefficient estimates; see [https://stats.stackexchange.com/questions/8661/logistic-regression-in-r-odds-ratio this post].
| |
| <syntaxhighlight lang='rsplus'>
| |
| require(MASS)
| |
| N <- 100 # generate some data
| |
| X1 <- rnorm(N, 175, 7)
| |
| X2 <- rnorm(N, 30, 8)
| |
| X3 <- abs(rnorm(N, 60, 30))
| |
| Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
| |
|
| |
| # dichotomize Y and do logistic regression
| |
| Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
| |
| glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
| |
|
| |
| exp(cbind(coef(glmFit), confint(glmFit)))
| |
| </syntaxhighlight>
| |
|
| |
| == AUC ==
| |
| [https://hopstat.wordpress.com/2014/12/19/a-small-introduction-to-the-rocr-package/ A small introduction to the ROCR package]
| |
| <pre>
| |
| predict.glm() ROCR::prediction() ROCR::performance()
| |
| glmobj ------------> predictTest -----------------> ROCPPred ---------> AUC
| |
| newdata labels
| |
| </pre>
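| |
| A minimal, self-contained sketch of that pipeline (assumes the ROCR package is installed; the simulated data and the train/test split are arbitrary):
| <syntaxhighlight lang='rsplus'>
| library(ROCR)
| set.seed(1)
| x <- rnorm(200); y <- rbinom(200, 1, plogis(1 + 2 * x))
| train <- data.frame(x = x[1:100],   y = y[1:100])
| test  <- data.frame(x = x[101:200], y = y[101:200])
| glmobj      <- glm(y ~ x, data = train, family = binomial)
| predictTest <- predict(glmobj, newdata = test, type = "response")
| ROCRpred    <- prediction(predictTest, test$y)
| performance(ROCRpred, "auc")@y.values[[1]]   # the AUC
| plot(performance(ROCRpred, "tpr", "fpr"))    # the ROC curve
| </syntaxhighlight>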
| |
|
| |
| = Medical applications =
| |
| == Subgroup analysis ==
| |
| Other related keywords: recursive partitioning, randomized clinical trials (RCT)
| |
|
| |
| * [https://www.rdatagen.net/post/sub-group-analysis-in-rct/ Thinking about different ways to analyze sub-groups in an RCT]
| |
| * [http://onlinelibrary.wiley.com/doi/10.1002/sim.7064/full Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials] I Lipkovich, A Dmitrienko - Statistics in medicine, 2017
| |
| * Personalized medicine: Four perspectives of tailored medicine SJ Ruberg, L Shen - Statistics in Biopharmaceutical Research, 2015
| |
| * Berger, J. O., Wang, X., and Shen, L. (2014), “A Bayesian Approach to Subgroup Identification,” Journal of Biopharmaceutical Statistics, 24, 110–129.
| |
| * [https://rpsychologist.com/treatment-response-subgroup Change over time is not "treatment response"]
| |
|
| |
| == Interaction analysis ==
| |
| * Goal: '''assessing the predictiveness of biomarkers''' by testing their '''interaction (strength) with the treatment'''.
| |
| * [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.7608 Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials] Kang et al 2018
| |
| * http://www.stat.purdue.edu/~ghobbs/STAT_512/Lecture_Notes/ANOVA/Topic_27.pdf#page=15. For survival data, y-axis is the survival time and B1=treatment, B2=control and X-axis is treatment-effect modifying score. But as seen on [http://www.stat.purdue.edu/~ghobbs/STAT_512/Lecture_Notes/ANOVA/Topic_27.pdf#page=16 page16], the effects may not be separated.
| |
| * [http://onlinelibrary.wiley.com/doi/10.1002/bimj.201500234/full Identification of biomarker-by-treatment interactions in randomized clinical trials with survival outcomes and high-dimensional spaces] N Ternès, F Rotolo, G Heinze, S Michiels - Biometrical Journal, 2017
| |
| * [https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.6564 Designing a study to evaluate the benefit of a biomarker for selecting patient treatment] Janes 2015
| |
| * [https://onlinelibrary.wiley.com/doi/epdf/10.1002/pst.1728 A visualization method measuring the performance of biomarkers for guiding treatment decisions] Yang et al 2015. Predictiveness curves were used a lot.
| |
| * [https://onlinelibrary.wiley.com/doi/epdf/10.1111/biom.12191 Combining Biomarkers to Optimize Patient Treatment Recommendations] Kang et al 2014. Several simulations are conducted.
| |
| * [https://www.ncbi.nlm.nih.gov/pubmed/24695044 An approach to evaluating and comparing biomarkers for patient treatment selection] Janes et al 2014
| |
| * [http://journals.sagepub.com/doi/pdf/10.1177/0272989X13493147 A Framework for Evaluating Markers Used to Select Patient Treatment] Janes et al 2014
| |
| * Tian, L., Alizaden, A. A., Gentles, A. J., and Tibshirani, R. (2014) “A Simple Method for Detecting Interactions Between a Treatment and a Large Number of Covariates,” and the [https://books.google.com/books?hl=en&lr=&id=2gG3CgAAQBAJ&oi=fnd&pg=PA79&ots=y5LqF3vk-T&sig=r2oaOxf9gcjK-1bvFHVyfvwscP8#v=onepage&q&f=true book chapter].
| |
| * [https://biostats.bepress.com/cgi/viewcontent.cgi?article=1228&context=uwbiostat Statistical Methods for Evaluating and Comparing Biomarkers for Patient Treatment Selection] Janes et al 2013
| |
| * [https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1541-0420.2011.01722.x Assessing Treatment-Selection Markers using a Potential Outcomes Framework] Huang et al 2012
| |
| * [https://biostats.bepress.com/cgi/viewcontent.cgi?article=1223&context=uwbiostat Methods for Evaluating Prediction Performance of Biomarkers and Tests] Pepe et al 2012
| |
| * Measuring the performance of markers for guiding treatment decisions by Janes, et al 2011. <syntaxhighlight lang='rsplus'>
| |
| cf <- c(2, 1, .5, 0)
| |
| f1 <- function(x) { z <- cf[1] + cf[3] + (cf[2]+cf[4])*x; 1/ (1 + exp(-z)) }
| |
| f0 <- function(x) { z <- cf[1] + cf[2]*x; 1/ (1 + exp(-z)) }
| |
| par(mfrow=c(1,3))
| |
| curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
| |
| ylab = '5-year DFS Rate', xlab = 'Marker A/D Value',
| |
| main = 'Predictiveness Curve', lwd = 2)
| |
| curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
| |
| xlab = '', ylab = '', lwd = 2, add = TRUE)
| |
| legend(.5, .4, c("control", "treatment"),
| |
| col = c("black", "red"), lwd = 2)
| |
|
| |
| cf <- c(.1, 1, -.1, .5)
| |
| curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
| |
| ylab = '5-year DFS Rate', xlab = 'Marker G Value',
| |
| main = 'Predictiveness Curve', lwd = 2)
| |
| curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
| |
| xlab = '', ylab = '', lwd = 2, add = TRUE)
| |
| legend(.5, .4, c("control", "treatment"),
| |
| col = c("black", "red"), lwd = 2)
| |
| abline(v= - cf[3]/cf[4], lty = 2)
| |
|
| |
| cf <- c(1, -1, 1, 2)
| |
| curve(f1, -3, 3, col = 'red', ylim = c(0, 1),
| |
| ylab = '5-year DFS Rate', xlab = 'Marker B Value',
| |
| main = 'Predictiveness Curve', lwd = 2)
| |
| curve(f0, -3, 3, col = 'black', ylim = c(0, 1),
| |
| xlab = '', ylab = '', lwd = 2, add = TRUE)
| |
| legend(.5, .85, c("control", "treatment"),
| |
| col = c("black", "red"), lwd = 2)
| |
| abline(v= - cf[3]/cf[4], lty = 2)
| |
| </syntaxhighlight> [[:File:PredcurveLogit.svg]]
| |
| * [https://www.degruyter.com/downloadpdf/j/ijb.2014.10.issue-1/ijb-2012-0052/ijb-2012-0052.pdf An Approach to Evaluating and Comparing Biomarkers for Patient Treatment Selection] The International Journal of Biostatistics by Janes, 2014. Y-axis is risk given marker, not P(T > t0|X). Good details.
| |
| * Gunter, L., Zhu, J., and Murphy, S. (2011), “Variable Selection for Qualitative Interactions in Personalized Medicine While Controlling the Family-Wise Error Rate,” Journal of Biopharmaceutical Statistics, 21, 1063–1078.
| |
|
| |
| = Statistical Learning =
| |
| * [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Elements of Statistical Learning] Book homepage
| |
| * [http://heather.cs.ucdavis.edu/draftregclass.pdf From Linear Models to Machine Learning] by Norman Matloff
| |
| * [http://www.kdnuggets.com/2017/04/10-free-must-read-books-machine-learning-data-science.html 10 Free Must-Read Books for Machine Learning and Data Science]
| |
| * [https://towardsdatascience.com/the-10-statistical-techniques-data-scientists-need-to-master-1ef6dbd531f7 10 Statistical Techniques Data Scientists Need to Master]
| |
| *# Linear regression
| |
| *# Classification: Logistic Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis
| |
| *# Resampling methods: Bootstrapping and Cross-Validation
| |
| *# Subset selection: Best-Subset Selection, Forward Stepwise Selection, Backward Stepwise Selection, Hybrid Methods
| |
| *# Shrinkage/regularization: Ridge regression, Lasso
| |
| *# Dimension reduction: Principal Components Regression, Partial least squares
| |
| *# Nonlinear models: Piecewise function, Spline, generalized additive model
| |
| *# Tree-based methods: Bagging, Boosting, Random Forest
| |
| *# Support vector machine
| |
| *# Unsupervised learning: PCA, k-means, Hierarchical
| |
| * [https://www.listendata.com/2018/03/regression-analysis.html?m=1 15 Types of Regression you should know]
| |
|
| |
| == LDA (Fisher's linear discriminant), QDA ==
| |
| * https://en.wikipedia.org/wiki/Linear_discriminant_analysis
| |
| * [https://datascienceplus.com/how-to-perform-logistic-regression-lda-qda-in-r/ How to perform Logistic Regression, LDA, & QDA in R]
| |
| * [http://r-posts.com/discriminant-analysis-statistics-all-the-way/ Discriminant Analysis: Statistics All The Way]
| |
| * [https://onlinelibrary.wiley.com/doi/10.1111/biom.13065 Multiclass linear discriminant analysis with ultrahigh‐dimensional features] Li 2019
| |
|
| |
| == Bagging ==
| |
| Chapter 8 of the book.
| |
|
| |
| * Bootstrap mean is approximately a posterior average.
| |
| * Bootstrap aggregation or bagging average: Average the prediction over a collection of bootstrap samples, thereby reducing its variance. The bagging estimate is defined by
| |
| :<math>\hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^B \hat{f}^{*b}(x).</math>
| |
|
| |
| [https://statcompute.wordpress.com/2016/01/02/where-bagging-might-work-better-than-boosting/ Where Bagging Might Work Better Than Boosting]
| |
|
| |
| [https://freakonometrics.hypotheses.org/52777 CLASSIFICATION FROM SCRATCH, BAGGING AND FORESTS 10/8]
| |
|
| |
| == Boosting ==
| |
| * Ch8.2 Bagging, Random Forests and Boosting of [http://www-bcf.usc.edu/~gareth/ISL/ An Introduction to Statistical Learning] and the [http://www-bcf.usc.edu/~gareth/ISL/Chapter%208%20Lab.txt code].
| |
| * [http://freakonometrics.hypotheses.org/19874 An Attempt To Understand Boosting Algorithm]
| |
| * [http://cran.r-project.org/web/packages/gbm/index.html gbm] package. An implementation of extensions to Freund and Schapire's '''AdaBoost algorithm''' and Friedman's '''gradient boosting machine'''. Includes regression methods for least squares, absolute loss, t-distribution loss, [http://mathewanalytics.com/2015/11/13/applied-statistical-theory-quantile-regression/ quantile regression], logistic, multinomial logistic, Poisson, Cox proportional hazards partial likelihood, AdaBoost exponential loss, Huberized hinge loss, and Learning to Rank measures (LambdaMart).
| |
| * https://www.biostat.wisc.edu/~kendzior/STAT877/illustration.pdf
| |
| * http://www.is.uni-freiburg.de/ressourcen/business-analytics/10_ensemblelearning.pdf and [http://www.is.uni-freiburg.de/ressourcen/business-analytics/homework_ensemblelearning_questions.pdf exercise]
| |
| * [https://freakonometrics.hypotheses.org/52782 Classification from scratch]
| |
|
| |
| === AdaBoost ===
| |
| AdaBoost.M1 by Freund and Schapire (1997):
| |
|
| |
| The error rate on the training sample is
| |
| <math>
| |
| \bar{err} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i)),
| |
| </math>
| |
|
| |
| Sequentially apply the weak classification algorithm to repeatedly modified versions of the data, thereby producing a sequence of weak classifiers <math>G_m(x), m=1,2,\dots,M.</math>
| |
|
| |
| The predictions from all of them are combined through a weighted majority vote to produce the final prediction:
| |
| <math>
| |
| G(x) = sign[\sum_{m=1}^M \alpha_m G_m(x)].
| |
| </math>
| |
| Here <math> \alpha_1,\alpha_2,\dots,\alpha_M</math> are computed by the boosting algorithm and weight the contribution of each respective <math>G_m(x)</math>. Their effect is to give higher influence to the more accurate classifiers in the sequence.
| |
|
| |
| === Dropout regularization ===
| |
| [https://statcompute.wordpress.com/2017/08/20/dart-dropout-regularization-in-boosting-ensembles/ DART: Dropout Regularization in Boosting Ensembles]
| |
|
| |
| === Gradient boosting ===
| |
| * https://en.wikipedia.org/wiki/Gradient_boosting
| |
| * [https://shirinsplayground.netlify.com/2018/11/ml_basics_gbm/ Machine Learning Basics - Gradient Boosting & XGBoost]
| |
| * [http://www.sthda.com/english/articles/35-statistical-machine-learning-essentials/139-gradient-boosting-essentials-in-r-using-xgboost/ Gradient Boosting Essentials in R Using XGBOOST]
| |
|
| |
| == Gradient descent ==
| |
| Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function ([https://en.wikipedia.org/wiki/Gradient_descent Wikipedia]).
| |
|
| |
| * [https://spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/ An Introduction to Gradient Descent and Linear Regression] Easy to understand based on simple linear regression. Code is provided too.
| |
| * [http://gradientdescending.com/applying-gradient-descent-primer-refresher/ Applying gradient descent – primer / refresher]
| |
| * [http://sebastianruder.com/optimizing-gradient-descent/index.html An overview of Gradient descent optimization algorithms]
| |
| * [https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/ A Complete Tutorial on Ridge and Lasso Regression in Python]
| |
| * How to choose the learning rate?
| |
| ** [http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex3/ex3.html Machine learning] from Andrew Ng
| |
| ** http://scikit-learn.org/stable/modules/sgd.html
| |
| * R packages
| |
| ** https://cran.r-project.org/web/packages/gradDescent/index.html, https://www.rdocumentation.org/packages/gradDescent/versions/2.0
| |
| ** https://cran.r-project.org/web/packages/sgd/index.html
| |
|
| |
| The error function from a simple linear regression looks like
| |
| : <math>
| |
| \begin{align}
| |
Err(m,b) &= \frac{1}{n}\sum_{i=1}^n (y_i - (m x_i + b))^2, \\
| |
| \end{align}
| |
| </math>
| |
|
| |
We first compute the gradient with respect to each parameter.
| |
| : <math>
| |
| \begin{align}
| |
| \frac{\partial Err}{\partial m} &= \frac{2}{n} \sum_{i=1}^n -x_i(y_i - (m x_i + b)), \\
| |
| \frac{\partial Err}{\partial b} &= \frac{2}{n} \sum_{i=1}^n -(y_i - (m x_i + b))
| |
| \end{align}
| |
| </math>
| |
|
| |
| The gradient descent algorithm uses an iterative method to update the estimates using a tuning parameter called '''learning rate'''.
| |
| <pre>
| |
new_m = m_current - (learningRate * m_gradient)
new_b = b_current - (learningRate * b_gradient)
| |
| </pre>
| |
|
| |
After each iteration, the gradient gets closer to zero. [http://blog.hackerearth.com/gradient-descent-algorithm-linear-regression Coding in R] for the simple linear regression.
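A minimal R sketch of the updates above for simple linear regression (the simulated data, the learning rate 0.01 and the 5000 iterations are arbitrary choices):
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- runif(100, 0, 10)
y <- 2 + 3 * x + rnorm(100)

m <- 0; b <- 0          # initial values
eta <- 0.01             # learning rate
n <- length(y)
for (iter in 1:5000) {
  resid  <- y - (m * x + b)
  grad_m <- -2 * sum(x * resid) / n    # dErr/dm
  grad_b <- -2 * sum(resid) / n        # dErr/db
  m <- m - eta * grad_m
  b <- b - eta * grad_b
}
c(m = m, b = b)
coef(lm(y ~ x))          # essentially the same answer
</syntaxhighlight>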
| |
|
| |
| === Gradient descent vs Newton's method ===
| |
| * [https://stackoverflow.com/a/12066869 What is the difference between Gradient Descent and Newton's Gradient Descent?]
| |
| * [http://www.santanupattanayak.com/2017/12/19/newtons-method-vs-gradient-descent-method-in-tacking-saddle-points-in-non-convex-optimization/ Newton's Method vs Gradient Descent Method in tacking saddle points in Non-Convex Optimization]
| |
| * [https://dinh-hung-tu.github.io/gradient-descent-vs-newton-method/ Gradient Descent vs Newton Method]
| |
|
| |
| == Classification and Regression Trees (CART) ==
| |
| === Construction of the tree classifier ===
| |
| * Node proportion
| |
:<math> p(1|t) + \dots + p(6|t) =1 </math> where <math>p(j|t)</math> denotes the node proportion (the proportion of class ''j'' cases in node ''t''). Here we assume there are 6 classes.
| |
| * Impurity of node t
| |
:<math>i(t)</math> is a nonnegative function <math>\phi</math> of <math>p(1|t), \dots, p(6|t)</math> such that <math> \phi(1/6,1/6,\dots,1/6)</math> is the maximum and <math>\phi(1,0,\dots,0)= \phi(0,1,0,\dots,0)= \dots= \phi(0,0,0,0,0,1)=0</math>. That is, the node impurity is largest when all classes are equally mixed together in the node, and smallest when the node contains only one class.
| |
* Entropy impurity (the Gini index, <math>i(t) = 1 - \sum_{j=1}^6 p(j|t)^2</math>, is another common choice)
:<math>i(t) = - \sum_{j=1}^6 p(j|t) \log p(j|t).</math>
| |
| * Goodness of the split s on node t
| |
:<math>\Delta i(s, t) = i(t) -p_Li(t_L) - p_Ri(t_R), </math> where <math>p_L</math> is the proportion of the cases in ''t'' that go into the left node <math>t_L</math> and <math>p_R</math> is the proportion that go into the right node <math>t_R</math>.
| |
| A tree was grown in the following way: At the root node <math>t_1</math>, a search was made through all candidate splits to find that split <math>s^*</math> which gave the largest decrease in impurity;
| |
| :<math>\Delta i(s^*, t_1) = \max_{s} \Delta i(s, t_1).</math>
| |
| * Class character of a terminal node was determined by the plurality rule. Specifically, if <math>p(j_0|t)=\max_j p(j|t)</math>, then ''t'' was designated as a class <math>j_0</math> terminal node.
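A small example growing and displaying a classification tree with the rpart package, which implements CART-style recursive partitioning (the iris data set is just an illustration):
<syntaxhighlight lang='rsplus'>
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class")  # Gini splitting by default
print(fit)            # splits, node class proportions and terminal-node classes
# plot(fit); text(fit)
</syntaxhighlight>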
| |
|
| |
| === R packages ===
| |
| * [http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf rpart]
| |
| * http://exploringdatablog.blogspot.com/2013/04/classification-tree-models.html
| |
|
| |
| == Partially additive (generalized) linear model trees ==
| |
| * https://eeecon.uibk.ac.at/~zeileis/news/palmtree/
| |
| * https://cran.r-project.org/web/packages/palmtree/index.html
| |
|
| |
| == Supervised Classification, Logistic and Multinomial ==
| |
| * http://freakonometrics.hypotheses.org/19230
| |
|
| |
| == Variable selection ==
| |
| === Review ===
| |
| [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5969114/ Variable selection – A review and recommendations for the practicing statistician] by Heinze et al 2018.
| |
|
| |
| === Variable selection and variable importance plot ===
| |
| * http://freakonometrics.hypotheses.org/19835
| |
|
| |
| === Variable selection and cross-validation ===
| |
| * http://freakonometrics.hypotheses.org/19925
| |
| * http://ellisp.github.io/blog/2016/06/05/bootstrap-cv-strategies/
| |
|
| |
=== Mallows' ''C<sub>p</sub>'' ===
| |
| Mallows's ''C<sub>p</sub>'' addresses the issue of overfitting. The Cp statistic calculated on a sample of data estimates the '''mean squared prediction error (MSPE)'''.
| |
| :<math>
| |
| E\sum_j (\hat{Y}_j - E(Y_j\mid X_j))^2/\sigma^2,
| |
| </math>
| |
| The ''C<sub>p</sub>'' statistic is defined as
| |
| :<math> C_p={SSE_p \over S^2} - N + 2P. </math>
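A small R sketch of the formula above for a submodel of a linear regression, where ''S''<sup>2</sup> is taken from the full model (the mtcars example and the choice of submodel are arbitrary):
<syntaxhighlight lang='rsplus'>
full <- lm(mpg ~ ., data = mtcars)
sub  <- lm(mpg ~ wt + hp, data = mtcars)
S2   <- summary(full)$sigma^2        # error variance estimate from the full model
SSEp <- sum(residuals(sub)^2)
N    <- nrow(mtcars)
P    <- length(coef(sub))            # number of parameters in the submodel
SSEp / S2 - N + 2 * P                # Mallows' Cp for the submodel
</syntaxhighlight>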
| |
|
| |
| * https://en.wikipedia.org/wiki/Mallows%27s_Cp
| |
| * Used in Yuan & Lin (2006) group lasso. The degrees of freedom is estimated by the bootstrap or perturbation methods. Their paper mentioned the performance is comparable with that of 5-fold CV but is computationally much faster.
| |
|
| |
| === Variable selection for mode regression ===
| |
http://www.tandfonline.com/doi/full/10.1080/02664763.2017.1342781 Chen & Zhou, Journal of Applied Statistics, June 2017
| |
|
| |
| == Neural network ==
| |
| * [http://junma5.weebly.com/data-blog/build-your-own-neural-network-classifier-in-r Build your own neural network in R]
| |
| * (Video) [https://youtu.be/ntKn5TPHHAk 10.2: Neural Networks: Perceptron Part 1 - The Nature of Code] from the Coding Train. The book [http://natureofcode.com/book/chapter-10-neural-networks/ THE NATURE OF CODE] by DANIEL SHIFFMAN
| |
| * [https://freakonometrics.hypotheses.org/52774 CLASSIFICATION FROM SCRATCH, NEURAL NETS]. The ROCR package was used to produce the ROC curve.
| |
|
| |
| == Support vector machine (SVM) ==
| |
| * [https://statcompute.wordpress.com/2016/03/19/improve-svm-tuning-through-parallelism/ Improve SVM tuning through parallelism] by using the '''foreach''' and '''doParallel''' packages.
| |
|
| |
| == Quadratic Discriminant Analysis (qda), KNN ==
| |
| [https://datarvalue.blogspot.com/2017/05/machine-learning-stock-market-data-part_16.html Machine Learning. Stock Market Data, Part 3: Quadratic Discriminant Analysis and KNN]
| |
|
| |
| == [https://en.wikipedia.org/wiki/Regularization_(mathematics) Regularization] ==
| |
Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
| |
|
| |
| === Ridge regression ===
| |
| * [https://stats.stackexchange.com/questions/52653/what-is-ridge-regression What is ridge regression?]
| |
| * [https://stats.stackexchange.com/questions/118712/why-does-ridge-estimate-become-better-than-ols-by-adding-a-constant-to-the-diago Why does ridge estimate become better than OLS by adding a constant to the diagonal?] The estimates become more stable if the covariates are highly correlated.
| |
| * (In ridge regression) the matrix we need to invert no longer has determinant near zero, so the solution does not lead to uncomfortably large variance in the estimated parameters. And that’s a good thing. See [https://tamino.wordpress.com/2011/02/12/ridge-regression/ this post].
| |
| * [https://www.tandfonline.com/doi/abs/10.1080/02664763.2018.1526891?journalCode=cjas20 Multicolinearity and ridge regression: results on type I errors, power and heteroscedasticity]
| |
|
| |
| Since L2 norm is used in the regularization, ridge regression is also called L2 regularization.
| |
|
| |
| [https://drsimonj.svbtle.com/ridge-regression-with-glmnet ridge regression with glmnet]
| |
|
| |
Hoerl and Kennard (1970a, 1970b) introduced ridge regression, which minimizes RSS subject to a constraint <math>\sum|\beta_j|^2 \le t</math>. Note that though ridge regression shrinks the OLS estimator toward 0 and yields a biased estimator <math>\hat{\beta} = (X^TX + \lambda I)^{-1} X^T y </math> where <math>\lambda=\lambda(t)</math>, a function of ''t'', the variance is smaller than that of the OLS estimator.
| |
|
| |
| The solution exists if <math>\lambda >0</math> even if <math>n < p </math>.
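A small sketch of the closed-form ridge estimator above, assuming a centered response and standardized predictors with no intercept (the simulated data and λ = 10 are arbitrary):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n <- 50; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- drop(X %*% c(2, -1, 0, 0, 1) + rnorm(n)); y <- y - mean(y)
lambda <- 10
beta_ridge <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
beta_ols   <- solve(t(X) %*% X, t(X) %*% y)
cbind(ols = drop(beta_ols), ridge = drop(beta_ridge))  # ridge coefficients are shrunk toward 0
</syntaxhighlight>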
| |
|
| |
| Ridge regression (L2 penalty) only shrinks the coefficients. In contrast, Lasso method (L1 penalty) tries to shrink some coefficient estimators to exactly zeros. This can be seen from comparing the coefficient path plot from both methods.
| |
|
| |
Geometrically (looking at the contour plot of the cost function together with the penalty's constraint region), the L1 penalty (the sum of the absolute values of the coefficients) gives a positive probability of some coefficients being exactly zero, because the solution can hit a corner of the constraint region. For example, in the 2D case (X-axis=<math>\beta_0</math>, Y-axis=<math>\beta_1</math>), the constraint region of the L1 penalty <math>|\beta_0| + |\beta_1|</math> is a diamond (with corners on the axes) whereas that of the L2 penalty (<math>\beta_0^2 + \beta_1^2</math>) is a circle.
| |
|
| |
| === Lasso/glmnet, adaptive lasso and FAQs ===
| |
* https://en.wikipedia.org/wiki/Lasso_(statistics). It has a discussion of the case when two covariates are highly correlated. For example, if gene <math>j</math> and gene <math>k</math> are identical, then the values of <math>\beta _{j}</math> and <math>\beta _{k}</math> that minimize the lasso objective function are not uniquely determined. Elastic Net has been designed to address this shortcoming.
| |
** The tendency of strongly correlated covariates to have similar regression coefficients is referred to as the '''grouping''' effect. From the wikipedia page: ''"one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does. In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso)"''.
| |
|
| |
** The larger the <math>\lambda</math>, the more coefficients are shrunk to exactly zero (think about '''coefficient path''' plots) and thus the simpler (more '''regularized''') the model.
| |
** If <math>\lambda</math> is zero, the problem reduces to ordinary regression; as <math>\lambda</math> goes to infinity, all penalized coefficients are shrunk to zero.
| |
| ** In terms of the bias-variance tradeoff, the larger the <math>\lambda</math>, the higher the bias and the lower the variance of the coefficient estimators.
| |
|
| |
| [[:File:Glmnetplot.svg]] [[:File:Glmnet path.svg]] [[:File:Glmnet l1norm.svg]]
| |
| : <syntaxhighlight lang='rsplus'>
| |
library(glmnet)
library(doParallel)        # needed for parallel=TRUE in cv.glmnet()
registerDoParallel(cores = 2)
set.seed(1010)
| |
| n=1000;p=100
| |
| nzc=trunc(p/10)
| |
| x=matrix(rnorm(n*p),n,p)
| |
| beta=rnorm(nzc)
| |
| fx= x[,seq(nzc)] %*% beta
| |
| eps=rnorm(n)*5
| |
| y=drop(fx+eps)
| |
| px=exp(fx)
| |
| px=px/(1+px)
| |
| ly=rbinom(n=length(px),prob=px,size=1)
| |
|
| |
| ## Full lasso
| |
| set.seed(999)
| |
| cv.full <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE)
| |
| plot(cv.full) # cross-validation curve and two lambda's
| |
| plot(glmnet(x, ly, family='binomial', alpha=1), xvar="lambda", label=TRUE) # coefficient path plot
| |
| plot(glmnet(x, ly, family='binomial', alpha=1)) # L1 norm plot
| |
| log(cv.full$lambda.min) # -4.546394
| |
| log(cv.full$lambda.1se) # -3.61605
| |
| sum(coef(cv.full, s=cv.full$lambda.min) != 0) # 44
| |
|
| |
| ## Ridge Regression to create the Adaptive Weights Vector
| |
| set.seed(999)
| |
| cv.ridge <- cv.glmnet(x, ly, family='binomial', alpha=0, parallel=TRUE)
| |
| wt <- 1/abs(matrix(coef(cv.ridge, s=cv.ridge$lambda.min)
| |
| [, 1][2:(ncol(x)+1)] ))^1 ## Using gamma = 1, exclude intercept
| |
| ## Adaptive Lasso using the 'penalty.factor' argument
| |
| set.seed(999)
| |
| cv.lasso <- cv.glmnet(x, ly, family='binomial', alpha=1, parallel=TRUE, penalty.factor=wt)
| |
# default type.measure="deviance" for logistic regression
| |
| plot(cv.lasso)
| |
| log(cv.lasso$lambda.min) # -2.995375
| |
| log(cv.lasso$lambda.1se) # -0.7625655
| |
| sum(coef(cv.lasso, s=cv.lasso$lambda.min) != 0) # 34
| |
| </syntaxhighlight>
| |
* A list of potential lambdas: see the [http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html#lin Linear Regression] case. The λ sequence is determined by '''lambda.max''' and '''lambda.min.ratio'''. The latter (default is ifelse(nobs<nvars,0.01,0.0001)) is the ratio of the smallest value of the generated λ sequence (say ''lambda.min'') to ''lambda.max''. The program then generates ''nlambda'' values, equally spaced on the log scale, from ''lambda.max'' down to ''lambda.min''. ''lambda.max'' is not user-supplied, but is easily computed from the input x and y; it is the smallest value of ''lambda'' for which all the coefficients are zero.
| |
| * [https://privefl.github.io/blog/choosing-hyper-parameters-in-penalized-regression/ Choosing hyper-parameters (α and λ) in penalized regression] by Florian Privé
| |
| * [https://stats.stackexchange.com/questions/70249/feature-selection-model-with-glmnet-on-methylation-data-pn lambda.min vs lambda.1se]
| |
** The '''lambda.1se''' is the largest value of λ whose cross-validated error is within 1 standard error of the minimum; it gives a model that is simpler than the best model ('''lambda.min'''). In other words, using ''lambda.1se'' as the selected value for λ results in a model that is slightly simpler than the best model but which cannot be distinguished from the best model in terms of error given the uncertainty in the k-fold CV estimate of the error of the best model.
** The '''lambda.min''' option refers to the value of λ at the lowest CV error. The error at this value of λ is the average of the errors over the k folds and hence this estimate of the error is uncertain.
| |
| * https://www.rdocumentation.org/packages/glmnet/versions/2.0-10/topics/glmnet
| |
| * [http://blog.revolutionanalytics.com/2016/11/glmnetutils.html glmnetUtils: quality of life enhancements for elastic net regression with glmnet]
| |
| * Mixing parameter: alpha=1 is the '''lasso penalty''', and alpha=0 the '''ridge penalty''' and anything between 0–1 is '''Elastic net'''.
| |
** Ridge regression uses the Euclidean distance/L2-norm as the penalty. It won't remove any variables.
| |
| ** Lasso uses L1-norm as the penalty. Some of the coefficients may be shrunk exactly to zero.
| |
| * [https://www.quora.com/In-ridge-regression-and-lasso-what-is-lambda In ridge regression and lasso, what is lambda?]
| |
| ** Lambda is a penalty coefficient. Large lambda will shrink the coefficients.
| |
** cv.glmnet()$lambda.1se gives the most regularized model such that the error is within one standard error of the minimum
| |
| * cv.glmnet() has a logical parameter '''parallel''' which is useful if a cluster or cores have been previously allocated
| |
| * [http://statweb.stanford.edu/~tibs/sta305files/Rudyregularization.pdf Ridge regression and the LASSO]
| |
| * Standard error/Confidence interval
| |
| ** [https://www.reddit.com/r/statistics/comments/1vg8k0/standard_errors_in_glmnet/ Standard Errors in GLMNET] and [https://stackoverflow.com/questions/39750965/confidence-intervals-for-ridge-regression Confidence intervals for Ridge regression]
| |
| ** '''[https://cran.r-project.org/web/packages/penalized/vignettes/penalized.pdf#page=18 Why SEs are not meaningful and are usually not provided in penalized regression?]'''
| |
| **# Hint: standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods.
| |
| **# '''Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias.'''
| |
| **# The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part.
| |
| **# Any bootstrap-based calculations can only give an assessment of the variance of the estimates.
| |
| **# Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
| |
| ** [https://stats.stackexchange.com/tags/glmnet/hot Hottest glmnet questions from stackexchange].
| |
| ** [https://stats.stackexchange.com/questions/91462/standard-errors-for-lasso-prediction-using-r Standard errors for lasso prediction]. There might not be a consensus on a statistically valid method of calculating standard errors for the lasso predictions.
| |
| ** [https://www4.stat.ncsu.edu/~lu/programcodes.html Code] for Adaptive-Lasso for Cox's proportional hazards model by Zhang & Lu (2007). This can compute the SE of estimates. The weights are originally based on the maximizers of the log partial likelihood. However, the beta may not be estimable in cases such as high-dimensional gene data, or the beta may be unstable if strong collinearity exists among covariates. In such cases, robust estimators such as ridge regression estimators would be used to determine the weights.
| |
| * LASSO vs Least angle regression
| |
| ** https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf
| |
| ** [http://web.stanford.edu/~hastie/TALKS/larstalk.pdf Least Angle Regression, Forward Stagewise and the Lasso]
| |
| ** https://www.quora.com/What-is-Least-Angle-Regression-and-when-should-it-be-used
| |
| ** [http://statweb.stanford.edu/~tibs/lasso/simple.html A simple explanation of the Lasso and Least Angle Regression]
| |
| ** https://stats.stackexchange.com/questions/4663/least-angle-regression-vs-lasso
| |
| ** https://cran.r-project.org/web/packages/lars/index.html
| |
| * '''Oracle property''' and '''adaptive lasso'''
| |
| ** [http://www.stat.wisc.edu/~shao/stat992/fan-li2001.pdf Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties], Fan & Li (2001) JASA
| |
** [http://ricardoscr.github.io/how-to-adaptive-lasso.html Adaptive Lasso: What it is and how to implement in R]. Adaptive lasso seeks to minimize <math> RSS(\beta) + \lambda \sum_1^p \hat{\omega}_j |\beta_j| </math> where <math>\lambda</math> is the tuning parameter, <math>\hat{\omega}_j = \frac{1}{(|\hat{\beta}_j^{ini}|)^\gamma}</math> is the adaptive weights vector and <math>\hat{\beta}_j^{ini}</math> is an initial estimate of the coefficients obtained through ridge regression. '''Adaptive Lasso ends up penalizing more those coefficients with lower initial estimates.''' <math>\gamma</math> is a positive constant for adjustment of the adaptive weight vector, and the authors suggest the possible values of 0.5, 1 and 2.
| |
| ** When n goes to infinity, <math>\hat{\omega}_j |\beta_j| </math> converges to <math>I(\beta_j \neq 0) </math>. So the adaptive Lasso procedure can be regarded as an automatic implementation of best-subset selection in some asymptotic sense.
| |
| ** [https://stats.stackexchange.com/questions/229142/what-is-the-oracle-property-of-an-estimator What is the oracle property of an estimator?] An oracle estimator must be consistent in 1) '''variable selection''' and 2) '''consistent parameter estimation'''.
| |
| ** (Linear regression) The adaptive lasso and its oracle properties Zou (2006, JASA)
| |
| ** (Cox model) Adaptive-LASSO for Cox's proportional hazard model by Zhang and Lu (2007, Biometrika)
| |
| **[https://insightr.wordpress.com/2017/06/14/when-the-lasso-fails/ When the LASSO fails???]. Adaptive lasso is used to demonstrate its usefulness.
| |
| * [https://statisticaloddsandends.wordpress.com/2018/11/13/a-deep-dive-into-glmnet-penalty-factor/ A deep dive into glmnet: penalty.factor], [https://statisticaloddsandends.wordpress.com/2018/11/15/a-deep-dive-into-glmnet-standardize/ standardize], [https://statisticaloddsandends.wordpress.com/2019/01/09/a-deep-dive-into-glmnet-offset/ offset]
| |
| ** Lambda sequence is not affected by the "penalty.factor"
| |
| ** How "penalty.factor" used by the objective function may need to be corrected
| |
| * Some issues:
| |
| ** With group of highly correlated features, Lasso tends to select amongst them arbitrarily.
| |
| ** Often empirically ridge has better predictive performance than lasso but lasso leads to sparser solution
| |
| ** Elastic-net (Zou & Hastie '05) aims to address these issues: hybrid between Lasso and ridge regression, uses L1 and L2 penalties.
| |
| * [https://statcompute.wordpress.com/2019/02/23/gradient-free-optimization-for-glmnet-parameters/ Gradient-Free Optimization for GLMNET Parameters]
| |
| * [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2656-1 Gsslasso Cox]: a Bayesian hierarchical model for predicting survival and detecting associated genes by incorporating pathway information, Tang et al BMC Bioinformatics 2019
| |
|
| |
| === Lasso logistic regression ===
| |
| https://freakonometrics.hypotheses.org/52894
| |
|
| |
| === Lagrange Multipliers ===
| |
| [https://medium.com/@andrew.chamberlain/a-simple-explanation-of-why-lagrange-multipliers-works-253e2cdcbf74 A Simple Explanation of Why Lagrange Multipliers Works]
| |
|
| |
| === How to solve lasso/convex optimization ===
| |
| * [https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf Convex Optimization] by Boyd S, Vandenberghe L, Cambridge 2004. It is cited by Zhang & Lu (2007). The '''interior point algorithm''' can be used to solve the optimization problem in adaptive lasso.
| |
| * Review of '''gradient descent''':
| |
| ** Finding maximum: <math>w^{(t+1)} = w^{(t)} + \eta \frac{dg(w)}{dw}</math>, where <math>\eta</math> is stepsize.
| |
| ** Finding minimum: <math>w^{(t+1)} = w^{(t)} - \eta \frac{dg(w)}{dw}</math>.
| |
| ** [https://stackoverflow.com/questions/12066761/what-is-the-difference-between-gradient-descent-and-newtons-gradient-descent What is the difference between Gradient Descent and Newton's Gradient Descent?] Newton's method requires <math>g''(w)</math>, more smoothness of g(.).
| |
| ** Finding minimum for multiple variables ('''gradient descent'''): <math>w^{(t+1)} = w^{(t)} - \eta \Delta g(w^{(t)})</math>. For the least squares problem, <math>g(w) = RSS(w)</math>.
| |
** Finding minimum for multiple variables in the least squares problem (minimize <math>RSS(w)</math>): <math>\text{partial}(j) = -2\sum h_j(x_i)(y_i - \hat{y}_i(w^{(t)}))</math>, <math>w_j^{(t+1)} = w_j^{(t)} - \eta \; \text{partial}(j)</math>
| |
** Finding minimum for multiple variables in the ridge regression problem (minimize <math>RSS(w)+\lambda ||w||_2^2=(y-Hw)'(y-Hw)+\lambda w'w</math>): <math>\text{partial}(j) = -2\sum h_j(x_i)(y_i - \hat{y}_i(w^{(t)}))</math>, <math>w_j^{(t+1)} = (1-2\eta \lambda) w_j^{(t)} - \eta \; \text{partial}(j)</math>. Compared to the closed form approach: <math>\hat{w} = (H'H + \lambda I)^{-1}H'y</math> where 1. the inverse exists even if N<D as long as <math>\lambda > 0</math> and 2. the complexity of the inverse is <math>O(D^3)</math>, where D is the dimension of the covariates.
| |
| * '''Cyclical coordinate descent''' was used ([https://cran.r-project.org/web/packages/glmnet/vignettes/glmnet_beta.pdf#page=1 vignette]) in the glmnet package. See also '''[https://en.wikipedia.org/wiki/Coordinate_descent coordinate descent]'''. The reason we call it 'descent' is because we want to 'minimize' an objective function.
| |
| ** <math>\hat{w}_j = \min_w g(\hat{w}_1, \cdots, \hat{w}_{j-1},w, \hat{w}_{j+1}, \cdots, \hat{w}_D)</math>
| |
| ** See [https://www.jstatsoft.org/article/view/v033i01 paper] on JSS 2010. The Cox PHM case also uses the cyclical coordinate descent method; see the [https://www.jstatsoft.org/article/view/v039i05 paper] on JSS 2011.
| |
| ** Coursera's [https://www.coursera.org/learn/ml-regression/lecture/rb179/feature-selection-lasso-and-nearest-neighbor-regression Machine learning course 2: Regression] at 1:42. [http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=12 Soft-thresholding] the coefficients is the key for the L1 penalty. The range for the thresholding is controlled by <math>\lambda</math>. Note to view the videos and all materials in coursera we can enroll to audit the course without starting a trial.
| |
| ** No step size is required as in gradient descent.
| |
| ** [https://sandipanweb.wordpress.com/2017/05/04/implementing-lasso-regression-with-coordinate-descent-and-the-sub-gradient-of-the-l1-penalty-with-soft-thresholding/ Implementing LASSO Regression with Coordinate Descent, Sub-Gradient of the L1 Penalty and Soft Thresholding in Python]
| |
| ** Coordinate descent in the least squares problem: <math>\frac{\partial}{\partial w_j} RSS(w)= -2 \rho_j + 2 w_j</math>; i.e. <math>\hat{w}_j = \rho_j</math>.
| |
** Coordinate descent in the Lasso problem (for normalized features; a short R sketch of this soft-thresholding rule is given after this list): <math>
| |
| \hat{w}_j =
| |
| \begin{cases}
| |
| \rho_j + \lambda/2, & \text{if }\rho_j < -\lambda/2 \\
| |
| 0, & \text{if } -\lambda/2 \le \rho_j \le \lambda/2\\
| |
| \rho_j- \lambda/2, & \text{if }\rho_j > \lambda/2
| |
| \end{cases}
| |
| </math>
| |
** Choosing <math>\lambda</math> via cross validation tends to favor less sparse solutions and thus a smaller <math>\lambda</math> than the optimal choice for feature selection. See "Machine learning: a probabilistic perspective", Murphy 2012.
| |
| * Classical: Least angle regression (LARS) Efron et al 2004.
| |
| * [https://www.mathworks.com/help/stats/lasso.html?s_tid=gn_loc_drop Alternating Direction Method of Multipliers (ADMM)]. Boyd, 2011. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends in Machine Learning. Vol. 3, No. 1, 2010, pp. 1–122.
| |
| ** https://stanford.edu/~boyd/papers/pdf/admm_slides.pdf
| |
| ** [https://cran.r-project.org/web/packages/ADMM/ ADMM] package
| |
| ** [https://www.quora.com/Convex-Optimization-Whats-the-advantage-of-alternating-direction-method-of-multipliers-ADMM-and-whats-the-use-case-for-this-type-of-method-compared-against-classic-gradient-descent-or-conjugate-gradient-descent-method What's the advantage of alternating direction method of multipliers (ADMM), and what's the use case for this type of method compared against classic gradient descent or conjugate gradient descent method?]
| |
| * [https://math.stackexchange.com/questions/771585/convexity-of-lasso If some variables in design matrix are correlated, then LASSO is convex or not?]
| |
| * Tibshirani. [http://www.jstor.org/stable/2346178 Regression shrinkage and selection via the lasso] (free). JRSS B 1996.
| |
| * [http://www.econ.uiuc.edu/~roger/research/conopt/coptr.pdf Convex Optimization in R] by Koenker & Mizera 2014.
| |
| * [https://web.stanford.edu/~hastie/Papers/pathwise.pdf Pathwise coordinate optimization] by Friedman et al 2007.
| |
| * [http://web.stanford.edu/~hastie/StatLearnSparsity/ Statistical learning with sparsity: the Lasso and generalizations] T. Hastie, R. Tibshirani, and M. Wainwright, 2015 (book)
| |
| * Element of Statistical Learning (book)
| |
| * https://youtu.be/A5I1G1MfUmA StatsLearning Lect8h 110913
| |
| * Fu's (1998) shooting algorithm for Lasso ([http://web.stanford.edu/~hastie/TALKS/CD.pdf#page=11 mentioned] in the history of coordinate descent) and Zhang & Lu's (2007) modified shooting algorithm for adaptive Lasso.
| |
| * [https://www.cs.ubc.ca/~murphyk/MLbook/ Machine Learning: a Probabilistic Perspective] Choosing <math>\lambda</math> via cross validation tends to favor less sparse solutions and thus smaller <math>\lambda</math> than optimal choice for feature selection.
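A short R sketch of the soft-thresholding rule displayed in the coordinate-descent bullet above (normalized features; the function name and the numeric values of ρ<sub>j</sub> and λ are just illustrations):
<syntaxhighlight lang='rsplus'>
soft_threshold <- function(rho, lambda) {
  if (rho < -lambda/2)      rho + lambda/2
  else if (rho > lambda/2)  rho - lambda/2
  else                      0
}
soft_threshold( 1.3, 1)   #  0.8
soft_threshold( 0.3, 1)   #  0
soft_threshold(-0.9, 1)   # -0.4
</syntaxhighlight>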
| |
|
| |
| === Quadratic programming ===
| |
| * https://en.wikipedia.org/wiki/Quadratic_programming
| |
| * https://en.wikipedia.org/wiki/Lasso_(statistics)
| |
| * [https://cran.r-project.org/web/views/Optimization.html CRAN Task View: Optimization and Mathematical Programming]
| |
| * [https://cran.r-project.org/web/packages/quadprog/ quadprog] package and [https://www.rdocumentation.org/packages/quadprog/versions/1.5-5/topics/solve.QP solve.QP()] function
| |
| * [https://rwalk.xyz/solving-quadratic-progams-with-rs-quadprog-package/ Solving Quadratic Progams with R’s quadprog package]
| |
| * [https://rwalk.xyz/more-on-quadratic-programming-in-r/ More on Quadratic Programming in R]
| |
| * https://optimization.mccormick.northwestern.edu/index.php/Quadratic_programming
| |
| * [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12273 Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects] where the algorithm from [https://ieeexplore.ieee.org/abstract/document/7448814/ Lee] 2016 was used.
| |
|
| |
| === Highly correlated covariates ===
| |
| '''1. Elastic net'''
| |
|
| |
| ''' 2. Group lasso'''
| |
| * [http://pages.stat.wisc.edu/~myuan/papers/glasso.final.pdf Yuan and Lin 2006] JRSSB
| |
| * https://cran.r-project.org/web/packages/gglasso/, http://royr2.github.io/2014/04/15/GroupLasso.html
| |
| * https://cran.r-project.org/web/packages/grpreg/
| |
| * https://cran.r-project.org/web/packages/grplasso/ by Lukas Meier ([http://people.ee.duke.edu/~lcarin/lukas-sara-peter.pdf paper]), used in the '''biospear''' package for survival data
| |
| * https://cran.r-project.org/web/packages/SGL/index.html, http://royr2.github.io/2014/05/20/SparseGroupLasso.html, http://web.stanford.edu/~hastie/Papers/SGLpaper.pdf
| |
|
| |
| === Other Lasso ===
| |
| * [https://statisticaloddsandends.wordpress.com/2019/01/14/pclasso-a-new-method-for-sparse-regression/ pcLasso]
| |
| * [https://www.biorxiv.org/content/10.1101/630079v1 A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems] Qian et al 2019 and the [https://github.com/junyangq/snpnet snpnet] package
| |
|
| |
| == Comparison by plotting ==
| |
| If we are running simulation, we can use the [https://github.com/pbiecek/DALEX DALEX] package to visualize the fitting result from different machine learning methods and the true model. See http://smarterpoland.pl/index.php/2018/05/ml-models-what-they-cant-learn.
| |
|
| |
| == UMAP ==
| |
| * https://arxiv.org/abs/1802.03426
| |
| * https://www.biorxiv.org/content/early/2018/04/10/298430
| |
| * https://cran.r-project.org/web/packages/umap/index.html
| |
|
| |
| = Imbalanced Classification =
| |
| * [https://www.analyticsvidhya.com/blog/2016/03/practical-guide-deal-imbalanced-classification-problems/ Practical Guide to deal with Imbalanced Classification Problems in R]
| |
| * [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets]
| |
| * [https://github.com/dariyasydykova/open_projects/tree/master/ROC_animation Roc animation]
| |
|
| |
| = Deep Learning =
| |
| * [https://bcourses.berkeley.edu/courses/1453965/wiki CS294-129 Designing, Visualizing and Understanding Deep Neural Networks] from berkeley.
| |
| * https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm
| |
| * [https://www.r-bloggers.com/deep-learning-from-first-principles-in-python-r-and-octave-part-5/ Deep Learning from first principles in Python, R and Octave – Part 5]
| |
|
| |
| == Tensor Flow (tensorflow package) ==
| |
| * https://tensorflow.rstudio.com/
| |
| * [https://youtu.be/atiYXm7JZv0 Machine Learning with R and TensorFlow] (Video)
| |
| * [https://developers.google.com/machine-learning/crash-course/ Machine Learning Crash Course] with TensorFlow APIs
| |
| * [http://www.pnas.org/content/early/2018/03/09/1717139115 Predicting cancer outcomes from histology and genomics using convolutional networks] Pooya Mobadersany et al, PNAS 2018
| |
|
| |
| == Biological applications ==
| |
| * [https://academic.oup.com/bioinformatics/article-abstract/33/22/3685/4092933 An introduction to deep learning on biological sequence data: examples and solutions]
| |
|
| |
| == Machine learning resources ==
| |
| * [https://www.makeuseof.com/tag/machine-learning-courses/ These Machine Learning Courses Will Prepare a Career Path for You]
| |
| * [https://blog.datasciencedojo.com/machine-learning-algorithms/ 101 Machine Learning Algorithms for Data Science with Cheat Sheets]
| |
|
| |
| = Randomization inference =
| |
| * Google: randomization inference in r
| |
| * [http://www.personal.psu.edu/ljk20/zeros.pdf Randomization Inference for Outcomes with Clumping at Zero], [https://amstat.tandfonline.com/doi/full/10.1080/00031305.2017.1385535#.W09zpdhKg3E The American Statistician] 2018
| |
| * [https://jasonkerwin.com/nonparibus/2017/09/25/randomization-inference-vs-bootstrapping-p-values/ Randomization inference vs. bootstrapping for p-values]
| |
|
| |
| = Bootstrap =
| |
| * [https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 Bootstrap] from Wikipedia.
| |
| ** This contains an overview of different methods for computing bootstrap confidence intervals.
| |
| ** [https://www.rdocumentation.org/packages/boot/versions/1.3-20/topics/boot.ci boot.ci()] from the 'boot' package provides a short explanation for different methods for computing bootstrap confidence intervals.
| |
| * [https://github.com/jtleek/slipper Bootstrapping made easy and tidy with slipper]
| |
| * [https://cran.r-project.org/web/packages/bootstrap/ bootstrap] package. "An Introduction to the Bootstrap" by B. Efron and R. Tibshirani, 1993
| |
| * [https://cran.r-project.org/web/packages/boot/ boot] package. Functions and datasets for bootstrapping from the book [https://books.google.com/books?id=_uKcAgAAQBAJ Bootstrap Methods and Their Application] by A. C. Davison and D. V. Hinkley (1997, CUP). A short course material can be found [https://www.researchgate.net/publication/37434447_Bootstrap_Methods_and_Their_Application here].The main functions are '''boot()''' and '''boot.ci()'''.
| |
| ** https://www.rdocumentation.org/packages/boot/versions/1.3-20
| |
| ** [https://www.statmethods.net/advstats/bootstrapping.html R in Action] Nonparametric bootstrapping <syntaxhighlight lang='rsplus'>
| |
| # Compute the bootstrapped 95% confidence interval for R-squared in the linear regression
| |
| rsq <- function(data, indices, formula) {
| |
| d <- data[indices,] # allows boot to select sample
| |
| fit <- lm(formula, data=d)
| |
| return(summary(fit)$r.square)
| |
} # 'formula' is optional; whether it is needed depends on the problem
| |
|
| |
| # bootstrapping with 1000 replications
| |
| set.seed(1234)
| |
| bootobject <- boot(data=mtcars, statistic=rsq, R=1000,
| |
| formula=mpg~wt+disp)
| |
| plot(bootobject) # or plot(bootobject, index = 1) if we have multiple statistics
| |
| ci <- boot.ci(bootobject, conf = .95, type=c("perc", "bca") )
| |
| # default type is "all" which contains c("norm","basic", "stud", "perc", "bca").
| |
| # 'bca' (Bias Corrected and Accelerated) by Efron 1987 uses
| |
| # percentiles but adjusted to account for bias and skewness.
| |
| # Level Percentile BCa
| |
| # 95% ( 0.6838, 0.8833 ) ( 0.6344, 0.8549 )
| |
| # Calculations and Intervals on Original Scale
| |
| # Some BCa intervals may be unstable
| |
| ci$bca[4:5]
| |
| # [1] 0.6343589 0.8549305
| |
| # the mean is not the same
| |
| mean(c(0.6838, 0.8833 ))
| |
| # [1] 0.78355
| |
| mean(c(0.6344, 0.8549 ))
| |
| # [1] 0.74465
| |
| summary(lm(mpg~wt+disp, data = mtcars))$r.square
| |
| # [1] 0.7809306
| |
| </syntaxhighlight>
| |
| ** [https://www.r-project.org/doc/Rnews/Rnews_2002-3.pdf#page=2 Resampling Methods in R: The boot Package] by Canty
| |
| ** [https://pdfs.semanticscholar.org/0203/d0902185dd819bf38c8dacd077df0122b89d.pdf An introduction to bootstrap with applications with R] by Davison and Kuonen.
| |
| ** http://people.tamu.edu/~alawing/materials/ESSM689/Btutorial.pdf
| |
| ** http://statweb.stanford.edu/~tibs/sta305files/FoxOnBootingRegInR.pdf
| |
| ** http://www.stat.wisc.edu/~larget/stat302/chap3.pdf
| |
| ** https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf. Variance, se, bias, confidence interval (basic, percentile), hypothesis testing, parametric & non-parametric bootstrap, bootstrapping regression models.
| |
| * http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/references/R-bootstrap.pdf No package is used
| |
| * http://web.as.uky.edu/statistics/users/pbreheny/621/F10/notes/9-21.pdf Bootstrap confidence interval
| |
| * http://www-stat.wharton.upenn.edu/~stine/research/spida_2005.pdf
| |
| * Optimism corrected bootstrapping ([https://www4.stat.ncsu.edu/~lu/ST745/sim_modelchecking.pdf#page=12 Harrell et al 1996])
| |
| ** [http://thestatsgeek.com/2014/10/04/adjusting-for-optimismoverfitting-in-measures-of-predictive-ability-using-bootstrapping/ Adjusting for optimism/overfitting in measures of predictive ability using bootstrapping]
| |
| ** [https://intobioinformatics.wordpress.com/2018/12/25/optimism-corrected-bootstrapping-a-problematic-method/ Part 1]: Optimism corrected bootstrapping: a problematic method
| |
| ** [https://intobioinformatics.wordpress.com/2018/12/26/part-2-optimism-corrected-bootstrapping-is-definitely-bias-further-evidence/ Part 2]: Optimism corrected bootstrapping is definitely bias, further evidence
| |
| ** [https://intobioinformatics.wordpress.com/2018/12/27/part-3-two-more-implementations-of-optimism-corrected-bootstrapping-show-shocking-bias/ Part 3]: Two more implementations of optimism corrected bootstrapping show shocking bias
| |
| ** [https://intobioinformatics.wordpress.com/2018/12/28/part-4-more-bias-and-why-does-bias-occur-in-optimism-corrected-bootstrapping/ Part 4]: Why does bias occur in optimism corrected bootstrapping?
| |
| ** [https://intobioinformatics.wordpress.com/2018/12/29/part-5-corrections-to-optimism-corrected-bootstrapping-series-but-it-still-is-problematic/ Part 5]: Code corrections to optimism corrected bootstrapping series
| |
|
| |
| == Nonparametric bootstrap ==
| |
This is the most common bootstrap method.
| |
|
| |
| [https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxy054/5106666 The upstrap] Crainiceanu & Crainiceanu, Biostatistics 2018
| |
|
| |
| == Parametric bootstrap ==
| |
| * Parametric bootstraps resample a known distribution function, whose parameters are estimated from your sample
| |
| * http://www.math.ntu.edu.tw/~hchen/teaching/LargeSample/notes/notebootstrap.pdf#page=3 No package is used
| |
| * [http://influentialpoints.com/Training/nonparametric-or-parametric_bootstrap.htm A parametric or non-parametric bootstrap?]
| |
| * https://www.stat.cmu.edu/~cshalizi/402/lectures/08-bootstrap/lecture-08.pdf#page=11
| |
| * [https://bioconductor.org/packages/release/bioc/vignettes/simulatorZ/inst/doc/simulatorZ-vignette.pdf simulatorZ] Bioc package
| |
|
| |
| = Cross Validation =
| |
| R packages:
| |
| * [https://cran.r-project.org/web/packages/rsample/index.html rsample] (released July 2017)
| |
| * [https://cran.r-project.org/web/packages/CrossValidate/index.html CrossValidate] (released July 2017)
| |
|
| |
| == Difference between CV & bootstrapping ==
| |
| [https://stats.stackexchange.com/a/18355 Differences between cross validation and bootstrapping to estimate the prediction error]
| |
| * CV tends to be less biased but K-fold CV has fairly large variance.
| |
| * Bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic).
| |
* The .632 and .632+ rules have been adapted to deal with the bootstrap bias
| |
* Repeated CV runs K-fold CV several times with different random partitions and averages the results
| |
|
| |
| == .632 and .632+ bootstrap ==
| |
| * 0.632 bootstrap: Efron's paper [https://www.jstor.org/stable/pdf/2288636.pdf Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation] in 1983.
| |
| * 0.632+ bootstrap: The CV estimate of prediction error is nearly unbiased but can be highly variable. See [https://www.tandfonline.com/doi/pdf/10.1080/01621459.1997.10474007 Improvements on Cross-Validation: The .632+ Bootstrap Method] by Efron and Tibshirani, JASA 1997.
| |
| * Chap 17.7 from "An Introduction to the Bootstrap" by Efron and Tibshirani. Chapman & Hall.
| |
* Chap 7.4 (resubstitution error <math>\overline{err} </math>) and chap 7.11 (<math>Err_{boot(1)}</math>, the leave-one-out bootstrap estimate of prediction error) from "The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman. Springer.
| |
* [http://stats.stackexchange.com/questions/96739/what-is-the-632-rule-in-bootstrapping What is the .632 bootstrap]? (a small numerical sketch is given after this list)
| |
| : <math>
| |
| Err_{.632} = 0.368 \overline{err} + 0.632 Err_{boot(1)}
| |
| </math>
| |
| * [https://link.springer.com/referenceworkentry/10.1007/978-1-4419-9863-7_1328 Bootstrap, 0.632 Bootstrap, 0.632+ Bootstrap] from Encyclopedia of Systems Biology by Springer.
| |
| * bootpred() from bootstrap function.
| |
* The .632 bootstrap estimate can be extended to statistics other than prediction error. See the paper [https://www.tandfonline.com/doi/full/10.1080/10543406.2016.1226329 Issues in developing multivariable molecular signatures for guiding clinical care decisions] by Sachs. [https://github.com/sachsmc/signature-tutorial Source code]. Let <math>\phi</math> be a performance metric, <math>S_b</math> a bootstrap sample of size n, and <math>S_{-b}</math> the subset of <math>S</math> that is disjoint from <math>S_b</math> (the test set).
| |
| : <math>
| |
| \hat{E}^*[\phi_{\mathcal{F}}(S)] = .368 \hat{E}[\phi_{f}(S)] + 0.632 \hat{E}[\phi_{f_b}(S_{-b})]
| |
| </math>
| |
| : where <math>\hat{E}[\phi_{f}(S)]</math> is the naive estimate of <math>\phi_f</math> using the entire dataset.
| |
| * For survival data
| |
| ** [https://cran.r-project.org/web/packages/ROC632/ ROC632] package, [https://repositorium.sdum.uminho.pt/bitstream/1822/52744/1/paper4_final_version_CatarinaSantos_ACB.pdf Overview], and the paper [https://www.degruyter.com/view/j/sagmb.2012.11.issue-6/1544-6115.1815/1544-6115.1815.xml?format=INT Time Dependent ROC Curves for the Estimation of True Prognostic Capacity of Microarray Data] by Founcher 2012.
| |
| ** [https://onlinelibrary.wiley.com/doi/full/10.1111/j.1541-0420.2007.00832.x Efron-Type Measures of Prediction Error for Survival Analysis] Gerds 2007.
| |
| ** [https://academic.oup.com/bioinformatics/article/23/14/1768/188061 Assessment of survival prediction models based on microarray data] Schumacher 2007. Brier score.
| |
| ** [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194196/ Evaluating Random Forests for Survival Analysis using Prediction Error Curves] Mogensen, 2012. [https://cran.r-project.org/web/packages/pec/ pec] package
| |
| ** [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-12-102 Assessment of performance of survival prediction models for cancer prognosis] Chen 2012. Concordance, ROC... But bootstrap was not used.
| |
| ** [https://www.sciencedirect.com/science/article/pii/S1672022916300390#b0150 Comparison of Cox Model Methods in A Low-dimensional Setting with Few Events] 2016. Concordance, calibration slopes RMSE are considered.
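A minimal numerical sketch of the .632 estimate above, using the misclassification error of LDA on a two-class subset of the iris data (the learner, the data set and B = 100 are arbitrary choices for illustration):
<syntaxhighlight lang='rsplus'>
library(MASS)
set.seed(1)
dat <- droplevels(iris[iris$Species != "setosa", ])
n <- nrow(dat); B <- 100

## resubstitution (training) error, \overline{err}
fit_all <- lda(Species ~ ., data = dat)
err_bar <- mean(predict(fit_all, dat)$class != dat$Species)

## leave-one-out bootstrap error, Err_{boot(1)}: each case is predicted only by
## models whose bootstrap sample did not contain that case
err_mat <- matrix(NA, B, n)
for (b in 1:B) {
  idx <- sample(n, replace = TRUE)
  fit <- lda(Species ~ ., data = dat[idx, ])
  oob <- setdiff(seq_len(n), idx)
  err_mat[b, oob] <- predict(fit, dat[oob, ])$class != dat$Species[oob]
}
err_boot1 <- mean(colMeans(err_mat, na.rm = TRUE), na.rm = TRUE)

0.368 * err_bar + 0.632 * err_boot1    # the .632 estimate
</syntaxhighlight>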
| |
|
| |
| == Create partitions ==
| |
| [http://r-exercises.com/2016/11/13/sampling-exercise-1/ set.seed(), sample.split(),createDataPartition(), and createFolds()] functions.
| |
|
| |
| [https://drsimonj.svbtle.com/k-fold-cross-validation-with-modelr-and-broom k-fold cross validation with modelr and broom]
| |
|
| |
| == Nested resampling ==
| |
| * [http://appliedpredictivemodeling.com/blog/2017/9/2/njdc83d01pzysvvlgik02t5qnaljnd Nested Resampling with rsample]
| |
| * https://stats.stackexchange.com/questions/292179/whats-the-meaning-of-nested-resampling
| |
|
| |
Nested resampling is needed when we want to '''tune a model''' using a grid search. The default settings of a learner are unlikely to be optimal for every data set, so an inner CV is performed within each outer fold to find the best parameter set for the learner; the outer folds are used only to assess performance (a minimal sketch is given below).
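A minimal sketch of nested CV in R, tuning ''k'' for k-nearest neighbours (class::knn) in the inner loop and assessing the tuned learner in the outer loop (the data set, fold counts and grid of ''k'' values are arbitrary illustrations):
<syntaxhighlight lang='rsplus'>
library(class)
set.seed(1)
dat <- iris; K_out <- 5; K_in <- 5; grid <- c(1, 3, 5, 7, 9)
folds <- sample(rep(1:K_out, length.out = nrow(dat)))
outer_err <- sapply(1:K_out, function(i) {
  train <- dat[folds != i, ]; test <- dat[folds == i, ]
  inner <- sample(rep(1:K_in, length.out = nrow(train)))
  cv_err <- sapply(grid, function(k)
    mean(sapply(1:K_in, function(j) {
      pred <- knn(train[inner != j, 1:4], train[inner == j, 1:4],
                  train$Species[inner != j], k = k)
      mean(pred != train$Species[inner == j])
    })))
  k_best <- grid[which.min(cv_err)]              # tuned with the inner folds only
  pred <- knn(train[, 1:4], test[, 1:4], train$Species, k = k_best)
  mean(pred != test$Species)                     # honest error from the outer fold
})
mean(outer_err)
</syntaxhighlight>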
| |
|
| |
| See a diagram at https://i.stack.imgur.com/vh1sZ.png
| |
|
| |
| In BRB-ArrayTools -> class prediction with multiple methods, the ''alpha'' (significant level of threshold used for gene selection, 2nd option in individual genes) can be viewed as a tuning parameter for the development of a classifier.
| |
|
| |
| == Pre-validation ==
| |
| * [https://www.degruyter.com/view/j/sagmb.2002.1.1/sagmb.2002.1.1.1000/sagmb.2002.1.1.1000.xml Pre-validation and inference in microarrays] Tibshirani and Efron, Statistical Applications in Genetics and Molecular Biology, 2002.
| |
| * http://www.stat.columbia.edu/~tzheng/teaching/genetics/papers/tib_efron.pdf#page=5. In each CV, we compute the estimate of the response. This estimate of the response will serve as a new predictor ('''pre-validated predictor''') in the final fitting model.
| |
| * P1101 of Sachs 2016. With pre-validation, instead of computing the statistic <math>\phi</math> for each of the held-out subsets (<math>S_{-b}</math> for the bootstrap or <math>S_{k}</math> for cross-validation), the fitted signature <math>\hat{f}(X_i)</math> is estimated for <math>X_i \in S_{-b}</math> where <math>\hat{f}</math> is estimated using <math>S_{b}</math>. This process is repeated to obtain a set of '''pre-validated signature''' estimates <math>\hat{f}</math>. Then an association measure <math>\phi</math> can be calculated using the pre-validated signature estimates and the true outcomes <math>Y_i, i = 1, \ldots, n</math>.
| |
| * In CV, left-out samples = hold-out cases = test set
| |
|
| |
| = Clustering =
| |
| See [[Heatmap#Clustering|Clustering]].
| |
|
| |
| = Mixed Effect Model =
| |
|
| |
| * Paper by [http://www.stat.cmu.edu/~brian/463/week07/laird-ware-biometrics-1982.pdf Laird and Ware 1982]
| |
| * [http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf John Fox's Linear Mixed Models] Appendix to An R and S-PLUS Companion to Applied Regression. Very clear. It provides 2 typical examples (hierarchical data and longitudinal data) of using the mixed effects model. It also uses Trellis plots to examine the data.
| |
| * Chapter 10 Random and Mixed Effects from Modern Applied Statistics with S by Venables and Ripley.
| |
| * (Book) lme4: Mixed-effects modeling with R by Douglas Bates.
| |
| * (Book) Mixed-effects modeling in S and S-Plus by José Pinheiro and Douglas Bates.
| |
| * [http://educate-r.org//2016/06/29/user2016.html Simulation and power analysis of generalized linear mixed models]
| |
| * [https://poissonisfish.wordpress.com/2017/12/11/linear-mixed-effect-models-in-r/ Linear mixed-effect models in R] by poissonisfish
| |
| * [https://www.statforbiology.com/2019/stat_general_correlationindependence2/ Dealing with correlation in designed field experiments]: part II
| |
|
| |
| = Model selection criteria =
| |
| * [http://r-video-tutorial.blogspot.com/2017/07/assessing-accuracy-of-our-models-r.html Assessing the Accuracy of our models (R Squared, Adjusted R Squared, RMSE, MAE, AIC)]
| |
| * [https://forecasting.svetunkov.ru/en/2018/03/22/comparing-additive-and-multiplicative-regressions-using-aic-in-r/ Comparing additive and multiplicative regressions using AIC in R]
| |
| * [https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1459316?src=recsys Model Selection and Regression t-Statistics] Derryberry 2019
| |
|
| |
| == Akaike information criterion/AIC ==
| |
| * https://en.wikipedia.org/wiki/Akaike_information_criterion.
| |
:<math>\mathrm{AIC} \, = \, 2k - 2\ln(\hat L)</math>, where ''k'' is the number of estimated parameters in the model.
| |
| * Smaller is better
| |
| * Akaike proposed to approximate the expectation of the cross-validated log likelihood <math>E_{test}E_{train} [log L(x_{test}| \hat{\beta}_{train})]</math> by <math>log L(x_{train} | \hat{\beta}_{train})-k </math>.
| |
| * Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.
| |
| * AIC can be used to compare two models even if they are not hierarchically nested.
| |
| * [https://www.rdocumentation.org/packages/stats/versions/3.6.0/topics/AIC AIC()] from the stats package.
| |
|
| |
| == BIC ==
| |
:<math>\mathrm{BIC} \, = \, \ln(n) \cdot k - 2\ln(\hat L)</math>, where ''k'' is the number of estimated parameters in the model.
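A quick example comparing two (non-nested) linear models with AIC() and BIC() from the stats package; smaller is better (the mtcars models are arbitrary illustrations):
<syntaxhighlight lang='rsplus'>
fit1 <- lm(mpg ~ wt + hp,   data = mtcars)
fit2 <- lm(mpg ~ wt + qsec, data = mtcars)
AIC(fit1, fit2)   # 2k - 2 log-likelihood
BIC(fit1, fit2)   # k log(n) - 2 log-likelihood
</syntaxhighlight>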
| |
|
| |
| == Overfitting ==
| |
| [https://stats.stackexchange.com/questions/81576/how-to-judge-if-a-supervised-machine-learning-model-is-overfitting-or-not How to judge if a supervised machine learning model is overfitting or not?]
| |
|
| |
| == AIC vs AUC ==
| |
| [https://stats.stackexchange.com/a/51278 What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?]
| |
|
| |
| Roughly speaking:
| |
| * AIC is telling you how good your model fits for a specific mis-classification cost.
| |
| * AUC is telling you how good your model would work, on average, across all mis-classification costs.
| |
|
| |
| '''Frank Harrell''': AUC (C-index) has the advantage of measuring the concordance probability as you stated, aside from cost/utility considerations. To me the bottom line is the AUC should be used to describe discrimination of one model, not to compare 2 models. For comparison we need to use the most powerful measure: deviance and those things derived from deviance: generalized 𝑅<sup>2</sup> and AIC.
| |
|
| |
| == Variable selection and model estimation ==
| |
| [https://stats.stackexchange.com/a/138475 Proper variable selection: Use only training data or full data?]
| |
|
| |
| * training observations to perform all aspects of model-fitting—including variable selection
| |
| * make use of the full data set in order to obtain more accurate coefficient estimates (This statement is arguable)
| |
|
| |
| = Entropy =
| |
| == Definition ==
| |
Entropy is defined as the expected value of -log2(p): for a discrete outcome with probabilities p<sub>1</sub>, ..., p<sub>k</sub>, the entropy is -p<sub>1</sub>*log2(p<sub>1</sub>) - ... - p<sub>k</sub>*log2(p<sub>k</sub>). '''Higher entropy represents a less predictable (more uncertain) outcome.'''
| |
|
| |
| Some examples:
| |
| * Fair 2-side die: Entropy = -.5*log2(.5) - .5*log2(.5) = 1.
| |
| * Fair 6-side die: Entropy = -6*1/6*log2(1/6) = 2.58
| |
| * Weighted 6-side die: Consider pi=.1 for i=1,..,5 and p6=.5. Entropy = -5*.1*log2(.1) - .5*log2(.5) = 2.16 (less unpredictable than a fair 6-side die).
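A one-line R helper that reproduces the examples above (the function name is ours):
<syntaxhighlight lang='rsplus'>
entropy <- function(p) -sum(p * log2(p))   # p: vector of outcome probabilities
entropy(rep(1/2, 2))           # 1     (fair coin / 2-sided die)
entropy(rep(1/6, 6))           # 2.585 (fair 6-sided die)
entropy(c(rep(.1, 5), .5))     # 2.161 (weighted die)
</syntaxhighlight>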
| |
|
| |
| == Use ==
| |
When entropy is applied to variable selection, we want to select the attribute that gives the largest difference between the entropy computed without any splitting variable (using the response only) and the conditional entropy given that attribute (the weighted average of the entropy within each level of the attribute), because this attribute is the most discriminative and gives the largest '''information gain'''. For example,
| |
| * entropy (without any class)=.94,
| |
| * entropy(var 1) = .69,
| |
| * entropy(var 2)=.91,
| |
| * entropy(var 3)=.725.
| |
We will choose variable 1 since it gives the largest gain (.94 - .69) compared to the other variables (.94 - .91, .94 - .725).
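A small sketch of the information-gain computation described above: gain = entropy of the response minus the weighted average entropy within each level of the candidate variable (the helper names and the discretized iris example are our illustrative choices):
<syntaxhighlight lang='rsplus'>
H <- function(y) { p <- table(y) / length(y); p <- p[p > 0]; -sum(p * log2(p)) }
info_gain <- function(y, x) {    # x: a candidate (categorical) splitting variable
  H(y) - sum(sapply(split(y, x), function(s) length(s) / length(y) * H(s)))
}
## e.g. gain of each (discretized) predictor for iris Species
sapply(iris[1:4], function(v) info_gain(iris$Species, cut(v, 3)))
</syntaxhighlight>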
| |
|
| |
Why is picking the attribute with the most information gain beneficial? It ''reduces'' entropy, which increases predictability. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.
| |
|
| |
| Consider a split of a continuous variable. Where should we cut the continuous variable to create a binary partition with the highest gain? Suppose cut point c1 creates an entropy .9 and another cut point c2 creates an entropy .1. We should choose c2.
| |
|
| |
| == Related ==
| |
| In addition to information gain, gini (dʒiːni) index is another metric used in decision tree. See [http://en.wikipedia.org/wiki/Decision_tree_learning wikipedia page] about decision tree learning.
| |
|
| |
| = Ensembles =
| |
| * Combining classifiers. Pro: better classification performance. Con: time consuming.
| |
| * Comic http://flowingdata.com/2017/09/05/xkcd-ensemble-model/
| |
| * [http://www.win-vector.com/blog/2019/07/common-ensemble-models-can-be-biased/ Common Ensemble Models can be Biased]
| |
|
| |
| == Bagging ==
| |
Draw B bootstrap samples and summarize the results (averaging for regression problems, majority vote for classification problems). Bagging decreases variance without changing the bias; it does not help much with underfit or high-bias models.
| |
|
| |
| === Random forest ===
| |
'''Variable importance''': if you scramble the values of a variable, and the accuracy of your tree does not change much, then the variable is not very important.
| |
|
| |
Why is it useful to compute variable importance? It makes the model easier to interpret (it does not improve the prediction performance). A minimal sketch using the randomForest package is given below.
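A minimal sketch of permutation-based variable importance with the randomForest package (the iris data set is just an illustration):
<syntaxhighlight lang='rsplus'>
library(randomForest)
set.seed(1)
rf <- randomForest(Species ~ ., data = iris, importance = TRUE)
importance(rf, type = 1)   # mean decrease in accuracy when each variable is permuted
# varImpPlot(rf)
</syntaxhighlight>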
| |
|
| |
Random forests have the advantages of being easy to run in parallel and being suitable for "small n, large p" problems.
| |
|
| |
| [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2264-5 Random forest versus logistic regression: a large-scale benchmark experiment] by Raphael Couronné, BMC Bioinformatics 2018
| |
|
| |
| [https://github.com/suiji/arborist Arborist]: Parallelized, Extensible Random Forests
| |
|
| |
| [https://academic.oup.com/bioinformatics/article-abstract/35/15/2701/5250706?redirectedFrom=fulltext On what to permute in test-based approaches for variable importance measures in Random Forests]
| |
|
| |
| == Boosting ==
| |
Instead of selecting data points randomly as in the bootstrap, boosting favors the misclassified points.
| |
|
| |
| Algorithm:
| |
| * Initialize the weights
| |
| * Repeat
| |
| ** resample with respect to weights
| |
| ** retrain the model
| |
| ** recompute weights
| |
|
| |
Since boosting is inherently sequential (iterative) while bagging can be run in parallel, bagging has an advantage over boosting when the data set is very large.
| |
|
| |
| == Time series ==
| |
| [https://petolau.github.io/Ensemble-of-trees-for-forecasting-time-series/ Ensemble learning for time series forecasting in R]
| |
|
| |
| = p-values =
| |
| == p-values ==
| |
* Prob(observing data at least as extreme as the observed data | H0)
| |
| * https://en.wikipedia.org/wiki/P-value
| |
| * [https://amstat.tandfonline.com/toc/utas20/73/sup1 Statistical Inference in the 21st Century: A World Beyond p < 0.05] The American Statistician, 2019
| |
| * [https://matloff.wordpress.com/2016/03/07/after-150-years-the-asa-says-no-to-p-values/ THE ASA SAYS NO TO P-VALUES] The problem is that with large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. We have the opposite problem with small samples: The power of the test is low, and we will announce that there is “no significant effect” when in fact we may have too little data to know whether the effect is important.
| |
| * [http://www.r-statistics.com/2016/03/its-not-the-p-values-fault-reflections-on-the-recent-asa-statement/ It’s not the p-values’ fault]
| |
| * [https://stablemarkets.wordpress.com/2016/05/21/exploring-p-values-with-simulations-in-r/ Exploring P-values with Simulations in R] from Stable Markets.
| |
| * p-value and [https://en.wikipedia.org/wiki/Effect_size effect size]. http://journals.sagepub.com/doi/full/10.1177/1745691614553988
| |
|
| |
| == Distribution of p values in medical abstracts ==
| |
| * http://www.ncbi.nlm.nih.gov/pubmed/26608725
| |
| * [https://github.com/jtleek/tidypvals An R package with several million published p-values in tidy data sets] by Jeff Leek.
| |
|
| |
| == nominal p-value and Empirical p-values ==
| |
| * Nominal p-values are based on asymptotic null distributions
| |
| * Empirical p-values are computed from simulations/permutations
| |
|
| |
== (nominal) alpha level ==
Conventional methodology for statistical testing is, in advance of undertaking the test, to set a '''nominal alpha criterion level''' (often 0.05). The outcome is classified as showing '''statistical significance''' if the actual alpha (the probability of the outcome under the null hypothesis) is no greater than this nominal alpha criterion level.
* http://www.translationdirectory.com/glossaries/glossary033.htm
* http://courses.washington.edu/p209s07/lecturenotes/Week%205_Monday%20overheads.pdf

== Normality assumption ==
[https://www.biorxiv.org/content/early/2018/12/20/498931 Violating the normality assumption may be the lesser of two evils]

= T-statistic =
See [[T-test#T-statistic|T-statistic]].

= ANOVA =
See [[T-test#ANOVA|ANOVA]].

= [https://en.wikipedia.org/wiki/Goodness_of_fit Goodness of fit] =
== [https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test Chi-square tests] ==
* [http://freakonometrics.hypotheses.org/20531 An application of chi-square tests]

== Fitting distribution ==
[https://magesblog.com/post/2011-12-01-fitting-distributions-with-r/ Fitting distributions with R]

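A minimal example of fitting a parametric distribution by maximum likelihood with MASS::fitdistr; the gamma data and parameter values below are simulated only for illustration.
<syntaxhighlight lang='rsplus'>
library(MASS)

set.seed(1)
x <- rgamma(500, shape = 2, rate = 3)

fit <- fitdistr(x, "gamma")     # maximum-likelihood fit
fit$estimate                    # estimated shape and rate

# visual goodness-of-fit check
hist(x, freq = FALSE, breaks = 30)
curve(dgamma(x, fit$estimate["shape"], fit$estimate["rate"]), add = TRUE, col = "red")
</syntaxhighlight>
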
= Contingency Tables =
== [https://en.wikipedia.org/wiki/Odds_ratio Odds ratio and Risk ratio] ==
The odds ratio is the ratio of the odds of an event occurring in one group to the odds of it occurring in another group.
<pre>
         drawn   | not drawn |
-------------------------------------
white |   A      |    B      |  Wh
-------------------------------------
black |   C      |    D      |  Bk
</pre>
* Odds Ratio = (A / C) / (B / D) = (AD) / (BC)
* Risk Ratio = (A / Wh) / (C / Bk)

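A quick numerical illustration of the two formulas, using made-up counts for the table above.
<syntaxhighlight lang='rsplus'>
# hypothetical counts for the 2x2 table above
A <- 10; B <- 90     # white: drawn, not drawn
C <- 5;  D <- 195    # black: drawn, not drawn
Wh <- A + B; Bk <- C + D

(A / C) / (B / D)    # odds ratio, 4.33
(A * D) / (B * C)    # same thing written as (AD)/(BC)
(A / Wh) / (C / Bk)  # risk ratio, 4
</syntaxhighlight>
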
== Hypergeometric, One-tailed Fisher exact test ==
* https://www.bioconductor.org/help/course-materials/2009/SeattleApr09/gsea/ (Are interesting features over-represented? Or are selected genes more often in the ''GO category'' than expected by chance?)
* https://en.wikipedia.org/wiki/Hypergeometric_distribution. ''In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing '''k''' or more successes from the population in '''n''' total draws. In a test for under-representation, the p-value is the probability of randomly drawing '''k''' or fewer successes.''
* http://stats.stackexchange.com/questions/62235/one-tailed-fishers-exact-test-and-the-hypergeometric-distribution
* Two-sided hypergeometric test
** http://stats.stackexchange.com/questions/155189/how-to-make-a-two-tailed-hypergeometric-test
** http://stats.stackexchange.com/questions/140107/p-value-in-a-two-tail-test-with-asymmetric-null-distribution
** http://stats.stackexchange.com/questions/19195/explaining-two-tailed-tests
* https://www.biostars.org/p/90662/ When computing the p-value (tail probability) for a discrete distribution, use 1 - Prob(observed - 1) instead of 1 - Prob(observed), so that the observed value itself is included in the tail.
* https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Hypergeometric.html p(x) = choose(m, x) choose(n, k-x) / choose(m+n, k).
<pre>
         drawn   | not drawn |
-------------------------------------
white |   x      |           |  m
-------------------------------------
black |   k-x    |           |  n
-------------------------------------
      |   k      |           |  m+n
</pre>

For example, k=100, m=100, m+n=1000:
<syntaxhighlight lang='rsplus'>
> 1 - phyper(10, 100, 10^3-100, 100, log.p=F)
[1] 0.4160339
> a <- dhyper(0:100, 100, 10^3-100, 100)
> cumsum(rev(a))
  [1] 1.566158e-140 1.409558e-135 3.136408e-131 3.067025e-127 1.668004e-123 5.739613e-120 1.355765e-116
  [8] 2.325536e-113 3.018276e-110 3.058586e-107 2.480543e-104 1.642534e-101  9.027724e-99  4.175767e-96
 [15]  1.644702e-93  5.572070e-91  1.638079e-88  4.210963e-86  9.530281e-84  1.910424e-81  3.410345e-79
 [22]  5.447786e-77  7.821658e-75  1.013356e-72  1.189000e-70  1.267638e-68  1.231736e-66  1.093852e-64
 [29]  8.900857e-63  6.652193e-61  4.576232e-59  2.903632e-57  1.702481e-55  9.240350e-54  4.650130e-52
 [36]  2.173043e-50  9.442985e-49  3.820823e-47  1.441257e-45  5.074077e-44  1.669028e-42  5.134399e-41
 [43]  1.478542e-39  3.989016e-38  1.009089e-36  2.395206e-35  5.338260e-34  1.117816e-32  2.200410e-31
 [50]  4.074043e-30  7.098105e-29  1.164233e-27  1.798390e-26  2.617103e-25  3.589044e-24  4.639451e-23
 [57]  5.654244e-22  6.497925e-21  7.042397e-20  7.198582e-19  6.940175e-18  6.310859e-17  5.412268e-16
 [64]  4.377256e-15  3.338067e-14  2.399811e-13  1.626091e-12  1.038184e-11  6.243346e-11  3.535115e-10
 [71]  1.883810e-09  9.442711e-09  4.449741e-08  1.970041e-07  8.188671e-07  3.193112e-06  1.167109e-05
 [78]  3.994913e-05  1.279299e-04  3.828641e-04  1.069633e-03  2.786293e-03  6.759071e-03  1.525017e-02
 [85]  3.196401e-02  6.216690e-02  1.120899e-01  1.872547e-01  2.898395e-01  4.160339e-01  5.550192e-01
 [92]  6.909666e-01  8.079129e-01  8.953150e-01  9.511926e-01  9.811343e-01  9.942110e-01  9.986807e-01
 [99]  9.998018e-01  9.999853e-01  1.000000e+00

# Density plot
plot(0:100, dhyper(0:100, 100, 10^3-100, 100), type='h')
</syntaxhighlight>
[[:File:Dhyper.svg]]

Moreover,
<pre>
  1 - phyper(q=10, m, n, k)
= 1 - sum_{x=0}^{x=10} dhyper(x, m, n, k)
= 1 - sum(a[1:11])   # R's index starts from 1
</pre>

Another example is the data from [https://david.ncifcrf.gov/helps/functional_annotation.html#fisher the functional annotation tool] in DAVID.
<pre>
               | gene list | not gene list |
-------------------------------------------------------
pathway        |   3  (q)  |               |    40 (m)
-------------------------------------------------------
not in pathway |  297      |               | 29960 (n)
-------------------------------------------------------
               |  300 (k)  |               | 30000
</pre>
The one-tailed p-value from the hypergeometric test is calculated as 1 - phyper(3-1, 40, 29960, 300) = 0.0074.

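The same calculation in R, written two equivalent ways:
<syntaxhighlight lang='rsplus'>
1 - phyper(3 - 1, 40, 29960, 300)
# equivalently
phyper(2, 40, 29960, 300, lower.tail = FALSE)
# about 0.0074, the value quoted above
</syntaxhighlight>
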
== [https://en.wikipedia.org/wiki/Fisher%27s_exact_test Fisher's exact test] ==
Following the above example from the DAVID website, the following R command calculates the Fisher exact test for independence in 2x2 contingency tables.
<syntaxhighlight lang='rsplus'>
> fisher.test(matrix(c(3, 40, 297, 29960), nr=2)) # alternative = "two.sided" by default

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.488738 23.966741
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="greater")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.008853
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 1.973   Inf
sample estimates:
odds ratio
  7.564602

> fisher.test(matrix(c(3, 40, 297, 29960), nr=2), alternative="less")

        Fisher's Exact Test for Count Data

data:  matrix(c(3, 40, 297, 29960), nr = 2)
p-value = 0.9991
alternative hypothesis: true odds ratio is less than 1
95 percent confidence interval:
  0.00000 20.90259
sample estimates:
odds ratio
  7.564602
</syntaxhighlight>

From the documentation of [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/fisher.test.html fisher.test]:
<pre>
Usage:
     fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE,
                 control = list(), or = 1, alternative = "two.sided",
                 conf.int = TRUE, conf.level = 0.95,
                 simulate.p.value = FALSE, B = 2000)
</pre>
* For 2 by 2 cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution.
* For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one.
* The alternative for a one-sided test is based on the odds ratio, so ‘alternative = "greater"’ is a test of the odds ratio being bigger than ‘or’.
* Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities.

== Chi-square independence test ==
[https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence-part-2/ Exploring the underlying theory of the chi-square test through simulation - part 2]

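A minimal chisq.test() example on a 2x2 table of made-up counts, with fisher.test() as the exact alternative when expected counts are small.
<syntaxhighlight lang='rsplus'>
tab <- matrix(c(20, 30, 40, 10), nrow = 2)   # hypothetical 2x2 counts
chisq.test(tab)                   # with Yates' continuity correction (default for 2x2)
chisq.test(tab, correct = FALSE)  # without the correction
fisher.test(tab)                  # exact test on the same table
</syntaxhighlight>
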
== [https://en.wikipedia.org/wiki/Gene_set_enrichment_analysis GSEA] ==
GSEA determines whether an a priori defined set of genes shows statistically significant, concordant differences between two biological states.

* https://www.bioconductor.org/help/course-materials/2015/SeattleApr2015/E_GeneSetEnrichment.html
* http://software.broadinstitute.org/gsea/index.jsp
* [http://www.biorxiv.org/content/biorxiv/early/2017/09/08/186288.full.pdf Statistical power of gene-set enrichment analysis is a function of gene set correlation structure] by Swanson 2017
* [https://www.biorxiv.org/content/10.1101/674267v1 Towards a gold standard for benchmarking gene set enrichment analysis] and the [http://bioconductor.org/packages/release/bioc/html/GSEABenchmarkeR.html GSEABenchmarkeR] package

Two categories of GSEA procedures:
* Competitive: compare genes in the test set relative to all other genes.
* Self-contained: test whether the gene set is more differentially expressed than one would expect under the null of no association between the two phenotype conditions (without reference to other genes in the genome). For example, the method by [http://home.cc.umanitoba.ca/~psgendb/birchhomedir/doc/MeV/manual/gsea.html Jiang & Gentleman, Bioinformatics 2007].
== McNemar’s test on paired nominal data ==
https://en.wikipedia.org/wiki/McNemar%27s_test

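A minimal mcnemar.test() example on made-up paired yes/no counts (e.g., the same subjects classified before and after a treatment):
<syntaxhighlight lang='rsplus'>
# rows = response before, columns = response after (counts are hypothetical)
tab <- matrix(c(20,  5,
                15, 60), nrow = 2, byrow = TRUE,
              dimnames = list(before = c("yes", "no"), after = c("yes", "no")))
mcnemar.test(tab)   # tests whether the two discordant (off-diagonal) counts differ
</syntaxhighlight>
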
= Confidence vs Credibility Intervals =
http://freakonometrics.hypotheses.org/18117

= Power analysis/Sample Size determination =
See [[Power|Power]].

= Common covariance/correlation structures =
See [https://onlinecourses.science.psu.edu/stat502/node/228 psu.edu]. Assume the covariance matrix is <math>\Sigma = (\sigma_{ij})_{p\times p} </math>.

* Diagonal structure: <math>\sigma_{ij} = 0</math> if <math>i \neq j</math>.
* Compound symmetry: <math>\sigma_{ij} = \rho</math> if <math>i \neq j</math>.
* First-order autoregressive AR(1) structure: <math>\sigma_{ij} = \rho^{|i - j|}</math>. <syntaxhighlight lang='rsplus'>
rho <- .8
p <- 5
blockMat <- rho ^ abs(matrix(1:p, p, p, byrow=T) - matrix(1:p, p, p))
</syntaxhighlight>
* Banded matrix: <math>\sigma_{ii}=1, \sigma_{i,i+1}=\sigma_{i+1,i} \neq 0, \sigma_{i,i+2}=\sigma_{i+2,i} \neq 0</math> and <math>\sigma_{ij}=0</math> for <math>|i-j| \ge 3</math>.
* Spatial Power
* Unstructured Covariance
* [https://en.wikipedia.org/wiki/Toeplitz_matrix Toeplitz structure]

To create a block-diagonal correlation matrix, use the "%x%" (Kronecker product) operator. See [https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/kronecker kronecker()].
<syntaxhighlight lang='rsplus'>
covMat <- diag(n.blocks) %x% blockMat   # n.blocks = number of independent diagonal blocks
</syntaxhighlight>

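A sketch of using such a block covariance matrix to simulate correlated data with MASS::mvrnorm; the block size, number of blocks and parameter values are arbitrary.
<syntaxhighlight lang='rsplus'>
library(MASS)

rho <- .8; p <- 5
blockMat <- rho ^ abs(matrix(1:p, p, p, byrow = TRUE) - matrix(1:p, p, p))  # AR(1) block
n.blocks <- 3
covMat <- diag(n.blocks) %x% blockMat   # block-diagonal: 3 independent AR(1) blocks

set.seed(1)
X <- mvrnorm(n = 1000, mu = rep(0, p * n.blocks), Sigma = covMat)
round(cor(X), 2)   # the empirical correlations recover the block structure
</syntaxhighlight>
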
= Counter/Special Examples =
== Uncorrelated does not imply independent ==
Suppose X is a normally-distributed random variable with zero mean. Let Y = X^2. Clearly X and Y are not independent: if you know X, you also know Y; and if you know Y, you know the absolute value of X.

The covariance of X and Y is
<pre>
Cov(X,Y) = E(XY) - E(X)E(Y) = E(X^3) - 0*E(Y) = E(X^3) = 0,
</pre>
because the distribution of X is symmetric around zero. Thus the correlation r(X,Y) = Cov(X,Y)/Sqrt[Var(X)Var(Y)] = 0, and we have a situation where the variables are not independent, yet have (linear) correlation r(X,Y) = 0.

This example shows how a linear correlation coefficient does not capture the quadratic dependence of Y upon X.
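A quick simulation of this example:
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- rnorm(1e6)
y <- x^2
cor(x, y)        # close to 0: linearly uncorrelated ...
cor(abs(x), y)   # ... yet strongly dependent (Y is a function of |X|)
</syntaxhighlight>
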
== Spearman vs Pearson correlation ==
Pearson measures the strength of a linear relationship, while Spearman measures the strength of a monotonic relationship. https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation

<pre>
x = 1:100
y = exp(x)
cor(x, y, method='spearman') # 1
cor(x, y, method='pearson')  # 0.25
</pre>

== Spearman vs Wilcoxon ==
According to [http://www.talkstats.com/threads/wilcoxon-signed-rank-test-or-spearmans-rho.42395/ this post]:
* The Wilcoxon test is used to compare a categorical variable with a non-normal continuous variable.
* Spearman's rho is used to compare two continuous (including '''ordinal''') variables when one or both are not normally distributed.

== Spearman vs [https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient Kendall correlation] ==
* Kendall's tau coefficient (after the Greek letter τ) is a statistic used to measure the '''ordinal''' association between two measured quantities.
* [https://stats.stackexchange.com/questions/3943/kendall-tau-or-spearmans-rho Kendall Tau or Spearman's rho?]

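A small comparison of the three correlation measures on simulated data with a monotone but nonlinear relationship (the data-generating choices are arbitrary); the rank-based measures are unaffected by the nonlinearity, while Pearson is.
<syntaxhighlight lang='rsplus'>
set.seed(1)
x <- rexp(100)
y <- x^3 + rnorm(100, sd = .1)   # monotone but nonlinear in x

cor(x, y, method = "pearson")
cor(x, y, method = "spearman")
cor(x, y, method = "kendall")
</syntaxhighlight>
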
== [http://en.wikipedia.org/wiki/Anscombe%27s_quartet Anscombe quartet] ==

The four datasets have nearly identical summary statistics: the same mean of X, the same mean of Y, the same variance of X, (almost) the same variance of Y, the same correlation between X and Y, and the same fitted linear regression line.

[[:File:Anscombe quartet 3.svg]]
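The quartet ships with R as the '''anscombe''' data set (columns x1–x4, y1–y4), so the claim is easy to check:
<syntaxhighlight lang='rsplus'>
data(anscombe)   # built into the datasets package

sapply(anscombe, mean)               # nearly identical means across x1..x4 and y1..y4
sapply(anscombe, var)                # nearly identical variances
cor(anscombe$x1, anscombe$y1)        # ~0.816 for every pair
cor(anscombe$x2, anscombe$y2)
coef(lm(y1 ~ x1, data = anscombe))   # ~ y = 3 + 0.5 x for every pair
</syntaxhighlight>
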
== The real meaning of spurious correlations ==
https://nsaunders.wordpress.com/2017/02/03/the-real-meaning-of-spurious-correlations/
<syntaxhighlight lang='rsplus'>
library(ggplot2)
library(dplyr)   # for the %>% pipe

set.seed(123)
spurious_data <- data.frame(x = rnorm(500, 10, 1),
                            y = rnorm(500, 10, 1),
                            z = rnorm(500, 30, 3))
cor(spurious_data$x, spurious_data$y)
# [1] -0.05943856
spurious_data %>% ggplot(aes(x, y)) + geom_point(alpha = 0.3) +
  theme_bw() + labs(title = "Plot of y versus x for 500 observations with N(10, 1)")

cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.4517972
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
  theme_bw() + geom_smooth(method = "lm") +
  scale_color_gradientn(colours = c("red", "white", "blue")) +
  labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 3)")

spurious_data$z <- rnorm(500, 30, 6)
cor(spurious_data$x / spurious_data$z, spurious_data$y / spurious_data$z)
# [1] 0.8424597
spurious_data %>% ggplot(aes(x/z, y/z)) + geom_point(aes(color = z), alpha = 0.5) +
  theme_bw() + geom_smooth(method = "lm") +
  scale_color_gradientn(colours = c("red", "white", "blue")) +
  labs(title = "Plot of y/z versus x/z for 500 observations with x,y N(10, 1); z N(30, 6)")
</syntaxhighlight>

= Time series =
* [http://ellisp.github.io/blog/2016/12/07/arima-prediction-intervals Why time series forecasts prediction intervals aren't as good as we'd hope]

== Structural change ==
[https://datascienceplus.com/structural-changes-in-global-warming/ Structural Changes in Global Warming]

== AR(1) processes and random walks ==
[https://fdabl.github.io/r/Spurious-Correlation.html Spurious correlations and random walks]

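A quick simulation of the spurious-correlation phenomenon with two independent random walks (the sample size and seed are arbitrary):
<syntaxhighlight lang='rsplus'>
set.seed(1)
n  <- 500
w1 <- cumsum(rnorm(n))   # random walk 1
w2 <- cumsum(rnorm(n))   # an independent random walk
cor(w1, w2)              # often far from 0 despite independence
summary(lm(w2 ~ w1))     # an apparently "significant" slope that is spurious
</syntaxhighlight>
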
= Measurement Error model =
* [https://en.wikipedia.org/wiki/Errors-in-variables_models Errors-in-variables models (measurement error models)]
* [https://onlinelibrary.wiley.com/doi/10.1111/biom.13112 Simulation-Selection-Extrapolation: Estimation in High-Dimensional Errors-in-Variables Models] Nghiem 2019

= Dictionary =
* '''Prognosis''' is the probability that an event or diagnosis will result in a particular outcome.
** For example, in the paper [http://clincancerres.aacrjournals.org/content/18/21/6065.figures-only Developing and Validating Continuous Genomic Signatures in Randomized Clinical Trials for Predictive Medicine] by Matsui 2012, a prognostic score of 0.1 (0.9) represents a '''good (poor)''' prognosis.
** Prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus a better prognosis. See [https://en.wikipedia.org/wiki/Survival_rate Survival rate] on Wikipedia.

= Data =
== Eleven quick tips for finding research data ==
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006038

= Books =
* [https://leanpub.com/biostatmethods Methods in Biostatistics with R] ($)
* [http://web.stanford.edu/class/bios221/book/ Modern Statistics for Modern Biology] (free)
* Principles of Applied Statistics, by David Cox & Christl Donnelly
* [https://www.amazon.com/Freedman-Robert-Pisani-Statistics-Hardcover/dp/B004QNRMDK/ Statistics] by David Freedman, Robert Pisani, Roger Purves
* [https://onlinelibrary.wiley.com/topic/browse/000113 Wiley Online Library -> Statistics] (Access by NIH Library)
* [https://web.stanford.edu/~hastie/CASI/ Computer Age Statistical Inference: Algorithms, Evidence and Data Science] by Efron and Hastie 2016

= Social =
== JSM ==
* 2019
** [https://minecr.shinyapps.io/jsm2019-schedule/ JSM 2019] and the [http://www.citizen-statistician.org/2019/07/shiny-for-jsm-2019/ post].
** [https://rviews.rstudio.com/2019/07/19/an-r-users-guide-to-jsm-2019/ An R Users Guide to JSM 2019]

== Following ==
* [http://jtleek.com/ Jeff Leek], https://twitter.com/jtleek
* Revolutions, http://blog.revolutionanalytics.com/
* RStudio Blog, https://blog.rstudio.com/
* Sean Davis, https://twitter.com/seandavis12, https://github.com/seandavi
* [http://stephenturner.us/post/ Stephen Turner], https://twitter.com/genetics_blog