Probability
Introductory Probability as taught in Sheldon Ross' book, A First Course in Probability (https://www.pearson.com/us/higher-education/program/Ross-First-Course-in-Probability-A-9th-Edition/PGM110742.html)
Axioms of Probability
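In brief, the three axioms are: (1) \(\displaystyle 0 \leq P(E) \leq 1\) for every event \(\displaystyle E\); (2) \(\displaystyle P(S) = 1\) where \(\displaystyle S\) is the sample space; (3) for any sequence of mutually exclusive events \(\displaystyle E_1, E_2, \ldots\), \(\displaystyle P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)\).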
Expectation and Variance
Some definitions and properties.
Total Expectation
Also known as the law of total expectation; Dr. Xu refers to this as the smoothing property. \(\displaystyle E(X) = E(E(X|Y))\)
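A short proof sketch for the discrete case, assuming all expectations exist: \(\displaystyle E(E(X|Y)) = \sum_y E(X|Y=y)P(Y=y) = \sum_y \sum_x x P(X=x|Y=y)P(Y=y) = \sum_x x \sum_y P(X=x, Y=y) = \sum_x x P(X=x) = E(X)\).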
Total Variance
This one is not used as often on tests as total expectation. \(\displaystyle Var(Y) = E(Var(Y|X)) + Var(E(Y|X))\)
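A short derivation, assuming second moments exist: since \(\displaystyle E(Y^2|X) = Var(Y|X) + (E(Y|X))^2\), we have \(\displaystyle Var(Y) = E(Y^2) - (E(Y))^2 = E(Var(Y|X)) + E((E(Y|X))^2) - (E(E(Y|X)))^2 = E(Var(Y|X)) + Var(E(Y|X))\).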
Delta Method
See Wikipedia: https://en.wikipedia.org/wiki/Delta_method
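In its basic univariate form: if \(\displaystyle \sqrt{n}(X_n - \theta) \xrightarrow{d} N(0, \sigma^2)\) and \(\displaystyle g\) is differentiable at \(\displaystyle \theta\) with \(\displaystyle g'(\theta) \neq 0\), then \(\displaystyle \sqrt{n}(g(X_n) - g(\theta)) \xrightarrow{d} N(0, \sigma^2 g'(\theta)^2)\).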
Limit Theorems
Markov's Inequality
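For a nonnegative random variable \(\displaystyle X\) and any \(\displaystyle a > 0\), \(\displaystyle P(X \geq a) \leq \frac{E(X)}{a}\).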
Chebyshev's Inequality
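For a random variable \(\displaystyle X\) with mean \(\displaystyle \mu\) and finite variance \(\displaystyle \sigma^2\), and any \(\displaystyle k > 0\), \(\displaystyle P(|X - \mu| \geq k) \leq \frac{\sigma^2}{k^2}\).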
Central Limit Theorem
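For i.i.d. \(\displaystyle X_1, X_2, \ldots\) with mean \(\displaystyle \mu\) and finite variance \(\displaystyle \sigma^2\), the standardized sample mean converges in distribution to a standard normal: \(\displaystyle \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0, 1)\), where \(\displaystyle \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i\).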
Law of Large Numbers
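For i.i.d. \(\displaystyle X_1, X_2, \ldots\) with mean \(\displaystyle \mu\), the sample mean \(\displaystyle \bar{X}_n\) converges to \(\displaystyle \mu\) as \(\displaystyle n \to \infty\) (in probability for the weak law, almost surely for the strong law).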
Relationships between distributions
This is important for tests.
See Relationships among probability distributions.
Poisson Distributions
The sum of independent Poisson random variables is Poisson, with rate equal to the sum of the rates.
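If \(\displaystyle X_1 \sim \mathrm{Poisson}(\lambda_1)\) and \(\displaystyle X_2 \sim \mathrm{Poisson}(\lambda_2)\) are independent, then \(\displaystyle X_1 + X_2 \sim \mathrm{Poisson}(\lambda_1 + \lambda_2)\).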
Normal Distributions
If \(\displaystyle X_1 \sim N(\mu_1, \sigma_1^2)\) and \(\displaystyle X_2 \sim N(\mu_2, \sigma_2^2)\) are independent, then \(\displaystyle \lambda_1 X_1 + \lambda_2 X_2 \sim N(\lambda_1 \mu_1 + \lambda_2 \mu_2, \lambda_1^2 \sigma_1^2 + \lambda_2^2 \sigma_2^2)\) for any \(\displaystyle \lambda_1, \lambda_2 \in \mathbb{R}\).
Gamma Distributions
Note that exponential distributions are also Gamma distributions (with shape parameter \(\displaystyle k = 1\)).
If \(\displaystyle X \sim \Gamma(k, \theta)\) then \(\displaystyle cX \sim \Gamma(k, c\theta)\) for any \(\displaystyle c > 0\).
If \(\displaystyle X_1 \sim \Gamma(k_1, \theta)\) and \(\displaystyle X_2 \sim \Gamma(k_2, \theta)\) are independent, then \(\displaystyle X_1 + X_2 \sim \Gamma(k_1 + k_2, \theta)\).
If \(\displaystyle X_1 \sim \Gamma(\alpha, \theta)\) and \(\displaystyle X_2 \sim \Gamma(\beta, \theta)\) are independent, then \(\displaystyle \frac{X_1}{X_1 + X_2} \sim \mathrm{Beta}(\alpha, \beta)\).
T-distribution
The ratio of a standard normal random variable to the square root of an independent Chi-squared random variable divided by its degrees of freedom follows a t-distribution.
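If \(\displaystyle Z \sim N(0, 1)\) and \(\displaystyle V \sim \chi^2_k\) are independent, then \(\displaystyle \frac{Z}{\sqrt{V/k}} \sim t_k\).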
Chi-Sq Distribution
The ratio of two independent Chi-squared random variables, each divided by its degrees of freedom, follows an F-distribution.
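If \(\displaystyle U \sim \chi^2_{d_1}\) and \(\displaystyle V \sim \chi^2_{d_2}\) are independent, then \(\displaystyle \frac{U/d_1}{V/d_2} \sim F(d_1, d_2)\).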
F Distribution
There are too many relationships to list here; see the Wikipedia page. The most important are with the Chi-squared and t-distributions.