Probability

From David's Wiki
Revision as of 15:40, 7 November 2019 by David

Calculus-based Probability

Axioms of Probability

  • \(\displaystyle 0 \leq P(E) \leq 1\)
  • \(\displaystyle P(S) = 1\) where \(\displaystyle S\) is your sample space
  • For mutually exclusive events \(\displaystyle E_1, E_2, \ldots\), \(\displaystyle P\left(\bigcup_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty P(E_i)\)

Monotonicity

  • For all events \(\displaystyle A\), \(\displaystyle B\), \(\displaystyle A \subset B \implies P(A) \leq P(B)\)
Proof
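
A sketch of the proof from the axioms: since \(\displaystyle A \subset B\), we can decompose \(\displaystyle B\) into two disjoint pieces and apply countable additivity.

```latex
B = A \cup (B \setminus A), \qquad A \cap (B \setminus A) = \emptyset
\implies P(B) = P(A) + P(B \setminus A) \geq P(A)
```

The last inequality uses axiom 1, \(\displaystyle P(B \setminus A) \geq 0\).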

Expectation and Variance

Some definitions and properties.

Total Expectation

Dr. Xu refers to this as the smoothing property; it is also known as the law of total expectation or the tower property. \(\displaystyle E(X) = E(E(X|Y))\)

Proof
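
A sketch for the discrete case (swap the order of summation):

```latex
E(E(X \mid Y)) = \sum_y E(X \mid Y = y)\, P(Y = y)
             = \sum_y \sum_x x\, P(X = x \mid Y = y)\, P(Y = y)
             = \sum_x x \sum_y P(X = x, Y = y)
             = \sum_x x\, P(X = x) = E(X)
```

The continuous case is analogous, with integrals in place of sums.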

Total Variance

Also known as the law of total variance. This one is not used as often on tests as total expectation. \(\displaystyle Var(Y) = E(Var(Y|X)) + Var(E(Y|X))\)

Proof
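
A sketch, expanding \(\displaystyle Var(Y) = E(Y^2) - (E(Y))^2\) and applying total expectation to each term:

```latex
Var(Y) = E(Y^2) - (E(Y))^2
       = E(E(Y^2 \mid X)) - (E(E(Y \mid X)))^2
       = E\big(Var(Y \mid X) + (E(Y \mid X))^2\big) - (E(E(Y \mid X)))^2
       = E(Var(Y \mid X)) + \Big[E\big((E(Y \mid X))^2\big) - (E(E(Y \mid X)))^2\Big]
       = E(Var(Y \mid X)) + Var(E(Y \mid X))
```

The bracketed term is exactly the variance of the random variable \(\displaystyle E(Y \mid X)\).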

Convergence

There are 4 types of convergence typically taught in undergraduate courses.
See Wikipedia Convergence of random variables

Almost Surely

In Probability

  • Implies convergence in distribution

In Distribution

  • Equivalent to convergence in probability if it converges to a degenerate distribution

In Mean Squared

Delta Method

See Wikipedia

Limit Theorems

Markov's Inequality
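
The statement, for reference: for any nonnegative random variable \(\displaystyle X\) and any \(\displaystyle a > 0\),

```latex
P(X \geq a) \leq \frac{E(X)}{a}
```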

Chebyshev's Inequality
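
The statement, for reference: for any random variable \(\displaystyle X\) with mean \(\displaystyle \mu\) and variance \(\displaystyle \sigma^2\), and any \(\displaystyle k > 0\),

```latex
P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}
```

This follows from applying Markov's inequality to the nonnegative random variable \(\displaystyle (X - \mu)^2\) with \(\displaystyle a = k^2\sigma^2\).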

Central Limit Theorem

Very very important. Never forget this.
For i.i.d. samples from any distribution with finite variance, the sample mean converges in distribution to a normal. Let \(\displaystyle \mu = E(x_i)\) and \(\displaystyle \sigma^2 = Var(x_i)\)
Different ways of saying the same thing:

  • \(\displaystyle \sqrt{n}(\bar{x} - \mu) \xrightarrow{d} N(0, \sigma^2)\)
  • \(\displaystyle \frac{\sqrt{n}}{\sigma}(\bar{x} - \mu) \xrightarrow{d} N(0, 1)\)
  • \(\displaystyle \bar{x} \sim N(\mu, \sigma^2/n)\) approximately, for large \(\displaystyle n\)
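
A quick simulation sketch of the CLT. The choice of Exponential(1) draws (so \(\displaystyle \mu = 1\) and \(\displaystyle \sigma = 1\)), the sample size, and the seed are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000      # observations per sample
reps = 5000   # number of independent sample means

# Each row is one sample of size n from Exponential(1); take its mean.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# CLT prediction: xbar is approximately N(mu, sigma^2 / n),
# so the means should center at mu = 1 with std sigma / sqrt(n).
print(means.mean())              # close to mu = 1
print(means.std() * np.sqrt(n))  # close to sigma = 1
```

Even though the exponential distribution is heavily skewed, the histogram of `means` is already close to a normal bell curve at this sample size.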

Law of Large Numbers
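
One common statement (the strong law), for reference: if \(\displaystyle x_1, x_2, \ldots\) are i.i.d. with \(\displaystyle E|x_i| < \infty\), then

```latex
\bar{x}_n = \frac{1}{n}\sum_{i=1}^n x_i \xrightarrow{a.s.} \mu = E(x_i)
```

The weak law asserts the same limit with convergence in probability instead of almost sure convergence.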

Relationships between distributions

This is important for tests.
See Relationships among probability distributions.

Poisson Distributions

The sum of independent Poisson random variables is Poisson with rate equal to the sum of the rates: if \(\displaystyle X_i \sim Poisson(\lambda_i)\) independently, then \(\displaystyle \sum_i X_i \sim Poisson\left(\sum_i \lambda_i\right)\).
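
A quick simulation check of this additivity; the rates 2 and 3 below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 100_000

# X1 ~ Poisson(2), X2 ~ Poisson(3), independent; their sum should be Poisson(5).
s = rng.poisson(2.0, size=size) + rng.poisson(3.0, size=size)

# A Poisson(5) variable has mean 5 and variance 5.
print(s.mean(), s.var())
```

Matching sample mean and sample variance is a characteristic signature of a Poisson distribution.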

Normal Distributions

  • If \(\displaystyle X_1 \sim N(\mu_1, \sigma_1^2)\) and \(\displaystyle X_2 \sim N(\mu_2, \sigma_2^2)\) are independent, then \(\displaystyle \lambda_1 X_1 + \lambda_2 X_2 \sim N(\lambda_1 \mu_1 + \lambda_2 \mu_2, \lambda_1^2 \sigma_1^2 + \lambda_2^2 \sigma_2^2)\) for any \(\displaystyle \lambda_1, \lambda_2 \in \mathbb{R}\)

Gamma Distributions

Note that exponential distributions are also Gamma distributions (shape \(\displaystyle k = 1\)).

  • If \(\displaystyle X \sim \Gamma(k, \theta)\) then \(\displaystyle cX \sim \Gamma(k, c\theta)\) for any \(\displaystyle c > 0\).
  • If \(\displaystyle X_1 \sim \Gamma(k_1, \theta)\) and \(\displaystyle X_2 \sim \Gamma(k_2, \theta)\) are independent, then \(\displaystyle X_1 + X_2 \sim \Gamma(k_1 + k_2, \theta)\).
  • If \(\displaystyle X_1 \sim \Gamma(\alpha, \theta)\) and \(\displaystyle X_2 \sim \Gamma(\beta, \theta)\) are independent, then \(\displaystyle \frac{X_1}{X_1 + X_2} \sim B(\alpha, \beta)\) (the Beta distribution).
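
A simulation sketch of the additivity property; the shapes \(\displaystyle k_1 = 2\), \(\displaystyle k_2 = 3\) and scale \(\displaystyle \theta = 1.5\) below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 100_000
k1, k2, theta = 2.0, 3.0, 1.5

# Independent Gamma(k1, theta) and Gamma(k2, theta); the sum should be
# Gamma(k1 + k2, theta) = Gamma(5, 1.5).
s = rng.gamma(k1, theta, size=size) + rng.gamma(k2, theta, size=size)

# Gamma(k, theta) has mean k*theta and variance k*theta^2,
# so Gamma(5, 1.5) has mean 7.5 and variance 11.25.
print(s.mean(), s.var())
```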

T-distribution

The ratio of a standard normal to the square root of an independent chi-squared variable divided by its degrees of freedom follows a t-distribution.
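
In symbols:

```latex
Z \sim N(0, 1),\quad V \sim \chi^2_k \ \text{independent}
\implies T = \frac{Z}{\sqrt{V/k}} \sim t_k
```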

Chi-Sq Distribution

The ratio of two independent chi-squared random variables, each divided by its degrees of freedom, follows an F-distribution.
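
In symbols:

```latex
U \sim \chi^2_{d_1},\quad V \sim \chi^2_{d_2} \ \text{independent}
\implies F = \frac{U/d_1}{V/d_2} \sim F(d_1, d_2)
```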

F Distribution

There are too many relationships to list here; see the Wikipedia page. The most important are those involving the chi-squared and t-distributions.

Textbooks