Probability
===Joint Random Variables===
Two random variables are independent iff <math>f_{X,Y}(x,y) = f_X(x) f_Y(y)</math>.<br>
In general, the marginal distribution of <math>X</math> is <math>f_X(x) = \int f_{X,Y}(x,y) \, dy</math>.
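For example (an illustrative case, not tied to any particular distribution above): if <math>f_{X,Y}(x,y) = e^{-x-y}</math> for <math>x, y > 0</math>, then <math>f_X(x) = \int_0^\infty e^{-x-y} \, dy = e^{-x}</math> and <math>f_Y(y) = e^{-y}</math>, so <math>f_{X,Y}(x,y) = f_X(x) f_Y(y)</math> and <math>X</math> and <math>Y</math> are independent.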
===Change of Variables===
Let <math>g</math> be a monotonically increasing function and <math>Y = g(X)</math>.<br>
Then <math>F_Y(y) = P(Y \leq y) = P(X \leq g^{-1}(y)) = F_X(g^{-1}(y))</math>.<br>
And <math>f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy} F_X(g^{-1}(y)) = f_X(g^{-1}(y)) \frac{d}{dy}g^{-1}(y)</math>.<br>
Hence:
<math display="block">
f_Y(y) = f_X(g^{-1}(y)) \frac{d}{dy} g^{-1}(y)
</math>
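As a quick worked example (the distribution is chosen here only for concreteness): let <math>Y = e^X</math>, so <math>g^{-1}(y) = \ln y</math> and <math>\frac{d}{dy} g^{-1}(y) = \frac{1}{y}</math>. Then
<math display="block">f_Y(y) = \frac{f_X(\ln y)}{y}, \quad y > 0</math>
In particular, if <math>X \sim N(0, 1)</math> this is the log-normal density.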
==Expectation and Variance==
* <math>\bar{X} \sim N(\mu, \sigma^2 / n)</math>
* <math>(n-1)S^2 / \sigma^2 \sim \chi^2(n-1)</math>
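Standardizing the first result gives <math>\frac{\bar{X} - \mu}{\sigma / \sqrt{n}} \sim N(0, 1)</math>; this is a direct consequence of the bullet above rather than an additional assumption.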
===Jensen's Inequality===
{{main | Wikipedia: Jensen's inequality}}
Let <math>g</math> be a convex function (i.e. its second derivative, where it exists, is non-negative).
Then <math>g(E(X)) \leq E(g(X))</math>.
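For example, taking <math>g(x) = x^2</math> gives <math>E(X)^2 \leq E(X^2)</math>, which is just the statement that <math>\operatorname{Var}(X) = E(X^2) - E(X)^2 \geq 0</math>.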
==Moments and Moment Generating Functions==
===Moment Generating Functions===
To compute moments, we can use a moment generating function (MGF):
<math display="block">M_X(t) = E(e^{tX})</math>
With the MGF, we can get the <math>n</math>'th moment by taking <math>n</math> derivatives with respect to <math>t</math> and setting <math display="inline">t=0</math>: <math>E(X^n) = M_X^{(n)}(0)</math>.
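For example (a standard case chosen here for illustration): if <math>X \sim \text{Exponential}(\lambda)</math>, then <math>M_X(t) = \frac{\lambda}{\lambda - t}</math> for <math>t < \lambda</math>, so <math>E(X) = M_X'(0) = \frac{1}{\lambda}</math> and <math>E(X^2) = M_X''(0) = \frac{2}{\lambda^2}</math>.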
; Notes
* The MGF, if it exists, uniquely defines the distribution.
* For independent <math>X</math> and <math>Y</math>, the MGF of <math>X+Y</math> is <math>M_{X+Y}(t) = E(e^{t(X+Y)}) = E(e^{tX})E(e^{tY}) = M_X(t) M_Y(t)</math> (see the example below).
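The two notes combine in a standard example: if <math>X \sim \text{Poisson}(\lambda_1)</math> and <math>Y \sim \text{Poisson}(\lambda_2)</math> are independent, then <math>M_X(t) = e^{\lambda_1(e^t - 1)}</math>, so <math>M_{X+Y}(t) = e^{(\lambda_1 + \lambda_2)(e^t - 1)}</math>, and by uniqueness <math>X + Y \sim \text{Poisson}(\lambda_1 + \lambda_2)</math>.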
===Characteristic Function===
==Order Statistics==
Given random variables <math>X_1, ..., X_n</math>, the order statistics are <math>X_{(1)}, ..., X_{(n)}</math>, where <math>X_{(i)}</math> represents the <math>i</math>'th smallest value.
===Min and Max===
The easiest to reason about are the minimum and maximum order statistics:
<math display="block">P(X_{(1)} \leq x) = P(\min(X_i) \leq x) = 1 - P(X_1 > x, ..., X_n > x)</math>
<math display="block">P(X_{(n)} \leq x) = P(\max(X_i) \leq x) = P(X_1 \leq x, ..., X_n \leq x)</math>
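If the <math>X_i</math> are additionally assumed to be i.i.d. with CDF <math>F</math> (a common assumption, stated here explicitly), these simplify to:
<math display="block">P(X_{(1)} \leq x) = 1 - [1 - F(x)]^n, \qquad P(X_{(n)} \leq x) = F(x)^n</math>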
===Joint PDF===
If <math>X_i</math> has pdf <math>f</math>, the joint pdf of <math>X_{(1)}, ..., X_{(n)}</math> is:
<math>
f_{X_{(1)}, ..., X_{(n)}}(x_1, ..., x_n) = n! \prod_{i=1}^{n} f(x_i), \quad x_1 \leq ... \leq x_n
</math>
since there are <math>n!</math> ways to perform the change of variables.
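For instance, if <math>X_i \sim \text{Uniform}(0, 1)</math>, then <math>f(x_i) = 1</math> and the joint pdf is simply <math>n!</math> on the region <math>0 \leq x_1 \leq ... \leq x_n \leq 1</math>.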
===Individual PDF===
<math>
f_{X_{(i)}}(x) = \frac{n!}{(i-1)!(n-i)!} F(x)^{i-1} f(x) [1-F(x)]^{n-i}
</math>
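As a check (a standard special case), for <math>X_i \sim \text{Uniform}(0, 1)</math> this reduces to <math>f_{X_{(i)}}(x) = \frac{n!}{(i-1)!(n-i)!} x^{i-1} (1-x)^{n-i}</math>, i.e. <math>X_{(i)} \sim \text{Beta}(i, n-i+1)</math>.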
==Inequalities and Limit Theorems==
Apply Markov's inequality:<br>
Let <math>Y = |X - \mu|</math><br>
Then:<br>
<math>
\begin{aligned}
P(|X - \mu| \geq k) &= P(Y \geq k) \\
&= P(Y^2 \geq k^2) \\
&\leq \frac{E(Y^2)}{k^2} \\
&= \frac{E((X - \mu)^2)}{k^2} \\
&= \frac{\operatorname{Var}(X)}{k^2}
\end{aligned}
</math>
}}
* Usually used to prove convergence in probability.
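For example (a standard application included here as an illustration), applying Chebyshev's inequality to the sample mean of i.i.d. variables with mean <math>\mu</math> and variance <math>\sigma^2</math> gives
<math display="block">P(|\bar{X}_n - \mu| \geq \epsilon) \leq \frac{\sigma^2}{n \epsilon^2} \to 0</math>
which is convergence in probability of <math>\bar{X}_n</math> to <math>\mu</math> (the weak law of large numbers).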