===Uniformly Minimum Variance Unbiased Estimator (UMVUE)===
{{main | Wikipedia: Minimum-variance unbiased estimator}}
UMVUE is sometimes also called MVUE or UMVU.<br>
See [[Wikipedia: Lehmann–Scheffé theorem]]<br>
An unbiased estimator that is a function of a complete sufficient statistic is a UMVUE.<br>
In general, you should find a complete sufficient statistic using the properties of exponential families.<br>
Then rescale it with appropriate factors to make it unbiased; the result is the UMVUE, as in the example below.<br>
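A standard textbook illustration of this recipe (not from the original article): let <math>X_1, \ldots, X_n</math> be iid <math>N(\mu, \sigma^2)</math>, a two-parameter exponential family, so <math>\left(\bar{X}, \sum_{i=1}^n (X_i - \bar{X})^2\right)</math> is complete and sufficient. Since
: <math>E\left[\sum_{i=1}^n (X_i - \bar{X})^2\right] = (n-1)\sigma^2,</math>
rescaling by the factor <math>\frac{1}{n-1}</math> gives the sample variance <math>S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2</math>, which is unbiased and a function of the complete sufficient statistic, hence the UMVUE of <math>\sigma^2</math>.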
===Properties===
====Unbiased====
An estimator <math>\hat{\theta}</math> is unbiased for <math>\theta</math> if <math>E[\hat{\theta}] = \theta</math>.
* Example: the single observation <math>X_n</math> is unbiased for <math>E[X]</math>, but it is not consistent, since its distribution does not concentrate around <math>E[X]</math> as <math>n</math> grows.
====Consistent====
An estimator <math>\hat{\theta}</math> is consistent for <math>\theta</math> if it converges in probability to <math>\theta</math>.
* Example: <math>\frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^2</math> is a consistent estimator of <math>\sigma^2</math> under <math>N(\mu, \sigma^2)</math>, but it is not unbiased; see the check below.
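A one-line check of the bias (a standard computation, not in the original):
: <math>E\left[\frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^2\right] = \frac{n-1}{n}\sigma^2 \neq \sigma^2,</math>
so the estimator is biased for every finite <math>n</math>, but the bias <math>-\sigma^2/n</math> vanishes as <math>n \to \infty</math>, and consistency follows from the law of large numbers.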


===Efficiency===
* <math>I(\theta) = E[ (\frac{\partial}{\partial \theta} \log f(X; \theta) )^2 | \theta]</math>
* or, if <math>\log f(x; \theta)</math> is twice differentiable in <math>\theta</math>, <math>I(\theta) = -E[ \frac{\partial^2}{\partial \theta^2} \log f(X; \theta) | \theta]</math>
* <math>I_n(\theta) = n\,I(\theta)</math> is the Fisher information of the full sample; equivalently, replace <math>f</math> with the full likelihood of the sample. A worked example follows below.
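As a worked example of the second formula (a standard computation; the Bernoulli model is not in the original): for <math>X \sim \mathrm{Bernoulli}(p)</math>, <math>\log f(x; p) = x \log p + (1-x)\log(1-p)</math>, so
: <math>\frac{\partial^2}{\partial p^2} \log f(x; p) = -\frac{x}{p^2} - \frac{1-x}{(1-p)^2},</math>
and taking <math>-E[\,\cdot\,]</math> with <math>E[X] = p</math> gives
: <math>I(p) = \frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)},</math>
so a sample of size <math>n</math> carries Fisher information <math>I_n(p) = \frac{n}{p(1-p)}</math>.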


====Cramér–Rao Lower Bound====