;Notes
* If <math>\hat{\theta}</math> is the MLE for <math>\theta</math>, then the MLE for <math>g(\theta)</math> is <math>g(\hat{\theta})</math> (the invariance property of the MLE; see the sketch below)
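A minimal numerical sketch of the invariance property, assuming an i.i.d. exponential sample (the rate value and variable names are illustrative):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # Exp with rate lambda = 0.5

# The MLE of the rate lambda for Exp(lambda) is 1 / sample mean.
lam_hat = 1.0 / x.mean()

# By invariance, the MLE of g(lambda) = 1/lambda (the mean) is g(lam_hat),
# which recovers the sample mean without re-maximizing the likelihood.
mean_hat = 1.0 / lam_hat
print(lam_hat, mean_hat)  # roughly 0.5 and 2.0
</syntaxhighlight>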
===Uniformly Minimum Variance Unbiased Estimator (UMVUE)===
{{main | Wikipedia: Minimum-variance unbiased estimator}}
UMVUE is sometimes abbreviated as MVUE or UMVU.<br>
See [[Wikipedia: Lehmann–Scheffé theorem]]<br>
An unbiased estimator that is a function of a complete sufficient statistic is a UMVUE.<br>
In general, you should find a complete sufficient statistic using the properties of exponential families,<br>
then rescale it by a suitable factor to make it unbiased; the result is the UMVUE.<br>
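As a worked instance of this recipe: if <math>X_1, \ldots, X_n</math> are i.i.d. <math>\mathrm{Poisson}(\lambda)</math>, then <math>T = \sum_i X_i</math> is complete sufficient (a full-rank one-parameter exponential family), and <math>\bar{X} = T/n</math> is unbiased for <math>\lambda</math>, so by the Lehmann–Scheffé theorem <math>\bar{X}</math> is the UMVUE of <math>\lambda</math>.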
===Properties===
====Unbiased====
An estimator <math>\hat{\theta}</math> is unbiased for <math>\theta</math> if <math>E[\hat{\theta}] = \theta</math>.
* The single observation <math>X_n</math> is unbiased for <math>E[X]</math> but is not consistent (see the simulation sketch below)
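A small simulation sketch of this point, assuming a normal population with mean 5 (all parameter values are illustrative): the estimator that keeps only the last observation averages to the right value but never concentrates as <math>n</math> grows.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[X] = 5 using only the last observation X_n of each sample.
for n in (10, 100, 1000):
    samples = rng.normal(loc=5.0, scale=2.0, size=(5_000, n))
    est = samples[:, -1]             # X_n from each replication
    print(n, est.mean(), est.std())  # mean stays near 5, spread stays near 2
</syntaxhighlight>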
====Consistent====
An estimator <math>\hat{\theta}</math> is consistent for <math>\theta</math> if it converges in probability to <math>\theta</math>.
* Example: <math>\frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^2</math> is a consistent estimator for <math>\sigma^2</math> under <math>N(\mu, \sigma^2)</math> but is not unbiased (see the sketch below).
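A companion sketch, again with illustrative parameter values: the <math>1/n</math> variance estimator has expectation <math>\frac{n-1}{n}\sigma^2</math>, so it is biased for every finite <math>n</math>, but the bias vanishes and the estimator converges to <math>\sigma^2</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0

for n in (5, 50, 500):
    samples = rng.normal(loc=0.0, scale=sigma2**0.5, size=(5_000, n))
    est = samples.var(axis=1, ddof=0)  # the 1/n estimator
    # The average estimate is about (n-1)/n * sigma^2: noticeably biased
    # for small n, approaching sigma^2 = 4 as n grows (consistency).
    print(n, est.mean())
</syntaxhighlight>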
===Efficiency===
;Fisher information
* <math>I(\theta) = E[ (\frac{\partial}{\partial \theta} \log f(X; \theta) )^2 | \theta]</math>
* or, if <math>\log f(x; \theta)</math> is twice differentiable, <math>I(\theta) = -E[ \frac{\partial^2}{\partial \theta^2} \log f(X; \theta) | \theta]</math>
* <math>I_n(\theta) = n I(\theta)</math> is the Fisher information of an i.i.d. sample of size <math>n</math>; equivalently, replace <math>f</math> with the full likelihood (see the symbolic check below).
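A symbolic check of the two formulas above, assuming the Bernoulli(<math>p</math>) family as the example (the sympy route is just one way to do this):
<syntaxhighlight lang="python">
import sympy as sp

p, x = sp.symbols('p x', positive=True)

# Log-density of one Bernoulli(p) observation.
logf = x * sp.log(p) + (1 - x) * sp.log(1 - p)

# Second-derivative form: I(p) = -E[d^2/dp^2 log f(X; p)].
d2 = sp.diff(logf, p, 2)
# d2 is linear in x, so taking the expectation amounts to substituting
# E[X] = p.
I_p = sp.simplify(-d2.subs(x, p))
print(I_p)  # equals 1/(p*(1 - p)), matching the first-derivative form
</syntaxhighlight>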
====Cramér–Rao Lower Bound====
{{main | Wikipedia: Cramér–Rao bound}}
Given an estimator <math>T(X)</math>, let <math>\psi(\theta)=E[T(X)]</math>.
Then <math>Var(T) \geq \frac{(\psi'(\theta))^2}{I(\theta)}</math>.
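A standard check that the bound can be attained: for <math>X_1, \ldots, X_n</math> i.i.d. <math>N(\mu, \sigma^2)</math> with <math>\sigma^2</math> known and <math>T = \bar{X}</math>, we have <math>\psi(\mu) = \mu</math>, <math>\psi'(\mu) = 1</math>, and the sample information is <math>I_n(\mu) = n/\sigma^2</math>, so the bound is <math>\sigma^2/n</math>, which is exactly <math>Var(\bar{X})</math>; hence <math>\bar{X}</math> is efficient.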