Bias of an estimator

Overview

In statistics, the difference between an estimator's expected value and the true value of the parameter being estimated is called the bias. An estimator or decision rule having nonzero bias is said to be biased.

Although the term bias sounds pejorative, it is not necessarily used in that way in statistics. Biased estimators may have desirable properties. Not only do they sometimes have a smaller mean squared error than any unbiased estimator, but in some cases the only unbiased estimator takes values outside the convex hull of the parameter space, so its use is absurd (the Poisson example below illustrates this).

Definition

Suppose we are trying to estimate the parameter <math>\theta</math> using an estimator <math>\widehat{\theta}</math> (that is, some function of the observed data). The bias of <math>\widehat{\theta}</math> is defined to be

<math>\operatorname{E}(\widehat{\theta})-\theta,</math>

that is, the expected value of the estimator minus the true value of the parameter. Since the expected value of the constant <math>\theta</math> is precisely <math>\theta</math>, this may be rewritten as

<math>\operatorname{E}(\widehat{\theta}-\theta),</math>

which reads "the expected value of the difference between the estimator and the true value".
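When the expectation cannot be computed in closed form, the bias can be approximated numerically. The following Python sketch (a minimal illustration, assuming the NumPy library; the helper monte_carlo_bias is a name of our own) averages an estimator over many simulated data sets:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_bias(estimator, sampler, theta, n_trials=100_000):
    """Approximate the bias E(theta_hat) - theta by simulation.

    sampler() draws one data set from a model whose true parameter
    is theta; estimator(data) returns a point estimate from it.
    """
    estimates = np.array([estimator(sampler()) for _ in range(n_trials)])
    return estimates.mean() - theta

# Example: the sample mean is an unbiased estimator of a normal mean,
# so the estimated bias should be close to 0 (up to Monte Carlo error).
mu, sigma, n = 3.0, 2.0, 10
print(monte_carlo_bias(np.mean, lambda: rng.normal(mu, sigma, n), mu))
</syntaxhighlight>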

Examples

Estimating variance

Suppose X₁, ..., Xₙ are independent and identically distributed normal random variables with expectation μ and variance σ². Let

<math>\overline{X}=(X_1+\cdots+X_n)/n</math>

be the "sample average", and let

<math>S^2=\frac{1}{n}\sum_{i=1}^n(X_i-\overline{X}\,)^2</math>

be a "sample variance". Then S2 is a "biased estimator" of σ2 because

<math>\operatorname{E}(S^2)=\frac{n-1}{n}\sigma^2\neq\sigma^2.</math>
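This follows from the standard decomposition of the sum of squares about the sample mean, sketched here for completeness:

<math>\operatorname{E}\left(\sum_{i=1}^n(X_i-\overline{X})^2\right)=\operatorname{E}\left(\sum_{i=1}^n(X_i-\mu)^2\right)-n\,\operatorname{E}\left((\overline{X}-\mu)^2\right)=n\sigma^2-n\cdot\frac{\sigma^2}{n}=(n-1)\sigma^2,</math>

and dividing by n yields the stated expectation (n − 1)σ²/n.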

Note that when a transformation is applied to an unbiased estimator, the result is not necessarily itself an unbiased estimate of the corresponding population statistic. That is, for a non-linear function f and an unbiased estimator U of a parameter p, f(U) is usually not an unbiased estimator of f(p). For example, the square root of the unbiased estimator of the population variance is not an unbiased estimator of the population standard deviation.

Bias is not the only consideration when choosing a statistic, however. Bias refers to the central tendency of the sampling distribution of a statistic, but the variance of that sampling distribution can also be an important consideration. Specifically, statistics with smaller sampling variances yield greater statistical power. For example, while the S² above is biased, in contrast to the traditional unbiased sample variance

<math>S_\mathrm{unbiased}^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\overline{X}\,)^2,</math>

it has lower sampling variability: dividing by the larger denominator n shrinks every estimate, and with it the spread of the sampling distribution. Practically, this demonstrates that for some applications (where the amount of bias can be equated between groups or conditions) a biased estimator can be the more powerful, and therefore more useful, statistic. The use of n − 1 rather than n is sometimes called Bessel's correction.
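A short simulation illustrates both points. The Python sketch below (our illustration, assuming the NumPy library) draws many samples of size n = 10 from a normal distribution with σ² = 4 and compares the two estimators:

<syntaxhighlight lang="python">
import numpy as np

# Compare the biased estimator S^2 (denominator n) with the
# unbiased one (denominator n - 1) on simulated normal data.
rng = np.random.default_rng(0)
mu, sigma, n, trials = 0.0, 2.0, 10, 200_000

data = rng.normal(mu, sigma, size=(trials, n))
s2_biased = data.var(axis=1, ddof=0)    # divides by n
s2_unbiased = data.var(axis=1, ddof=1)  # divides by n - 1

print(s2_biased.mean())    # ~ (n-1)/n * sigma^2 = 3.6
print(s2_unbiased.mean())  # ~ sigma^2 = 4.0
print(s2_biased.var() < s2_unbiased.var())  # True: smaller spread
</syntaxhighlight>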

Estimating a Poisson probability

A far more extreme case of a biased estimator being better than any unbiased estimator is well known. Suppose X has a Poisson distribution with expectation λ, and suppose we wish to estimate

<math>\operatorname{P}(X=0)^2=e^{-2\lambda}.\quad</math>

(For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then <math>e^{-2\lambda}</math> is the probability that no calls arrive in the next two minutes.)

The only function of the data constituting an unbiased estimator is

<math>\delta(X)=(-1)^X.\quad</math>
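To see that this is the only unbiased choice, note that unbiasedness requires

<math>\operatorname{E}(\delta(X))=\sum_{k=0}^\infty\delta(k)\,\frac{\lambda^k e^{-\lambda}}{k!}=e^{-2\lambda}</math>

for every λ > 0, that is, <math>\textstyle\sum_{k=0}^\infty\delta(k)\lambda^k/k!=e^{-\lambda}=\sum_{k=0}^\infty(-1)^k\lambda^k/k!</math>. Matching the two power series term by term forces <math>\delta(k)=(-1)^k</math>.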

If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is obviously very likely to be near 0, which is the opposite extreme. And if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated obviously must be positive.

The (biased) maximum likelihood estimator

<math>e^{-2X}\quad</math>

is far better than this unbiased estimator. Not only is its value always positive, but it is also more accurate in the sense that its mean squared error

<math>e^{-4\lambda}-2e^{\lambda(1/e^2-3)}+e^{\lambda(1/e^4-1)}</math>

is smaller; compare the unbiased estimator's MSE of

<math>1-e^{-4\lambda}.</math>

Both MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is

<math>e^{\lambda(1/e^2-1)}-e^{-2\lambda},</math>

which is positive for all λ > 0.
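These formulas can be checked by simulation. The Python sketch below (our illustration, assuming the NumPy library) uses λ = 2, where the unbiased estimator's MSE is 1 − e⁻⁸ ≈ 0.9997 while the MLE's is roughly 0.134:

<syntaxhighlight lang="python">
import numpy as np

# Compare the unbiased estimator (-1)^X with the MLE e^(-2X)
# as estimators of e^(-2*lambda), for lambda = 2.
rng = np.random.default_rng(0)
lam, trials = 2.0, 1_000_000
x = rng.poisson(lam, trials)
target = np.exp(-2 * lam)                 # true value, e^(-4)

unbiased = (-1.0) ** x
mle = np.exp(-2.0 * x)

print(unbiased.mean(), mle.mean())        # ~0.018 vs ~0.177 (biased up)
print(((unbiased - target) ** 2).mean())  # ~ 1 - e^(-8) = 0.9997
print(((mle - target) ** 2).mean())       # ~ 0.134, far smaller
</syntaxhighlight>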

Maximum of a discrete uniform distribution

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 through n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n + 1)/2; we can be certain only that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1, whose expectation is 2(n + 1)/2 − 1 = n.
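A quick simulation (again our illustration, assuming the NumPy library) makes the comparison concrete for n = 50:

<syntaxhighlight lang="python">
import numpy as np

# Draw one ticket uniformly from {1, ..., n} many times and
# compare the MLE (X itself) with the unbiased estimator 2X - 1.
rng = np.random.default_rng(0)
n, trials = 50, 1_000_000
x = rng.integers(1, n + 1, trials)  # uniform on 1..n inclusive

print(x.mean())            # ~ (n + 1) / 2 = 25.5: the MLE is biased low
print((2 * x - 1).mean())  # ~ n = 50: unbiased
</syntaxhighlight>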
