Bias (statistics)

In statistics, a biased estimator is one that on average overestimates or underestimates the quantity being estimated. The word bias has at least two different senses in statistics: one refers to something regarded as seriously undesirable, while the other refers to something that can at times produce results closer to the truth than an insistence on being "unbiased".

The bad kind

One meaning is involved in what is called a biased sample: if some elements of the population are more likely to be chosen for the sample than others, and those elements tend to have a higher or lower value of the quantity being estimated, the outcome will be systematically higher or lower than the true value.

A famous case of what can go wrong when using a biased sample is found in the 1936 US presidential election polls. The Literary Digest held a poll that forecast that Alfred M. Landon would defeat Franklin Delano Roosevelt by 57% to 43%. George Gallup, using a much smaller sample (300,000 rather than 2,000,000), predicted that Roosevelt would win, and he was right. What went wrong with the Literary Digest poll? It had used lists of telephone and automobile owners to select its sample. In those days these were luxuries, so the sample consisted mainly of middle- and upper-class citizens, who voted predominantly for Landon, while poorer voters favored Roosevelt. Because the sample was biased towards wealthier citizens, the result was incorrect.

This kind of bias is usually regarded as a worse problem than statistical noise: problems with statistical noise can be lessened by enlarging the sample, but a biased sample will not go away that easily. In particular, a meta-analysis will distill good data from studies that themselves suffer from statistical noise, but a meta-analysis of biased studies will itself be biased.
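The contrast between noise and bias is easy to see in a small simulation. In the Python sketch below, the population, its parameters, and the income-weighted selection rule are purely illustrative assumptions (not data from the 1936 poll): enlarging the sample shrinks the error of the simple random sample but not the error of the biased one.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population with a long right tail (an illustrative assumption).
population = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)
true_mean = population.mean()

# Selection probabilities that favor wealthier members: a biased sampling rule.
biased_p = population / population.sum()

def avg_error(n, p=None, reps=200):
    # Average error of the sample mean over `reps` repeated samples of size n.
    errors = [rng.choice(population, size=n, p=p).mean() - true_mean
              for _ in range(reps)]
    return float(np.mean(errors))

for n in (50, 5_000):
    print(f"n={n:5d}  simple random sample error ~ {avg_error(n):10.0f}"
          f"  biased sample error ~ {avg_error(n, biased_p):10.0f}")

# The simple-random-sample error shrinks towards 0 as n grows, while the biased
# sampling rule keeps overestimating the mean by a large, stable amount.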

The sometimes-good kind

Another kind of bias in statistics does not involve biased samples, but does involve the use of a statistic whose average value differs from the value of the quantity being estimated. Suppose we are trying to estimate the parameter θ using an estimator \hat{\theta} (that is, some function of the observed data). Then the bias of \hat{\theta} is defined to be

\operatorname{E}(\hat{\theta})-\theta.

In words, this would be "the expected value of the estimator \hat{\theta} minus the true value θ". This may be rewritten as

\operatorname{E}(\hat{\theta}-\theta).

which reads "the expected value of the difference between the estimator and the true value". The two expressions are equal because θ is a constant, so its expected value is simply θ.
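As a quick numerical illustration of this definition, the bias of an estimator can be approximated by simulation: generate many samples, apply the estimator to each, and compare the average estimate with the true value. The Python sketch below does this for the sample mean of a normal population; the distribution, its parameters, and the helper names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def estimated_bias(estimator, theta, sampler, n, reps=50_000):
    # Monte Carlo approximation of E(theta_hat) - theta.
    estimates = np.array([estimator(sampler(n)) for _ in range(reps)])
    return estimates.mean() - theta

# Illustrative assumption: data drawn from a normal population with known mean mu.
mu, sigma = 3.0, 2.0
sample = lambda n: rng.normal(mu, sigma, size=n)

# The sample mean is an unbiased estimator of mu, so this prints a value near 0.
print(estimated_bias(np.mean, mu, sample, n=10))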

For example, suppose X₁, ..., Xₙ are independent and identically distributed random variables with expectation μ and variance σ². Let

\overline{X}=(X_1+\cdots+X_n)/n

be the "sample average", and let

S^2=\frac{1}{n}\sum_{i=1}^n(X_i-\overline{X}\,)^2

be a "sample variance". Then S2 is a "biased estimator" of σ2 because

\operatorname{E}(S^2)=\frac{n-1}{n}\sigma^2\neq\sigma^2.
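This factor of (n − 1)/n is easy to check by simulation. In the sketch below, the normal population, σ = 3, and the sample size n = 5 are illustrative assumptions; numpy's var with its default ddof=0 divides by n, matching the definition of S² above.

import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 3.0, 5, 200_000

samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = samples.var(axis=1)          # ddof=0 by default: divides by n, like S^2 above

print(s2.mean())                  # close to (n - 1)/n * sigma^2 = 7.2
print((n - 1) / n * sigma**2)     # 7.2, the theoretical expectation of S^2
print(sigma**2)                   # 9.0, the quantity actually being estimated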

But if the sample comes from a normally distributed population, then this biased estimator is, by the commonly used criterion of "mean squared error", actually better (but only very slightly) than the unbiased estimator obtained by putting n − 1 in the denominator where n appears in the definition of S² above. Even then, the square root of the unbiased estimator of the population variance is not an unbiased estimator of the population standard deviation: for a non-linear function f and an unbiased estimator U of a parameter p, f(U) is usually not an unbiased estimator of f(p).
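The mean-squared-error comparison can also be checked numerically. In the sketch below, the normal population and the sample size n = 10 are arbitrary illustrative choices; ddof=0 gives the biased estimator S² and ddof=1 the usual unbiased one.

import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 1.0, 10, 500_000

samples = rng.normal(0.0, sigma, size=(reps, n))
biased   = samples.var(axis=1, ddof=0)   # divides by n: the S^2 defined above
unbiased = samples.var(axis=1, ddof=1)   # divides by n - 1: the unbiased estimator

mse = lambda estimates: np.mean((estimates - sigma**2) ** 2)
print(mse(biased), mse(unbiased))        # the biased version comes out slightly smaller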

A far more extreme case of a biased estimator being better than any unbiased estimator is well-known: Suppose X has a Poisson distribution with expectation λ. It is desired to estimate

\operatorname{P}(X=0)^2=e^{-2\lambda}.\quad

The only function of the data constituting an unbiased estimator is

\delta(X)=(-1)^X.\quad

If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is obviously very likely to be near 0, which is the opposite extreme. And if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated obviously must be positive. The (biased) maximum-likelihood estimator

e^{-2X}\quad

is better than this unbiased estimator in the sense that the mean squared error

e^{-4\lambda}-2e^{\lambda(1/e^2-3)}+e^{\lambda(1/e^4-1)}

is smaller. Compare the unbiased estimator's MSE of

1-e^{-4\lambda}.

Both MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is:

e^{\lambda(1/e^2-1)}-e^{-2\lambda}.
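The gap between the two estimators can be seen in a short simulation; the value of λ below is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)
lam, reps = 2.0, 1_000_000
target = np.exp(-2 * lam)            # the quantity being estimated, e^{-2 lambda}

x = rng.poisson(lam, size=reps)
unbiased = (-1.0) ** x               # delta(X) = (-1)^X, the only unbiased estimator
mle      = np.exp(-2.0 * x)          # the (biased) maximum-likelihood estimator

mse = lambda estimates: np.mean((estimates - target) ** 2)
print(mse(unbiased), 1 - np.exp(-4 * lam))   # simulated vs exact MSE of the unbiased estimator
print(mse(mle))                              # far smaller MSE
print(mle.mean() - target)                   # the (positive) bias of the MLE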

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 through n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n + 1)/2; we can only be certain that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1.
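A quick simulation (with an arbitrary choice of n) confirms that X underestimates n on average while 2X − 1 does not.

import numpy as np

rng = np.random.default_rng(0)
n_true, reps = 1_000, 200_000

# One ticket drawn uniformly from 1..n, repeated many times.
x = rng.integers(1, n_true + 1, size=reps)

print(x.mean())             # about (n + 1)/2 = 500.5: the MLE X underestimates n on average
print((2 * x - 1).mean())   # about 1000: 2X - 1 is the natural unbiased estimator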
