
An Unbiased Estimator of the Variance

Overview

The purpose of this document is to explain, in the clearest possible language, why "n − 1" is used in the formula for computing the variance of a sample.
The Mean of a Probability Distribution (Population)

The Mean of a distribution is its long-run average. Alternatively, you could say it is the probability-weighted average of each possible value. The symbol for the Mean of a distribution is μ.

We could restate the above by using this formula: μ = Σ [x_i * p(x_i)]
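To make this concrete, here is a quick Python sketch of the formula for the die example used later in this document (each face has probability 1/6):

```python
# Probability-weighted mean of a fair six-sided die: μ = Σ x_i * p(x_i)
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # every face is equally likely
mu = sum(x * p for x in values)
print(mu)  # ≈ 3.5
```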
The Sample Mean of a Sample Taken from a Probability Distribution

The Sample Mean is the probability-weighted average of the observations in a sample. Typically we assume we are dealing with an unbiased sample, which means that we are assuming the probability of each observation occurring is 1/n, where n is the sample size. So if you roll a die 8 times, your sample size (your n) is 8 and the probability of each observation is 1/8. The symbol for the Sample Mean is x̄.

We could restate the above by using this formula: x̄ = Σ [x_i * (1/n)]
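As a sketch, here is that formula applied to a hypothetical run of 8 die rolls (the roll values are made up for illustration):

```python
# Sample mean: each of the n observations gets weight 1/n.
sample = [3, 6, 1, 4, 4, 2, 5, 6]  # hypothetical run of 8 die rolls
n = len(sample)
x_bar = sum(x * (1 / n) for x in sample)
print(x_bar)  # same as sum(sample)/n, here ≈ 3.875
```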
The Variance of a Probability Distribution (Population)

The Variance is the Expected Value of the squared deviations from the mean.

By "deviations from the mean" we mean (x_i − μ), where x_i is a single particular value from a distribution and μ is the mean of the distribution. If we think about the roll of a single die, then x_i might be 1 through 6 and μ is 3.5. In the die roll example, since there are only 6 possible values, there are also 6 possible deviations from the mean. They are:

1 − 3.5 = −2.5
2 − 3.5 = −1.5
3 − 3.5 = −0.5
4 − 3.5 = +0.5
5 − 3.5 = +1.5
6 − 3.5 = +2.5
By "squared deviations from the mean" we mean the previous set of numbers squared. Squaring the numbers has the beneficial effect of making every number positive. Without squaring the numbers, the expected value of the deviations from the mean would be zero. In other words, the average of −2.5, −1.5, −0.5, +0.5, +1.5, +2.5 is zero. Here are the numbers for the die roll example squared:

(1 − 3.5)² = (−2.5)² = 6.25
(2 − 3.5)² = (−1.5)² = 2.25
(3 − 3.5)² = (−0.5)² = 0.25
(4 − 3.5)² = (+0.5)² = 0.25
(5 − 3.5)² = (+1.5)² = 2.25
(6 − 3.5)² = (+2.5)² = 6.25
By "Expected Value" we mean the long-run average. It is the probability-weighted average of the values. For the die roll example, in the long run each side has a 1/6 chance of appearing. So the expected value of the squared deviations from the mean is (6.25 * 1/6) + (2.25 * 1/6) + (0.25 * 1/6) + (0.25 * 1/6) + (2.25 * 1/6) + (6.25 * 1/6) = 2 11/12 ≈ 2.91666.

In the simple case where every x_i has the same probability, you could write this as (6.25 + 2.25 + 0.25 + 0.25 + 2.25 + 6.25) / 6.
The Variance is denoted using this symbol: σ²

Expected Value is denoted by this symbol: E(x)

You could write the expected value of the squared deviations from the mean like this:

σ² = E[(x_i − μ)²]

You could also write it like this:

σ² = Σ[(x_i − μ)² * p(x_i)]

Σ is a symbol meaning to sum all values. You would sum all values from "i" equals 1 to n, where n is the total number of values.

* just means to multiply.

p(x) is the probability that a particular x will occur.
We applied this formula for the die roll example above (repeated here):

((1 − 3.5)² * 1/6) + ((2 − 3.5)² * 1/6) + ((3 − 3.5)² * 1/6) + ((4 − 3.5)² * 1/6) + ((5 − 3.5)² * 1/6) + ((6 − 3.5)² * 1/6) = 2.916666
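The same computation can be sketched in Python, using the probability-weighted form of the variance formula:

```python
# Population variance of a fair die: σ² = Σ (x_i − μ)² * p(x_i)
values = [1, 2, 3, 4, 5, 6]
mu = sum(values) / len(values)                      # 3.5
var = sum((x - mu) ** 2 * (1 / 6) for x in values)  # each face has p = 1/6
print(var)  # ≈ 2.9166...
```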
For the purposes of this document, we'll only be looking at cases where the probability of each occurrence is equal. So for the rest of the document we'll be using a slightly simpler version of the above formula that can only be used if the probability of each occurrence is equal. Again, we use this version because it is simpler to read and understand, and it is all we'll need.

The variation on the formula is:

σ² = Σ[(x_i − μ)²] / n

You can also write the above formula like this:

σ² = Σ(x_i²)/n − μ²

See Appendix A for a derivation of this alternate form of the formula.
In the special case of each x_i having probability 1/n (meaning p(x_i) is 1/n for all "i"), you can use this formula:

σ² = [Σ(x_i²) − nμ²] / n

σ² = Σ(x_i²)/n − μ²
For the die roll example, that would be:

Step 1) (1² + 2² + 3² + 4² + 5² + 6²)/6 − 3.5²

Step 2) (1 + 4 + 9 + 16 + 25 + 36)/6 − 12.25

Step 3) (91/6) − 12.25

Step 4) 15.16666 − 12.25 = 2.916666
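The steps above can be sketched and checked against the definition form of the variance:

```python
# Check the two population-variance formulas against each other
# for the die example: Σ(x_i − μ)²/n versus Σ(x_i²)/n − μ²
values = [1, 2, 3, 4, 5, 6]
n = len(values)
mu = sum(values) / n                                  # 3.5
var_direct = sum((x - mu) ** 2 for x in values) / n   # definition form
var_alt = sum(x * x for x in values) / n - mu ** 2    # alternate form
print(var_direct, var_alt)  # both ≈ 2.9166...
```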
Note: In this case we have a probability distribution that has an equal probability for each possibility (for each x). That is just a coincidence. Later we will look at samples from a distribution (with the die roll example, we would be talking about rolling a single die, typically two or more times, to get a sample). In general, no matter what the underlying population looks like, we will assume that samples from that population are equally likely, i.e., that each sample has an equal probability of occurring (this is synonymous with saying our sampling is unbiased).
The Variance of a Sample from a Probability Distribution

The formula for the variance of a sample taken from a Probability Distribution is:

s² = Σ[(x_i − x̄)²] / n

Important Note: Σ[(x_i − μ)²] ≠ Σ[(x_i − x̄)²]

Why? The main reason is that the sample mean (x̄) is not equal to the "true" mean (μ) of the population (x̄ ≠ μ).
μ is the true population mean and is a constant number that can be computed when you know all of the possible x_i values.

x̄ is the sample mean for a particular sample of size n. It is a random variable that has an expected value of μ and a standard deviation that is related to the standard deviation of the population by this formula:

σ_x̄ = σ_x / √n
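This relationship can be checked by simulation: draw many samples, compute each sample's mean, and compare the spread of those means to σ_x / √n. The sample size, seed, and trial count below are arbitrary choices.

```python
import random
import statistics

# Simulation sketch: the standard deviation of the sample mean
# should be close to sigma_x / sqrt(n).
random.seed(42)
values = [1, 2, 3, 4, 5, 6]
mu = 3.5
sigma_x = (sum((x - mu) ** 2 for x in values) / 6) ** 0.5  # die's std. dev.

n = 16          # sample size
trials = 20000  # number of repeated samples
means = [statistics.fmean(random.choices(values, k=n)) for _ in range(trials)]

print(statistics.pstdev(means))  # observed spread of the sample mean
print(sigma_x / n ** 0.5)        # predicted: σ_x / √n ≈ 0.427
```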
The sample mean is normally a random variable with a particular mean and variance of its own. However, when used in the context of the s² formula (Σ[(x_i − x̄)²] / n), the sample mean should not be thought of as a random variable at all. It is completely determined by the x_i values and should not even be thought of as a separate variable. In fact, you can rewrite the formula to get rid of the x̄ term entirely by replacing it with the formula used to calculate it.

I.e., s² = Σ[(x_i − Σ(x_i)/n)²] / n
For example, when n equals 2, this becomes:

s² = Σ[(x_i − (x_1 + x_2)/2)²] / 2

which is equivalent to this:

s² = [(x_1 − x̄)² + (x_2 − x̄)²] / 2

You can further reduce this to this (when n = 2):

s² = (½x_1 − ½x_2)²

See Appendix B for more details.
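A quick numeric sketch confirms the n = 2 reduction (the two values are arbitrary illustrative numbers):

```python
# For n = 2, s² = Σ(x_i − x̄)²/2 collapses to (½x₁ − ½x₂)².
x1, x2 = 5.0, 2.0
x_bar = (x1 + x2) / 2
s2_direct = ((x1 - x_bar) ** 2 + (x2 - x_bar) ** 2) / 2
s2_reduced = (0.5 * x1 - 0.5 * x2) ** 2
print(s2_direct, s2_reduced)  # both 2.25
```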
So the question is: if s² = Σ[(x_i − x̄)²]/n, then what is the expected value of s², i.e., what is E(s²)? If it is equal to σ², then s² is an unbiased estimator of σ². As it turns out, s² is not an unbiased estimator of σ².

First, let's write this formula:

s² = Σ[(x_i − x̄)²] / n

like this:

s² = [Σ(x_i²) − nx̄²] / n

(see Appendix A for more details)
Next, let's subtract μ from each x_i. This will leave s² unchanged as long as we also subtract it from x̄.

So we start with this:

s² = [Σ(x_i²) − nx̄²] / n

and get this:

s² = [Σ(x_i − μ)² − n(x̄ − μ)²] / n

(See Appendix C for details)
Here we'll find the expected value of s²:

Step 1) s² = Σ[(x_i − x̄)²] / n

This is the starting point.

Step 2) ns² = Σ[(x_i − x̄)²]

Multiply both sides by n to make the formulas easier to read.

Step 3) ns² = Σ[(x_i − μ − x̄ + μ)²]

Add and subtract μ, the population mean. Notice that adding and subtracting the same number nets to zero, so this is OK.

Step 4) ns² = Σ[(x_i − μ)²] − n(x̄ − μ)²

The right side is shown to be the same as the formula for ns² in Appendix C. Or you could say Σ[(x_i − μ)²] − n(x̄ − μ)² = Σ[(x_i − x̄)²] is proven in Appendix C.
Step 5) E(ns²) = nσ² − n(x̄ − μ)²

Take the expected value and replace Σ[(x_i − μ)²] with nσ². Why? By definition σ² = E[(x_i − μ)²], which equals Σ[(x_i − μ)²]/n when the probability of each x_i is identical, which is the case as we are assuming each sample has the same probability.

So if σ² = Σ[(x_i − μ)²]/n, then nσ² = Σ[(x_i − μ)²], so we are able to replace this term in the equation.
Step 6) E(ns²) = nσ² − Σ[(x̄ − μ)²]

Since n(x̄ − μ)² = Σ[(x̄ − μ)²].

Step 7) E(ns²) = nσ² − nσ_x̄²

Replace Σ[(x̄ − μ)²] with nσ_x̄².

Why? If σ² = E[(x_i − μ)²], then σ_x̄² = E[(x̄ − μ)²].

If σ² = Σ[(x_i − μ)²]/n, then σ_x̄² = Σ[(x̄ − μ)²]/n (when the probability of each item is equal).

If σ_x̄² = Σ[(x̄ − μ)²]/n, then multiply both sides by n to get nσ_x̄² = Σ[(x̄ − μ)²].
Step 8) E(ns²) = nσ² − σ²

Replace nσ_x̄² with σ².

Why? We have seen previously that σ_x̄² = σ²/n. That is, the variance of the sample mean is equal to the variance of the original probability distribution divided by n, where n is the sample size. Since σ_x̄² = σ²/n, then σ² = nσ_x̄².

Step 9) E(ns²) = (n − 1)σ²

Factor out the σ².

Step 10) E(s²) = (n − 1)σ²/n

Divide both sides by n.
Therefore the expected value of s² is not σ². To get an unbiased estimator, use this instead:

s² = Σ[(x_i − x̄)²]/(n − 1)

since E(Σ[(x_i − x̄)²]/(n − 1)) = σ²
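The whole result can be checked by simulation: repeatedly sample from the die distribution, compute the variance both ways, and average. The sample size, seed, and trial count below are arbitrary choices.

```python
import random
import statistics

# Simulation sketch of the main result: dividing by n underestimates
# σ² by the factor (n − 1)/n, while dividing by n − 1 does not.
random.seed(0)
values = [1, 2, 3, 4, 5, 6]
sigma2 = 17.5 / 6   # true population variance of a die, ≈ 2.9167
n = 4
trials = 50000

biased, unbiased = [], []
for _ in range(trials):
    sample = random.choices(values, k=n)
    x_bar = sum(sample) / n
    ss = sum((x - x_bar) ** 2 for x in sample)
    biased.append(ss / n)          # divide by n
    unbiased.append(ss / (n - 1))  # divide by n − 1

print(statistics.fmean(biased))    # ≈ (n−1)/n · σ² ≈ 2.19
print(statistics.fmean(unbiased))  # ≈ σ² ≈ 2.92
```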
Appendix A

Going from this:

σ² = Σ[(x_i − μ)²] / n

to this:

σ² = Σ(x_i²)/n − μ²

Summary

Step 1) σ² = Σ[(x_i − μ)²] / n

Step 2) nσ² = Σ[(x_i − μ)²]

Step 3) nσ² = Σ[(x_i − μ) * (x_i − μ)]

Step 4) nσ² = Σ[x_i² − μx_i − μx_i + μ²]

Step 5) nσ² = Σ[x_i² − 2μx_i + μ²]

Step 6) nσ² = Σ(x_i²) − Σ(2μx_i) + Σ(μ²)

Step 7) nσ² = Σ(x_i²) − [2μ * Σ(x_i)] + Σ(μ²)

Step 8) nσ² = Σ(x_i²) − [2μ * Σ(x_i)] + nμ²

Step 9) nσ² = Σ(x_i²) − [2μ * nμ] + nμ²

Step 10) nσ² = Σ(x_i²) − 2nμ² + nμ²

Step 11) nσ² = Σ(x_i²) − nμ²

Step 12) σ² = Σ(x_i²)/n − μ²
Details

Step 1) σ² = Σ[(x_i − μ)²] / n

This is just the normal formula for the variance of a population.

Step 2) nσ² = Σ[(x_i − μ)²]

Multiply both sides by n. The only reason to do this is to make it easier to read. Our last step will undo this by dividing both sides by n.

Step 3) nσ² = Σ[(x_i − μ) * (x_i − μ)]

Write this out in a longer form. So instead of writing a², write a * a.

Step 4) nσ² = Σ[x_i² − μx_i − μx_i + μ²]

Perform the multiplication. Remember FOIL (First, Outer, Inner, Last)? So instead of writing (a − b) * (a − b), write (a² − ab − ab + b²).

Step 5) nσ² = Σ[x_i² − 2μx_i + μ²]

This completes the expansion begun in Step 4 by combining the two middle terms.

Step 6) nσ² = Σ(x_i²) − Σ(2μx_i) + Σ(μ²)

Move the summation signs next to each term. You can do this because you are just adding or subtracting each term. You couldn't do this if you were multiplying or dividing each term.

Step 7) nσ² = Σ(x_i²) − [2μ * Σ(x_i)] + Σ(μ²)

Move the 2μ to the outside of the summing of the x_i terms. Why is this OK? Remember that the Σ symbol means to sum all of the terms for "i" equals 1 to n. Since the 2 and the μ are constants, and therefore unaffected by the particular value of x_i, you can move them outside the summation notation. You wind up multiplying once at the end of the summing rather than multiplying for each loop in the summing process, but you get the same result.

Step 8) nσ² = Σ(x_i²) − [2μ * Σ(x_i)] + nμ²

Σ(μ²) becomes nμ². Why? Remember that the Σ symbol specifically means to sum for all values of x_i from "i" equals 1 to n. Since μ is a constant across all values of x_i, you can just multiply μ² by n to get the same result as you would get by summing it n times.

Step 9) nσ² = Σ(x_i²) − [2μ * nμ] + nμ²

Σ(x_i) becomes nμ. Why? Since each x_i has probability 1/n, μ = Σ(x_i)/n, so Σ(x_i) = nμ.

Step 10) nσ² = Σ(x_i²) − 2nμ² + nμ²

This is just rewriting the formula to make the middle term easier to read.

Step 11) nσ² = Σ(x_i²) − nμ²

Combine the last two terms on the right side of the equation.

Step 12) σ² = Σ(x_i²)/n − μ²

Divide both sides by n to get the result we wanted: the alternate formula for σ².
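As a sketch, the identity derived above should hold for any population with equal probabilities, not just the die. The values here are arbitrary illustrative numbers:

```python
# Check Σ(x_i − μ)²/n == Σ(x_i²)/n − μ² on an arbitrary population.
xs = [2.0, 7.0, 1.0, 9.0, 4.0]
n = len(xs)
mu = sum(xs) / n
lhs = sum((x - mu) ** 2 for x in xs) / n      # Σ(x_i − μ)²/n
rhs = sum(x * x for x in xs) / n - mu ** 2    # Σ(x_i²)/n − μ²
print(abs(lhs - rhs) < 1e-9)  # True
```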
Appendix B

This: [(x_1 − x̄)² + (x_2 − x̄)²] / 2

becomes: (½x_1 − ½x_2)²

Summary

Step 1) [(x_1 − x̄)² + (x_2 − x̄)²] / n

Step 2) [(x_1 − (x_1 + x_2)/n)² + (x_2 − (x_1 + x_2)/n)²] / n

Why? x̄ = (x_1 + x_2) / n

Step 3) [(x_1 − (x_1 + x_2)/2)² + (x_2 − (x_1 + x_2)/2)²] / 2

Step 4) [(x_1 − x_1/2 − x_2/2)² + (x_2 − x_1/2 − x_2/2)²] / 2

Step 5) [(½x_1 − x_2/2)² + (½x_2 − x_1/2)²] / 2

Step 6) [(½x_1 − ½x_2)² + (½x_2 − ½x_1)²] / 2

Step 7) [(½x_1 − ½x_2) * (½x_1 − ½x_2) + (½x_2 − ½x_1) * (½x_2 − ½x_1)] / 2

(a − b) * (a − b) = a² − ba − ba + b²

Step 8) [((½x_1)² − (½ * ½ * x_1x_2) − (½ * ½ * x_1x_2) + (½x_2)²) + ((½x_2)² − (½ * ½ * x_2x_1) − (½ * ½ * x_2x_1) + (½x_1)²)] / 2

Step 9) [((½x_1)² − ¼x_1x_2 − ¼x_1x_2 + (½x_2)²) + ((½x_2)² − ¼x_2x_1 − ¼x_2x_1 + (½x_1)²)] / 2

Step 10) [(½x_1)² + (½x_2)² + (½x_2)² + (½x_1)² − 2x_1x_2] / 2

Step 11) [¼x_1² + ¼x_2² + ¼x_2² + ¼x_1² − 2x_1x_2] / 2

Step 12) [½x_1² + ½x_2² − x_1x_2] / 2

Step 13) ¼x_1² − ½x_1x_2 + ¼x_2²

Step 14) ¼x_1² − ¼x_1x_2 − ¼x_1x_2 + ¼x_2²

Step 15) (½x_1 − ½x_2) * (½x_1 − ½x_2)

Step 16) (½x_1 − ½x_2)²
With n = 3 you get:

Step 1) s² = [(x_1 − x̄)² + (x_2 − x̄)² + (x_3 − x̄)²] / n

Step 2) s² = [(x_1 − (x_1 + x_2 + x_3)/n)² + (x_2 − (x_1 + x_2 + x_3)/n)² + (x_3 − (x_1 + x_2 + x_3)/n)²] / n

(Since x̄ = (x_1 + x_2 + x_3) / n)

Step 3) s² = [(x_1 − (x_1 + x_2 + x_3)/3)² + (x_2 − (x_1 + x_2 + x_3)/3)² + (x_3 − (x_1 + x_2 + x_3)/3)²] / 3

Step 4) s² = [(x_1 − x_1/3 − x_2/3 − x_3/3)² + (x_2 − x_1/3 − x_2/3 − x_3/3)² + (x_3 − x_1/3 − x_2/3 − x_3/3)²] / 3

Step 5) s² = [(⅔x_1 − x_2/3 − x_3/3)² + (⅔x_2 − x_1/3 − x_3/3)² + (⅔x_3 − x_1/3 − x_2/3)²] / 3

Step 6) The above formula can be reduced further (but not here, due to space constraints).
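The expanded n = 3 form can be sketched and checked against the direct s² formula (the three values are arbitrary illustrative numbers):

```python
# Check the expanded n = 3 form against the direct s² formula.
x1, x2, x3 = 2.0, 5.0, 8.0
x_bar = (x1 + x2 + x3) / 3
direct = ((x1 - x_bar) ** 2 + (x2 - x_bar) ** 2 + (x3 - x_bar) ** 2) / 3
expanded = ((2/3 * x1 - x2/3 - x3/3) ** 2
            + (2/3 * x2 - x1/3 - x3/3) ** 2
            + (2/3 * x3 - x1/3 - x2/3) ** 2) / 3
print(direct, expanded)  # the two values agree
```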
Appendix C

This: [Σ(x_i − μ)² − n(x̄ − μ)²] / n

is equivalent to this: [Σ(x_i²) − nx̄²] / n

Here are the steps to go from one to the other:

Step 1) s² = [Σ(x_i − μ)² − n(x̄ − μ)²] / n

Step 2) s² = [Σ[(x_i − μ) * (x_i − μ)] − n[(x̄ − μ) * (x̄ − μ)]] / n

Step 3) s² = [Σ(x_i² − 2μx_i + μ²) − n(x̄² − 2μx̄ + μ²)] / n

Step 4) s² = [Σ(x_i²) − Σ(2μx_i) + Σ(μ²) − nx̄² + 2nμx̄ − nμ²] / n

Step 5) s² = [Σ(x_i²) − 2μΣ(x_i) + nμ² − nx̄² + 2nμx̄ − nμ²] / n

Step 6) s² = [Σ(x_i²) − 2μΣ(x_i) − nx̄² + 2nμx̄] / n

Step 7) s² = [Σ(x_i²) − 2μnx̄ − nx̄² + 2nμx̄] / n

(Since Σ(x_i) = nx̄)

Step 8) s² = [Σ(x_i²) − nx̄²] / n
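As a sketch, the Appendix C identity holds for any sample and any constant μ (the sample values and the μ below are arbitrary illustrative numbers):

```python
# Check [Σ(x_i − μ)² − n(x̄ − μ)²]/n == [Σ(x_i²) − n·x̄²]/n
xs = [3.0, 8.0, 1.0, 6.0]  # hypothetical sample
mu = 4.2                   # any constant works; μ need not equal x̄
n = len(xs)
x_bar = sum(xs) / n
lhs = (sum((x - mu) ** 2 for x in xs) - n * (x_bar - mu) ** 2) / n
rhs = (sum(x * x for x in xs) - n * x_bar ** 2) / n
print(abs(lhs - rhs) < 1e-9)  # True
```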