Normal distribution
Summary of the normal distribution with mean μ and variance σ²:

Quantity | Expression |
---|---|
Probability density function | <math>\frac{1}{\sigma\sqrt{2\pi}}\;\exp\left(-\frac{\left(x-\mu\right)^2}{2\sigma^2}\right)</math> |
Cumulative distribution function | <math>\frac12 \left(1+\mathrm{erf}\,\frac{x-\mu}{\sigma\sqrt2}\right)</math> |
Mean | <math>\mu</math> |
Median | <math>\mu</math> |
Mode | <math>\mu</math> |
Variance | <math>\sigma^2</math> |
Skewness | 0 |
Excess kurtosis | 0 |
Entropy | <math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right)</math> |
Moment-generating function | <math>M_X(t)= \exp\left(\mu\,t+\frac{\sigma^2 t^2}{2}\right)</math> |
Characteristic function | <math>\chi_X(t)=\exp\left(\mu\,i\,t-\frac{\sigma^2 t^2}{2}\right)</math> |
Editor-In-Chief: C. Michael Gibson, M.S., M.D.
The normal distribution, also called the Gaussian distribution, is an important family of continuous probability distributions, applicable in many fields. Each member of the family may be defined by two parameters, location and scale: the mean ("average", μ) and the variance (the standard deviation squared, σ²), respectively. The standard normal distribution is the normal distribution with a mean of zero and a variance of one. Carl Friedrich Gauss became associated with this set of distributions when he analyzed astronomical data using them,[1] and defined the equation of its probability density function. It is often called the bell curve because the graph of its probability density resembles a bell.
The importance of the normal distribution as a model of quantitative phenomena in the natural and behavioral sciences is due to the central limit theorem. Many psychological measurements and physical phenomena (like noise) can be approximated well by the normal distribution. While the mechanisms underlying these phenomena are often unknown, the use of the normal model can be theoretically justified by assuming that many small, independent effects are additively contributing to each observation.
The normal distribution also arises in many areas of statistics. For example, the sampling distribution of the sample mean is approximately normal, even if the distribution of the population from which the sample is taken is not normal. In addition, the normal distribution maximizes information entropy among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions.
History
The normal distribution was first introduced by Abraham de Moivre in an article in 1733, which was reprinted in the second edition of his The Doctrine of Chances, 1738 in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the theorem of de Moivre-Laplace.
Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors.
The name "bell curve" goes back to Jouffret who first used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875. Despite this terminology, other probability distributions may be more appropriate in some contexts; see the discussion of occurrence, below.
Characterization
There are various ways to characterize a probability distribution. The most visual is the probability density function (PDF). Equivalent ways are the cumulative distribution function, the moments, the cumulants, the characteristic function, the moment-generating function, the cumulant-generating function, and Maxwell's theorem. See probability distribution for a discussion.
To indicate that a real-valued random variable X is normally distributed with mean μ and variance σ² ≥ 0, we write
- <math>X \sim N(\mu, \sigma^2).\,\!</math>
While it is certainly useful for certain limit theorems (e.g. asymptotic normality of estimators) and for the theory of Gaussian processes to consider the probability distribution concentrated at μ (see Dirac measure) as a normal distribution with mean μ and variance σ² = 0, this degenerate case is often excluded from the considerations because no density with respect to the Lebesgue measure exists.
The normal distribution may also be parameterized using a precision parameter τ, defined as the reciprocal of σ². This parameterization has an advantage in numerical applications where σ² is very close to zero and is more convenient to work with in analysis as τ is a natural parameter of the normal distribution.
Probability density function
The continuous probability density function of the normal distribution is the Gaussian function
- <math>\varphi_{\mu,\sigma^2}(x) = \frac{1}{\sigma\sqrt{2\pi}} \,e^{ -\frac{(x- \mu)^2}{2\sigma^2}} = \frac{1}{\sigma} \varphi\left(\frac{x - \mu}{\sigma}\right),\quad x\in\mathbb{R},</math>
where σ > 0 is the standard deviation, the real parameter μ is the expected value, and
- <math>\varphi(x)=\varphi_{0,1}(x)=\frac{1}{\sqrt{2\pi\,}} \, e^{-\frac{x^2}{2}},\quad x\in\mathbb{R},</math>
is the density function of the "standard" normal distribution: i.e., the normal distribution with μ = 0 and σ = 1. The integral of <math>\varphi_{\mu,\sigma^2}</math> over the real line is equal to one as shown in the Gaussian integral article.
As a Gaussian function with the denominator of the exponent equal to 2, the standard normal density function <math>\scriptstyle\varphi</math> is an eigenfunction of the Fourier transform.
The probability density function has notable properties including:
- symmetry about its mean μ
- the mode and median both equal the mean μ
- the inflection points of the curve occur one standard deviation away from the mean, i.e. at μ − σ and μ + σ.
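As an illustrative sketch (assuming NumPy and SciPy are available; the parameters mu and sigma below are arbitrary), these properties can be checked numerically:

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 1.5                       # illustrative parameters
x = np.linspace(mu - 5 * sigma, mu + 5 * sigma, 2001)
pdf = norm.pdf(x, loc=mu, scale=sigma)

# Symmetry about the mean: phi(mu + d) == phi(mu - d)
print(norm.pdf(mu + 0.7, mu, sigma), norm.pdf(mu - 0.7, mu, sigma))

# The density integrates to (approximately) one over this wide grid
print(np.sum(pdf) * (x[1] - x[0]))

# Inflection points: the second derivative changes sign near mu - sigma and mu + sigma
d2 = np.gradient(np.gradient(pdf, x), x)
print(x[np.where(np.diff(np.sign(d2)))[0]])   # roughly [0.5, 3.5] for these parameters
```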
Cumulative distribution function
The cumulative distribution function (cdf) of a probability distribution, evaluated at a number (lower-case) x, is the probability of the event that a random variable (capital) X with that distribution is less than or equal to x. The cumulative distribution function of the normal distribution is expressed in terms of the density function as follows:
- <math> \begin{align}
\Phi_{\mu,\sigma^2}(x) &{}=\int_{-\infty}^x\varphi_{\mu,\sigma^2}(u)\,du\\ &{}=\frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^x
\exp \Bigl( -\frac{(u - \mu)^2}{2\sigma^2}
\ \Bigr)\, du \\ &{}= \Phi\Bigl(\frac{x-\mu}{\sigma}\Bigr),\quad x\in\mathbb{R}, \end{align} </math>
where the standard normal cdf, Φ, is just the general cdf evaluated with μ = 0 and σ = 1:
- <math>
\Phi(x) = \Phi_{0,1}(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\Bigl(-\frac{u^2}{2}\Bigr) \, du, \quad x\in\mathbb{R}. </math>
The standard normal cdf can be expressed in terms of a special function called the error function, as
- <math>
\Phi(x) =\frac{1}{2} \Bigl[ 1 + \operatorname{erf} \Bigl( \frac{x}{\sqrt{2}} \Bigr) \Bigr], \quad x\in\mathbb{R}, </math>
and the cdf itself can hence be expressed as
- <math>
\Phi_{\mu,\sigma^2}(x) =\frac{1}{2} \Bigl[ 1 + \operatorname{erf} \Bigl( \frac{x-\mu}{\sigma\sqrt{2}} \Bigr) \Bigr], \quad x\in\mathbb{R}. </math>
The complement of the standard normal cdf, <math>1 - \Phi(x)</math>, is often denoted <math>Q(x)</math>, and is sometimes referred to simply as the Q-function, especially in engineering texts.[2][3] This represents the tail probability of the Gaussian distribution. Other definitions of the Q-function, all of which are simple transformations of <math>\Phi</math>, are also used occasionally.[4]
The inverse standard normal cumulative distribution function, or quantile function, can be expressed in terms of the inverse error function:
- <math>
\Phi^{-1}(p) = \sqrt2 \;\operatorname{erf}^{-1} (2p - 1), \quad p\in(0,1), </math>
and the inverse cumulative distribution function can hence be expressed as
- <math>
\Phi_{\mu,\sigma^2}^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt2 \; \operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). </math>
This quantile function is sometimes called the probit function. There is no elementary primitive for the probit function. This is not to say merely that none is known, but rather that the non-existence of such an elementary primitive has been proved. Several accurate methods exist for approximating the quantile function for the normal distribution - see quantile function for a discussion and references.
The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions.
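For instance, a short sketch (assuming SciPy) of the erf and inverse-erf relations above:

```python
import numpy as np
from scipy.special import erf, erfinv
from scipy.stats import norm

x = np.linspace(-4, 4, 9)
# Standard normal cdf via the error function
Phi = 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
print(np.allclose(Phi, norm.cdf(x)))        # True

# Quantile (probit) function via the inverse error function
p = np.array([0.025, 0.5, 0.975])
q = np.sqrt(2.0) * erfinv(2.0 * p - 1.0)
print(q, np.allclose(q, norm.ppf(p)))       # approximately [-1.95996, 0.0, 1.95996], True
```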
Strict lower and upper bounds for the cdf
For large x the standard normal cdf <math>\scriptstyle\Phi(x)</math> is close to 1 and <math>\scriptstyle\Phi(-x)\,{=}\,1\,{-}\,\Phi(x)</math> is close to 0. The elementary bounds
- <math>
\frac{x}{1+x^2}\varphi(x)<1-\Phi(x)<\frac{\varphi(x)}{x}, \qquad x>0, </math>
in terms of the density <math>\scriptstyle\varphi</math> are useful.
Using the substitution v = u²/2, the upper bound is derived as follows:
- <math>
\begin{align} 1-\Phi(x) &=\int_x^\infty\varphi(u)\,du\\ &<\int_x^\infty\frac ux\varphi(u)\,du =\int_{x^2/2}^\infty\frac{e^{-v}}{x\sqrt{2\pi}}\,dv =-\biggl.\frac{e^{-v}}{x\sqrt{2\pi}}\biggr|_{x^2/2}^\infty =\frac{\varphi(x)}{x}. \end{align} </math>
Similarly, using <math>\scriptstyle\varphi'(u)\,{=}\,-u\,\varphi(u)</math> and the quotient rule,
- <math>
\begin{align} \Bigl(1+\frac1{x^2}\Bigr)(1-\Phi(x)) &=\int_x^\infty \Bigl(1+\frac1{x^2}\Bigr)\varphi(u)\,du\\ &>\int_x^\infty \Bigl(1+\frac1{u^2}\Bigr)\varphi(u)\,du =-\biggl.\frac{\varphi(u)}u\biggr|_x^\infty =\frac{\varphi(x)}x. \end{align} </math>
Solving for <math>\scriptstyle 1\,{-}\,\Phi(x)\,</math> provides the lower bound.
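These bounds are easy to check numerically; a brief sketch assuming SciPy:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(0.5, 6.0, 12)
phi = norm.pdf(x)
tail = norm.sf(x)                     # 1 - Phi(x), the upper-tail probability

lower = x / (1.0 + x**2) * phi
upper = phi / x
print(np.all(lower < tail), np.all(tail < upper))   # True True
```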
Generating functions
Moment generating function
The moment generating function is defined as the expected value of exp(tX). For a normal distribution, the moment generating function is
- <math>
\begin{align} M_X(t) & {} = \mathrm{E} \left[ \exp{(tX)} \right] \\ & {} = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi} } \exp{\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right)} \exp{(tx)} \, dx \\ & {} = \exp{ \left( \mu t + \frac{\sigma^2 t^2}{2} \right)} \end{align} </math>
as can be seen by completing the square in the exponent.
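In more detail, completing the square in the exponent gives

- <math> t x - \frac{(x-\mu)^2}{2\sigma^2} = -\frac{\bigl(x-(\mu+\sigma^2 t)\bigr)^2}{2\sigma^2} + \mu t + \frac{\sigma^2 t^2}{2}, </math>

so the integrand is <math>\exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)</math> times the density <math>\varphi_{\mu+\sigma^2 t,\,\sigma^2}(x)</math>, which integrates to one over the real line.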
Cumulant generating function
The cumulant generating function is the logarithm of the moment generating function: g(t) = μt + σ²t²/2. Since this is a quadratic polynomial in t, only the first two cumulants are nonzero.
Characteristic function
The characteristic function is defined as the expected value of <math>\exp (i t X)</math>, where <math>i</math> is the imaginary unit. So the characteristic function is obtained by replacing <math>t</math> with <math>it</math> in the moment-generating function.
For a normal distribution, the characteristic function is
- <math>\begin{align}
\chi_X(t;\mu,\sigma) &{} = M_X(i t) = \mathrm{E} \left[ \exp(i t X) \right] \\ &{}= \int_{-\infty}^{\infty}
\frac{1}{\sigma \sqrt{2\pi}} \exp \left(- \frac{(x - \mu)^2}{2\sigma^2} \right) \exp(i t x)
\, dx \\ &{}= \exp \left(
i \mu t - \frac{\sigma^2 t^2}{2}
\right). \end{align} </math>
Properties
Some properties of the normal distribution:
- If <math>X \sim N(\mu, \sigma^2)</math> and <math>a</math> and <math>b</math> are real numbers, then <math>a X + b \sim N(a \mu + b, (a \sigma)^2)</math> (see expected value and variance).
- If <math>X \sim N(\mu_X, \sigma^2_X)</math> and <math>Y \sim N(\mu_Y, \sigma^2_Y)</math> are independent normal random variables, then:
- Their sum is normally distributed with <math>U = X + Y \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)</math> (proof). Interestingly, the converse holds: if two independent random variables have a normally-distributed sum, then they must be normal themselves — this is known as Cramér's theorem.
- Their difference is normally distributed with <math>V = X - Y \sim N(\mu_X - \mu_Y, \sigma^2_X + \sigma^2_Y)</math>.
- If the variances of X and Y are equal, then U and V are independent of each other.
- The Kullback-Leibler divergence of X from Y is <math>D_{\rm KL}( X \| Y ) = { 1 \over 2 } \left( \log \left( { \sigma^2_Y \over \sigma^2_X } \right) + \frac{\sigma^2_X}{\sigma^2_Y} + \frac{\left(\mu_Y - \mu_X\right)^2}{\sigma^2_Y} - 1\right). </math>
- If <math>X \sim N(0, \sigma^2_X)</math> and <math>Y \sim N(0, \sigma^2_Y)</math> are independent normal random variables, then:
- Their product <math>X Y</math> follows a distribution with density <math>p</math> given by
- <math>p(z) = \frac{1}{\pi\,\sigma_X\,\sigma_Y} \; K_0\left(\frac{|z|}{\sigma_X\,\sigma_Y}\right),</math> where <math>K_0</math> is a modified Bessel function of the second kind.
- Their ratio follows a Cauchy distribution with <math>X/Y \sim \mathrm{Cauchy}(0, \sigma_X/\sigma_Y)</math>. Thus the Cauchy distribution is a special kind of ratio distribution.
- If <math>X_1, \dots, X_n</math> are independent standard normal variables, then <math>X_1^2 + \cdots + X_n^2</math> has a chi-square distribution with n degrees of freedom.
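As a quick Monte Carlo illustration of the sum property and the chi-square property from the list above, here is a minimal sketch (assuming NumPy; the parameters and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sum of independent normals: X + Y should be N(mu_X + mu_Y, sigma_X^2 + sigma_Y^2)
x = rng.normal(1.0, 2.0, n)
y = rng.normal(-3.0, 1.5, n)
u = x + y
print(u.mean(), u.var())           # roughly -2.0 and 2.0**2 + 1.5**2 = 6.25

# Sum of squares of k standard normals should be chi-square with k degrees of freedom
k = 5
z = rng.standard_normal((n, k))
q = (z**2).sum(axis=1)
print(q.mean(), q.var())           # roughly k and 2k (the chi-square mean and variance)
```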
Standardizing normal random variables
As a consequence of the first property above, it is possible to relate all normal random variables to the standard normal.
If <math>X</math> ~ <math>N(\mu, \sigma^2)</math>, then
- <math>Z = \frac{X - \mu}{\sigma} \!</math>
is a standard normal random variable: <math>Z</math> ~ <math>N(0,1)</math>. An important consequence is that the cdf of a general normal distribution is therefore
- <math>\Pr(X \le x)
= \Phi \left(
\frac{x-\mu}{\sigma}
\right) = \frac{1}{2} \left(
1 + \operatorname{erf} \left( \frac{x-\mu}{\sigma\sqrt{2}} \right)
\right) . </math>
Conversely, if <math>Z</math> is a standard normal distribution, <math>Z</math> ~ <math>N(0,1)</math>, then
- <math>X = \sigma Z + \mu</math>
is a normal random variable with mean <math>\mu</math> and variance <math>\sigma^2</math>.
The standard normal distribution has been tabulated (usually in the form of value of the cumulative distribution function Φ), and the other normal distributions are the simple transformations, as described above, of the standard one. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.
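A minimal sketch of this standardization (assuming SciPy; the numbers are placeholders):

```python
from scipy.stats import norm

mu, sigma, x = 100.0, 15.0, 130.0    # illustrative values
z = (x - mu) / sigma                 # standardize
p_via_standard = norm.cdf(z)         # Phi((x - mu) / sigma)
p_direct = norm.cdf(x, loc=mu, scale=sigma)
print(z, p_via_standard, p_direct)   # z = 2.0; both probabilities agree (about 0.9772)
```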
Moments
The first few moments of the normal distribution are:
Number | Raw moment | Central moment | Cumulant |
---|---|---|---|
0 | 1 | 1 | |
1 | <math>\mu</math> | 0 | <math>\mu</math> |
2 | <math>\mu^2 + \sigma^2</math> | <math>\sigma^2</math> | <math>\sigma^2</math> |
3 | <math>\mu^3 + 3\mu\sigma^2</math> | 0 | 0 |
4 | <math>\mu^4 + 6 \mu^2 \sigma^2 + 3 \sigma^4</math> | <math>3 \sigma^4</math> | 0 |
5 | <math>\mu^5 + 10 \mu^3 \sigma^2 + 15 \mu \sigma^4</math> | 0 | 0 |
6 | <math>\mu^6 + 15 \mu^4 \sigma^2 + 45 \mu^2 \sigma^4 + 15 \sigma^6 </math> | <math> 15 \sigma^6 </math> | 0 |
7 | <math>\mu^7 + 21 \mu^5 \sigma^2 + 105 \mu^3 \sigma^4 + 105 \mu \sigma^6 </math> | 0 | 0 |
8 | <math>\mu^8 + 28 \mu^6 \sigma^2 + 210 \mu^4 \sigma^4 + 420 \mu^2 \sigma^6 + 105 \sigma^8 </math> | <math> 105 \sigma^8 </math> | 0 |
All cumulants of the normal distribution beyond the second are zero.
Higher central moments (of order <math>2k</math> with <math>\mu=0</math>) can be obtained using the formula
- <math> \operatorname{E}\left[X^{2k}\right]=\frac{(2k)!}{2^k k!} \sigma^{2k}. </math>
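As a rough Monte Carlo check of this formula (a sketch assuming NumPy; for k = 2 it reduces to E[X⁴] = 3σ⁴):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
sigma, k = 1.7, 2
x = rng.normal(0.0, sigma, 1_000_000)

empirical = np.mean(x ** (2 * k))
exact = math.factorial(2 * k) / (2**k * math.factorial(k)) * sigma ** (2 * k)
print(empirical, exact)   # both close to 3 * sigma**4 (about 25.06 here)
```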
Generating values for normal random variables
For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods and the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the Box-Muller transform. An even faster algorithm is the ziggurat algorithm.
The Box-Muller algorithm says that if a and b are two independent numbers uniformly distributed on (0, 1] (e.g. the output from a random number generator), then c and d, defined as follows, are two independent standard normally distributed random variables:
- <math>c = \sqrt{- 2 \ln a} \cdot \cos(2 \pi b) </math>
- <math>d = \sqrt{- 2 \ln a} \cdot \sin(2 \pi b) </math>
This works because c² + d² = −2 ln a has a chi-square distribution with two degrees of freedom (see the last property listed above), which is an easily generated exponential random variable.
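A minimal implementation sketch of this transform (assuming NumPy; the function name, seed and sample size are chosen here only for illustration):

```python
import numpy as np

def box_muller(n, rng):
    """Return n standard normal deviates using the Box-Muller transform."""
    m = (n + 1) // 2
    a = 1.0 - rng.random(m)          # uniform on (0, 1], so log(a) is finite
    b = rng.random(m)                # uniform on [0, 1)
    r = np.sqrt(-2.0 * np.log(a))
    c = r * np.cos(2.0 * np.pi * b)
    d = r * np.sin(2.0 * np.pi * b)
    return np.concatenate([c, d])[:n]

rng = np.random.default_rng(42)
z = box_muller(100_000, rng)
print(z.mean(), z.std())             # roughly 0 and 1
```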
The central limit theorem
Under certain conditions (such as being independent and identically-distributed with finite variance), the sum of a large number of random variables is approximately normally distributed — this is the central limit theorem.
The practical importance of the central limit theorem is that the normal cumulative distribution function can be used as an approximation to some other cumulative distribution functions, for example:
- A binomial distribution with parameters n and p is approximately normal for large n and p not too close to 1 or 0 (some books recommend using this approximation only if np and n(1 − p) are both at least 5; in this case, a continuity correction should be applied).
The approximating normal distribution has parameters μ = np, σ² = np(1 − p).
- A Poisson distribution with parameter λ is approximately normal for large λ.
The approximating normal distribution has parameters μ = σ² = λ.
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound of the approximation error of the cumulative distribution function is given by the Berry–Esséen theorem.
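As a numerical illustration of the binomial approximation above, a small sketch assuming SciPy; the parameters n = 40 and p = 0.3 are arbitrary and satisfy the np ≥ 5 and n(1 − p) ≥ 5 rule of thumb:

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 40, 0.3                       # illustrative values; np = 12, n(1 - p) = 28
mu, sigma = n * p, np.sqrt(n * p * (1 - p))

k = np.arange(5, 21)
exact = binom.cdf(k, n, p)
approx = norm.cdf(k + 0.5, loc=mu, scale=sigma)   # continuity correction of +0.5
print(np.max(np.abs(exact - approx)))             # small for these parameters
```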
Infinite divisibility
The normal distributions are infinitely divisible probability distributions: Given a mean μ, a variance σ² ≥ 0, and a natural number n, the sum X1 + ... + Xn of n independent random variables
- <math>X_1,X_2,\dots,X_n \sim N(\mu/n, \sigma^2\!/n)\,</math>
has this specified normal distribution (to verify this, use characteristic functions or convolution and mathematical induction).
Stability
The normal distributions are strictly stable probability distributions.
Standard deviation and confidence intervals
About 68% of values drawn from a normal distribution lie within one standard deviation σ of the mean μ; about 95% of the values lie within two standard deviations, and about 99.7% within three standard deviations. This is known as the "68-95-99.7 rule" or the "empirical rule."
To be more precise, the area under the bell curve between μ − nσ and μ + nσ in terms of the cumulative normal distribution function is given by
- <math>\begin{align}&\Phi_{\mu,\sigma^2}(\mu+n\sigma)-\Phi_{\mu,\sigma^2}(\mu-n\sigma)\\
&=\Phi(n)-\Phi(-n)=2\Phi(n)-1=\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr),\end{align}</math>
where erf is the error function. To 12 decimal places, the values for the 1-, 2-, up to 6-sigma points are:
<math>n\,</math> | <math>\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr)\,</math> |
---|---|
1 | 0.682689492137 |
2 | 0.954499736104 |
3 | 0.997300203937 |
4 | 0.999936657516 |
5 | 0.999999426697 |
6 | 0.999999998027 |
The next table gives the reverse relation of sigma multiples corresponding to a few often used values for the area under the bell curve. These values are useful to determine (asymptotic) confidence intervals of the specified levels based on normally distributed (or asymptotically normal) estimators:
<math>\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr)</math> | <math>n\,</math> |
---|---|
0.80 | 1.28155 |
0.90 | 1.64485 |
0.95 | 1.95996 |
0.98 | 2.32635 |
0.99 | 2.57583 |
0.995 | 2.80703 |
0.998 | 3.09023 |
0.999 | 3.29052 |
where the value on the left of the table is the proportion of values that will fall within a given interval and n is a multiple of the standard deviation that specifies the width of the interval.
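These multiples can be reproduced from the quantile function, since a central interval of probability p corresponds to n = Φ⁻¹((1 + p)/2); a brief sketch assuming SciPy:

```python
from scipy.stats import norm

for level in (0.80, 0.90, 0.95, 0.98, 0.99, 0.995, 0.998, 0.999):
    # n such that the central interval mu +/- n*sigma contains `level` of the mass
    n = norm.ppf((1.0 + level) / 2.0)
    print(f"{level:6.3f}  {n:.5f}")
# e.g. 0.950 -> 1.95996 and 0.990 -> 2.57583, matching the table above
```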
Exponential family form
The normal distribution is a two-parameter exponential family with natural parameters μ and 1/σ², and natural statistics X and X². The canonical form has parameters <math>{\mu \over \sigma^2}</math> and <math>{1 \over \sigma^2}</math> and sufficient statistics <math>\sum x </math> and <math>-{1 \over 2} \sum x^2 </math>.
Complex Gaussian process
Consider a complex Gaussian random variable,
- <math>
Z=X+iY\, </math>
where X and Y are real and independent Gaussian variables with equal variances <math>\scriptstyle \sigma_r^2\,</math>. The pdf of the joint variables is then
- <math>
\frac{1}{2\,\pi\,\sigma_r^2} e^{-(x^2+y^2)/(2 \sigma_r ^2)} </math>
Because <math>\scriptstyle \sigma_z\, =\, \sqrt{2}\sigma_r</math>, the resulting pdf for the complex Gaussian variable Z is
- <math>
\frac{1}{\pi\,\sigma_z^2} e^{-|z|^2/\sigma_z^2}. </math>
Related distributions
- <math>R \sim \mathrm{Rayleigh}(\sigma^2)</math> is a Rayleigh distribution if <math>R = \sqrt{X^2 + Y^2}</math> where <math>X \sim N(0, \sigma^2)</math> and <math>Y \sim N(0, \sigma^2)</math> are two independent normal distributions.
- <math>Y \sim \chi_{\nu}^2</math> is a chi-square distribution with <math>\nu</math> degrees of freedom if <math>Y = \sum_{k=1}^{\nu} X_k^2</math> where <math>X_k \sim N(0,1)</math> for <math>k=1,\dots,\nu</math> and are independent.
- <math>Y \sim \mathrm{Cauchy}(\mu = 0, \theta = 1)</math> is a Cauchy distribution if <math>Y = X_1/X_2</math> for <math>X_1 \sim N(0,1)</math> and <math>X_2 \sim N(0,1)</math> are two independent normal distributions.
- <math>Y \sim \mbox{Log-N}(\mu, \sigma^2)</math> is a log-normal distribution if <math>Y = e^X</math> and <math>X \sim N(\mu, \sigma^2)</math>.
- Relation to Lévy skew alpha-stable distribution: if <math>X\sim \textrm{Levy-S}\alpha\textrm{S}(2,\beta,\sigma/\sqrt{2},\mu)</math> then <math>X \sim N(\mu,\sigma^2)</math>.
- Truncated normal distribution. If <math>X \sim N(\mu, \sigma^2),\!</math> then truncating X below at <math>A</math> and above at <math>B</math> will lead to a random variable with mean <math>E(X)=\mu + \frac{\sigma(\varphi_1-\varphi_2)}{T},\!</math> where <math>T=\Phi\left(\frac{B-\mu}{\sigma}\right)-\Phi\left(\frac{A-\mu}{\sigma}\right), \; \varphi_1 = \varphi\left(\frac{A-\mu}{\sigma}\right), \; \varphi_2 = \varphi\left(\frac{B-\mu}{\sigma}\right)</math> and <math>\varphi</math> is the probability density function of a standard normal random variable.
- If <math>X</math> is a random variable with a normal distribution, and <math>Y=|X|</math>, then <math>Y</math> has a folded normal distribution.
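As an illustrative check of two of these relationships (the Rayleigh and log-normal ones), a small sketch assuming SciPy; the parameter sigma = 2.0 is arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sigma = 2.0
x = rng.normal(0.0, sigma, 100_000)
y = rng.normal(0.0, sigma, 100_000)

# sqrt(X^2 + Y^2) should follow a Rayleigh distribution with scale sigma
r = np.hypot(x, y)
print(stats.kstest(r, 'rayleigh', args=(0, sigma)).pvalue)       # not small: consistent with Rayleigh

# exp(X) should follow a log-normal distribution with shape parameter sigma
print(stats.kstest(np.exp(x), 'lognorm', args=(sigma,)).pvalue)  # not small: consistent with log-normal
```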
Descriptive and inferential statistics
Scores
Many scores are derived from the normal distribution, including percentile ranks ("percentiles"), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, a number of behavioral statistical procedures are based on the assumption that scores are normally distributed; for example, t-tests and ANOVAs (see below). Bell curve grading assigns relative grades based on a normal distribution of scores.
Normality tests
Normality tests check a given set of data for similarity to the normal distribution. The null hypothesis is that the data come from a normal distribution, so a sufficiently small p-value indicates non-normal data. Commonly used tests include the following (a brief code sketch follows the list):
- Kolmogorov-Smirnov test
- Lilliefors test
- Anderson-Darling test
- Ryan-Joiner test
- Shapiro-Wilk test
- Normal probability plot (rankit plot)
- Jarque-Bera test
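Several of these tests are implemented in common statistics libraries. As a minimal sketch (assuming SciPy; the simulated data sets are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_data = rng.normal(10.0, 2.0, 500)
skewed_data = rng.exponential(1.0, 500)

print(stats.shapiro(normal_data).pvalue)    # large p-value: no evidence against normality
print(stats.shapiro(skewed_data).pvalue)    # tiny p-value: normality rejected

# Kolmogorov-Smirnov test against a fitted normal
# (when parameters are estimated from the data, Lilliefors-adjusted critical values are needed)
mu, sigma = normal_data.mean(), normal_data.std(ddof=1)
print(stats.kstest(normal_data, 'norm', args=(mu, sigma)).pvalue)
```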
Estimation of parameters
Maximum likelihood estimation of parameters
Suppose
- <math>X_1,\dots,X_n</math>
are independent and each is normally distributed with expectation μ and variance σ² > 0. In the language of statisticians, the observed values of these n random variables make up a "sample of size n from a normally distributed population." It is desired to estimate the "population mean" μ and the "population standard deviation" σ, based on the observed values of this sample. The continuous joint probability density function of these n independent random variables is
- <math>\begin{align}f(x_1,\dots,x_n;\mu,\sigma)
&= \prod_{i=1}^n \varphi_{\mu,\sigma^2}(x_i)\\ &=\frac1{(\sigma\sqrt{2\pi})^n}\prod_{i=1}^n \exp\biggl(-{1 \over 2} \Bigl({x_i-\mu \over \sigma}\Bigr)^2\biggr), \quad(x_1,\ldots,x_n)\in\mathbb{R}^n. \end{align} </math>
As a function of μ and σ, the likelihood function based on the observations X1, ..., Xn is
- <math>
L(\mu,\sigma) = \frac C{\sigma^n} \exp\left(-{\sum_{i=1}^n (X_i-\mu)^2 \over 2\sigma^2}\right), \quad\mu\in\mathbb{R},\ \sigma>0, </math>
with some constant C > 0 (which in general may even be allowed to depend on X1, ..., Xn, but will vanish anyway when partial derivatives of the log-likelihood function with respect to the parameters are computed; see below).
In the method of maximum likelihood, the values of μ and σ that maximize the likelihood function are taken as estimates of the population parameters μ and σ.
Usually in maximizing a function of two variables, one might consider partial derivatives. But here we will exploit the fact that the value of μ that maximizes the likelihood function with σ fixed does not depend on σ. Therefore, we can find that value of μ, then substitute it for μ in the likelihood function, and finally find the value of σ that maximizes the resulting expression.
It is evident that the likelihood function is a decreasing function of the sum
- <math>\sum_{i=1}^n (X_i-\mu)^2. \,\!</math>
So we want the value of μ that minimizes this sum. Let
- <math>\overline{X}_n=(X_1+\cdots+X_n)/n</math>
be the "sample mean" based on the n observations. Observe that
- <math>
\begin{align} \sum_{i=1}^n (X_i-\mu)^2 &=\sum_{i=1}^n\bigl((X_i-\overline{X}_n)+(\overline{X}_n-\mu)\bigr)^2\\ &=\sum_{i=1}^n(X_i-\overline{X}_n)^2 + 2(\overline{X}_n-\mu)\underbrace{\sum_{i=1}^n (X_i-\overline{X}_n)}_{=\,0} + \sum_{i=1}^n (\overline{X}_n-\mu)^2\\ &=\sum_{i=1}^n(X_i-\overline{X}_n)^2 + n(\overline{X}_n-\mu)^2. \end{align} </math>
Only the last term depends on μ and it is minimized by
- <math>\widehat{\mu}_n=\overline{X}_n.</math>
That is the maximum-likelihood estimate of μ based on the n observations X1, ..., Xn. When we substitute that estimate for μ into the likelihood function, we get
- <math>L(\overline{X}_n,\sigma) = \frac C{\sigma^n} \exp\biggl(-{\sum_{i=1}^n (X_i-\overline{X}_n)^2 \over 2\sigma^2}\biggr),
\quad\sigma>0.</math>
It is conventional to denote the "log-likelihood function", i.e., the logarithm of the likelihood function, by a lower-case <math>\ell</math>, and we have
- <math>\ell(\overline{X}_n,\sigma)=\log C-n\log\sigma-{\sum_{i=1}^n(X_i-\overline{X}_n)^2 \over 2\sigma^2},
\quad\sigma>0,</math>
and then
- <math>
\begin{align} {\partial \over \partial\sigma}\ell(\overline{X}_n,\sigma) &=-{n \over \sigma} +{\sum_{i=1}^n (X_i-\overline{X}_n)^2 \over \sigma^3}\\ &=-{n \over \sigma^3}\biggl(\sigma^2-{1 \over n}\sum_{i=1}^n (X_i-\overline{X}_n)^2 \biggr), \quad\sigma>0. \end{align} </math>
This derivative is positive, zero, or negative according as σ² is between 0 and
- <math>\hat\sigma_n^2:={1 \over n}\sum_{i=1}^n(X_i-\overline{X}_n)^2,</math>
or equal to that quantity, or greater than that quantity. (If there is just one observation, meaning that n = 1, or if X1 = ... = Xn, which only happens with probability zero, then <math>\hat\sigma{}_n^2=0</math> by this formula, reflecting the fact that in these cases the likelihood function is unbounded as σ decreases to zero.)
Consequently this average of squares of residuals is the maximum-likelihood estimate of σ², and its square root is the maximum-likelihood estimate of σ based on the n observations. This estimator <math>\hat\sigma{}_n^2</math> is biased, but has a smaller mean squared error than the usual unbiased estimator, which is n/(n − 1) times this estimator.
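A minimal numerical sketch of these maximum-likelihood estimates (assuming NumPy; the true parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
mu_true, sigma_true, n = 5.0, 2.0, 10_000
x = rng.normal(mu_true, sigma_true, n)

mu_hat = x.mean()                         # maximum-likelihood estimate of mu (the sample mean)
sigma2_hat = np.mean((x - mu_hat)**2)     # maximum-likelihood estimate of sigma^2 (divides by n)
print(mu_hat, np.sqrt(sigma2_hat))        # close to 5.0 and 2.0
```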
Surprising generalization
The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is subtle. It involves the spectral theorem and the reason it can be better to view a scalar as the trace of a 1×1 matrix than as a mere scalar. See estimation of covariance matrices.
Unbiased estimation of parameters
The maximum likelihood estimator of the population mean <math>\mu</math> from a sample is an unbiased estimator of the mean, as is the variance when the mean of the population is known a priori. However, if we are faced with a sample and have no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance <math>\sigma^2</math> is:
- <math>
S^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \overline{X})^2. </math>
This "sample variance" follows a Gamma distribution if all Xi are independent and identically-distributed:
- <math>
S^2 \sim \operatorname{Gamma}\left(\frac{n-1}{2},\frac{2 \sigma^2}{n-1}\right), </math>
with mean <math>\operatorname{E}(S^2)=\sigma^2</math> and variance <math>\operatorname{Var}(S^2)=2\sigma^4/(n-1)</math>.
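In code, the unbiased estimator corresponds to dividing by n − 1 rather than n; a short sketch assuming NumPy (ddof=1 selects the n − 1 divisor):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 3.0, 30)

s2_unbiased = x.var(ddof=1)   # divides by n - 1
s2_mle = x.var(ddof=0)        # divides by n (the biased maximum-likelihood estimate)
print(s2_unbiased, s2_mle)    # s2_mle = (n - 1)/n * s2_unbiased
```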
Occurrence
Approximately normal distributions occur in many situations, as explained by the central limit theorem. When there is reason to suspect the presence of a large number of small effects acting additively and independently, it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the Kolmogorov-Smirnov test.
Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called log-normal.
Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below).
To summarize, here is a list of situations where approximate normality is sometimes assumed. For a fuller discussion, see below.
- In counting problems (so the central limit theorem includes a discrete-to-continuum approximation) where reproductive random variables are involved, such as
- Binomial random variables, associated to yes/no questions;
- Poisson random variables, associated to rare events;
- In physiological measurements of biological specimens:
- The logarithm of measures of size of living tissue (length, height, skin area, weight);
- The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
- Other physiological measures may be normally distributed, but there is no reason to expect that a priori;
- Measurement errors are often assumed to be normally distributed, and any deviation from normality is considered something which should be explained;
- Financial variables
- Changes in the logarithm of exchange rates, price indices, and stock market indices; these variables behave like compound interest, not like simple interest, and so are multiplicative;
- Other financial variables may be normally distributed, but there is no reason to expect that a priori;
- Light intensity
- The intensity of laser light is normally distributed;
- Thermal light has a Bose-Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.
Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.
Photon counting
Light intensity from a single source varies with time, as thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. Quantum mechanics interprets measurements of light intensity as photon counting. The natural assumption in this setting is the Poisson distribution. When light intensity is integrated over times longer than the coherence time and is large, the Poisson-to-normal limit is appropriate.
Measurement errors
Normality is the central assumption of the mathematical theory of errors. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the residuals (as the errors are called in that setting) be independent and normally distributed. The assumption is that any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected. However, if the original data are not normally distributed (for instance if they follow a Cauchy distribution), then the residuals will also not be normally distributed. This fact is usually ignored in practice.
Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account. Whether this assumption is valid is debatable. A famous and oft-quoted remark attributed to Gabriel Lippmann says: "Everyone believes in the [normal] law of errors: the mathematicians, because they think it is an experimental fact; and the experimenters, because they suppose it is a theorem of mathematics."
Physical characteristics of biological specimens
The sizes of full-grown animals are approximately lognormal. The evidence and an explanation based on models of growth were first published in the 1932 book Problems of Relative Growth by Julian Huxley.
Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the distribution of sizes deviate from lognormality.
The assumption that linear size of biological specimens is normal (rather than lognormal) leads to a non-normal distribution of weight (since weight or volume is roughly proportional to the 2nd or 3rd power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length, or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers so the "problem" goes away if lognormality is assumed.
On the other hand, there are some biological measures where normality is assumed, such as blood pressure of adult humans. This is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed).
Financial variables
Already in 1900 Louis Bachelier proposed representing price changes of stocks using the normal distribution. This approach has since been modified slightly. Because of the exponential nature of inflation, financial indicators such as stock values and commodity prices exhibit "multiplicative behavior". As such, their periodic changes (e.g., yearly changes) are not normal, but rather lognormal - i.e. returns as opposed to values are normally distributed. This is still the most commonly used hypothesis in finance, in particular in asset pricing. Corrections to this model seem to be necessary, as has been pointed out for instance by Benoît Mandelbrot, the popularizer of fractals, who observed that the changes in logarithm over short periods (such as a day) are approximated well by distributions that do not have a finite variance, and therefore the central limit theorem does not apply. Rather, the sum of many such changes gives log-Levy distributions.
Distribution in testing and intelligence
Sometimes, the difficulty and number of questions on an IQ test are selected in order to yield normally distributed results. Alternatively, the raw test scores are converted to IQ values by fitting them to the normal distribution. In either case, it is the deliberate result of test construction or score interpretation that leads to IQ scores being normally distributed for the majority of the population. However, the question of whether intelligence itself is normally distributed is more involved, because intelligence is a latent variable whose distribution cannot be observed directly.
Diffusion equation
The probability density function of the normal distribution is closely related to the (homogeneous and isotropic) diffusion equation and therefore also to the heat equation. This partial differential equation describes the time evolution of a mass-density function under diffusion. In particular, the probability density function
- <math>\varphi_{0,t}(x) = \frac{1}{\sqrt{2\pi t\,}}\exp\left(-\frac{x^2}{2t}\right), </math>
for the normal distribution with expected value 0 and variance t satisfies the diffusion equation:
- <math> \frac{\partial}{\partial t} \varphi_{0,t}(x) = \frac{1}{2} \frac{\partial^2}{\partial x^2} \varphi_{0,t}(x). </math>
If the mass-density at time t = 0 is given by a Dirac delta, which essentially means that all mass is initially concentrated in a single point, then the mass-density function at time t will have the form of the normal probability density function with variance linearly growing with t. This connection is no coincidence: diffusion is due to Brownian motion which is mathematically described by a Wiener process, and such a process at time t will also result in a normal distribution with variance linearly growing with t.
More generally, if the initial mass-density is given by a function φ(x), then the mass-density at time t will be given by the convolution of φ and a normal probability density function.
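The diffusion-equation property stated above can be checked symbolically; a brief sketch assuming SymPy is available:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
phi = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)   # N(0, t) density

lhs = sp.diff(phi, t)                          # time derivative
rhs = sp.Rational(1, 2) * sp.diff(phi, x, 2)   # half the second space derivative
print(sp.simplify(lhs - rhs))                  # 0, so the diffusion equation holds
```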
Numerical approximations of the normal distribution and its cdf
The normal distribution is widely used in scientific and statistical computing. Therefore, it has been implemented in various ways.
The GNU Scientific Library calculates values of the standard normal cdf using piecewise approximations by rational functions. Another approximation method uses third-degree polynomials on intervals. The article on the bc programming language gives an example of how to compute the cdf in GNU bc.
Generation of deviates from the unit normal is normally done using the Box-Muller method: an angle is chosen uniformly, the squared radius is drawn from an exponential distribution, and the pair is then transformed into (normally distributed) x and y coordinates. If log, cos or sin are expensive, a simple alternative is to sum 12 uniform (0,1) deviates and subtract 6 (half of 12). This is quite usable in many applications. The sum over 12 values is chosen because it gives a variance of exactly one. The result is limited to the range (−6, 6) and has a density which is a 12-section eleventh-order polynomial approximation to the normal distribution [5].
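A brief sketch of this crude sum-of-12-uniforms alternative (assuming NumPy; not recommended where accuracy in the tails matters):

```python
import numpy as np

rng = np.random.default_rng(11)
# Sum of 12 uniform(0, 1) deviates minus 6: mean 0, variance 12 * (1/12) = 1
z = rng.random((100_000, 12)).sum(axis=1) - 6.0
print(z.mean(), z.var())      # roughly 0 and 1
print(z.min(), z.max())       # never outside (-6, 6)
```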
A method that is much faster than the Box-Muller transform but which is still exact is the so-called ziggurat algorithm developed by George Marsaglia. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in the roughly 3% of cases where the combination of those two falls outside the "core of the ziggurat" must a kind of rejection sampling using logarithms, exponentials and more uniform random numbers be employed.
There is also some investigation into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and, by the central limit theorem, random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
In Microsoft Excel the function NORMSDIST() calculates the cdf of the standard normal distribution, and NORMSINV() calculates its inverse function. Therefore, NORMSINV(RAND()) is an accurate but slow way of generating values from the standard normal distribution, using the principle of Inverse transform sampling.
See also
- A typical normal distribution table
- Behrens-Fisher problem
- Bell curve grading
- Data transformation (statistics) - simple techniques to transform data into normal distribution
- Erdős-Kac theorem, on the occurrence of the normal distribution in number theory
- Gaussian blur, convolution using the normal distribution as a kernel
- Gaussian function
- Gaussian process
- Iannis Xenakis, Gaussian distribution in music.
- Inverse Gaussian distribution
- Lognormal distribution
- Multivariate normal distribution
- Matrix normal distribution
- Normal-gamma distribution
- Normally distributed and uncorrelated does not imply independent (an example of two normally distributed uncorrelated random variables that are not independent; this cannot happen in the presence of joint normality)
- Probit function
- Sample size
- Skew normal distribution
- Student's t-distribution
- Tweedie distributions
Notes
- ↑ Havil, 2003
- ↑ The Q-function
- ↑ http://www.eng.tau.ac.il/~jo/academic/Q.pdf
- ↑ Normal Distribution Function - from Wolfram MathWorld
- ↑ Johnson NL, Kotz S, Balakrishnan N. (1995) Continuous Univariate Distributions Volume 2, Wiley. Equation(26.48)
References
- John Aldrich. Earliest Uses of Symbols in Probability and Statistics. Electronic document, retrieved March 20, 2005. (See "Symbols associated with the Normal Distribution".)
- Abraham de Moivre (1738). The Doctrine of Chances.
- Stephen Jay Gould (1981). The Mismeasure of Man. First edition. W. W. Norton. ISBN 0-393-01489-4 .
- Havil, 2003. Gamma, Exploring Euler's Constant, Princeton, NJ: Princeton University Press, p. 157.
- R. J. Herrnstein and Charles Murray (1994). The Bell Curve: Intelligence and Class Structure in American Life. Free Press. ISBN 0-02-914673-9 .
- Pierre-Simon Laplace (1812). Analytical Theory of Probabilities.
- Jeff Miller, John Aldrich, et al. Earliest Known Uses of Some of the Words of Mathematics. In particular, the entries for "bell-shaped and bell curve", "normal" (distribution), "Gaussian", and "Error, law of error, theory of errors, etc.". Electronic documents, retrieved December 13, 2005.
- S. M. Stigler (1999). Statistics on the Table, chapter 22. Harvard University Press. (History of the term "normal distribution".)
- Eric W. Weisstein et al. Normal Distribution at MathWorld. Electronic document, retrieved March 20, 2005.
- Marvin Zelen and Norman C. Severo (1964). Probability Functions. Chapter 26 of Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, ed, by Milton Abramowitz and Irene A. Stegun. National Bureau of Standards.
External links
The Normal distribution
- Mathworld: Normal Distribution
- GNU Scientific Library – Reference Manual – The Gaussian Distribution
- PlanetMath: normal random variable
- Intuitive derivation.
- Is normal distribution due to Karl Gauss? Euler, his family of gamma functions, and place in history of statistics
- Maxwell demons: Simulating probability distributions with functions of propositional calculus
Online results and applications
- Normal distribution table
- Public Domain Normal Distribution Table
- Distribution Calculator – Calculates probabilities and critical values for normal, t, chi-square and F-distribution.
- Java Applet on Normal Distributions
- Interactive Distribution Modeler (incl. Normal Distribution).
- Free Area Under the Normal Curve Calculator from Daniel Soper's Free Statistics Calculators website.
- Interactive Graph of the Standard Normal Curve Quickly Visualize the one and two-tailed area of the Standard Normal Curve
Algorithms and approximations
- Calculating the Cumulative Normal distribution, C++, VBA, sitmo.com
- An algorithm for computing the inverse normal cumulative distribution function by Peter J. Acklam – has examples for several programming languages
- An Approximation to the Inverse Normal(0, 1) Distribution, gatech.edu
- Handbook of Mathematical Functions: Polynomial and Rational Approximations for P(x) and Z(x), Abramowitz and Stegun