Normality test

In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution, and to compute how likely it is that an underlying random variable is normally distributed.

More precisely, they are a form of model selection, and can be interpreted several ways, depending on one's interpretations of probability:

  • In descriptive statistics terms, one measures a goodness of fit of a normal model to the data – if the fit is poor then the data is not well modeled in that respect by a normal distribution, without making a judgment on any underlying variable.
  • In frequentist statistical hypothesis testing, one tests the data against the null hypothesis that they are normally distributed.
  • In Bayesian statistics, one does not "test normality" per se, but rather computes the likelihood that the data come from a normal distribution with given parameters μ,σ (for all μ,σ), and compares that with the likelihood that the data come from other distributions under consideration. The simplest approach uses Bayes factors (giving the relative likelihood of seeing the data under different models); more finely, one takes a prior distribution on possible models and parameters and computes a posterior distribution given the computed likelihoods.

Graphical methods

An informal approach to testing normality is to compare a histogram of the residuals to a normal probability curve. The actual distribution of the residuals (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small. In this case one might proceed by regressing the residuals against the corresponding quantiles of a normal distribution with the same mean and variance as the sample. If the regression produces an approximately straight line, then the residuals can reasonably be taken to be normally distributed.
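
A minimal Python sketch of this comparison, assuming the residuals are held in a NumPy array (synthetic data stand in for real residuals here):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(0)
    residuals = rng.normal(size=200)  # placeholder residuals; substitute real ones

    # Histogram scaled to a density so the normal curve is directly comparable
    plt.hist(residuals, bins=20, density=True, alpha=0.6, label="residuals")

    # Normal density with the same mean and sample standard deviation
    x = np.linspace(residuals.min(), residuals.max(), 200)
    plt.plot(x, stats.norm.pdf(x, residuals.mean(), residuals.std(ddof=1)),
             label="normal fit")
    plt.legend()
    plt.show()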

A more formal graphical tool is the normal probability plot, a quantile-quantile plot against the standard normal distribution. Here the correlation coefficient of the data (the goodness of fit of the best fit line) gives a measure of how well the data is modeled by a normal distribution. These also have the benefit that outliers stick out, and that they can be used for communication with non-statisticians more easily than numbers.
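
SciPy's probplot draws such a plot and reports the correlation coefficient of the best-fit line; a minimal sketch with placeholder data:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=100)  # placeholder sample

    # probplot plots the ordered data against normal quantiles and fits a line;
    # r near 1 indicates the data are well modeled by a normal distribution
    (osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm", plot=plt)
    print(f"Q-Q correlation coefficient: r = {r:.4f}")
    plt.show()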

Back-of-the-envelope test

A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (number of sample standard deviations that an observation lies above or below the sample mean), and compares it to the 68–95–99.7 rule: if one has a 3σ event (properly, a 3s event) and significantly fewer than 300 samples, or a 4s event and significantly fewer than 15,000 samples, then a normal distribution significantly understates the maximum magnitude of deviations in the sample data.

This test is useful in cases where one faces kurtosis risk – where large deviations matter – and has the benefits that it is very easy to compute and to communicate: non-statisticians can easily grasp that "6σ events don’t happen in normal distributions".
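
A minimal sketch of this heuristic in Python, using the z-score rather than an exact small-sample t correction (the sample is a placeholder):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)  # placeholder sample

    n = len(x)
    s = x.std(ddof=1)
    # Most extreme observation, in units of sample standard deviations
    extreme = max(x.max() - x.mean(), x.mean() - x.min()) / s

    # Under normality, a deviation this large or larger has two-sided tail
    # probability 2*Phi(-extreme), i.e. about one occurrence per 1/p samples
    p = 2 * stats.norm.sf(extreme)
    print(f"most extreme observation: {extreme:.2f}s from the mean")
    print(f"under normality, expect about one per {1/p:,.0f} samples (n = {n})")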

Frequentist tests

Normality tests include D'Agostino's K-squared test, the Jarque–Bera test, the Anderson–Darling test, the Cramér–von Mises criterion, the Lilliefors test for normality (itself an adaptation of the Kolmogorov–Smirnov test), the Shapiro–Wilk test, Pearson's chi-square test, and the Shapiro–Francia test for normality.

Historically, the third and fourth standardized moments (skewness and kurtosis) were some of the earliest tests for normality; other early test statistics include the ratio of the mean absolute deviation to the standard deviation and of the range to the standard deviation.
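
Several of these tests have standard implementations in SciPy; a brief sketch with a synthetic sample (the Lilliefors variant, with its corrected critical values, lives in statsmodels rather than SciPy):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=300)  # placeholder sample

    print(stats.shapiro(data))                 # Shapiro-Wilk
    print(stats.normaltest(data))              # D'Agostino's K-squared (skewness + kurtosis)
    print(stats.jarque_bera(data))             # Jarque-Bera
    print(stats.anderson(data, dist="norm"))   # Anderson-Darling, with critical values
    print(stats.skewtest(data))                # moment-based tests, as used historically
    print(stats.kurtosistest(data))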

Bayesian tests

Kullback–Leibler distances between the whole posterior distributions of the slope and variance do not indicate non-normality. However, the ratio of expectations of these posteriors and the expectation of the ratios give similar results to the Shapiro–Wilk statistic except for very small samples, when non-informative priors are used.

Spiegelhalter suggests using Bayes factors to compare normality with a different class of distributional alternatives. This approach has been extended by Farrell and Rogers-Stewart.
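
The following Python sketch is not Spiegelhalter's statistic; it only illustrates the Bayes-factor idea by comparing a normal model against one alternative (a Laplace distribution), using maximized likelihoods as a crude approximation on synthetic data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=100)  # placeholder sample

    # Maximized log-likelihood under a normal model (MLE mean and std)
    ll_norm = stats.norm.logpdf(data, data.mean(), data.std()).sum()

    # Maximized log-likelihood under a Laplace model
    # (MLE: location = median, scale = mean absolute deviation from the median)
    med = np.median(data)
    ll_lap = stats.laplace.logpdf(data, med, np.abs(data - med).mean()).sum()

    # Both models have two parameters, so the BIC penalties cancel and the
    # log Bayes factor is roughly the difference in maximized log-likelihoods;
    # positive values favor the normal model
    log_bf = ll_norm - ll_lap
    print(f"approximate log Bayes factor (normal vs. Laplace): {log_bf:.2f}")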

Applications

One application of normality tests is to the residuals from a linear regression model. If they are not normally distributed, the residuals should not be used in Z tests or in any other tests derived from the normal distribution, such as t tests, F tests and chi-square tests. If the residuals are not normally distributed, then the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing, etc. Correcting one or more of these systematic errors may produce residuals that are normally distributed.
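
An end-to-end sketch: fit a simple linear regression and apply the Shapiro–Wilk test to its residuals (all data below are synthetic, and the choice of test is illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=100)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)  # synthetic linear model

    res = stats.linregress(x, y)
    residuals = y - (res.intercept + res.slope * x)

    # A small p-value is evidence against normal residuals, in which case
    # normal-theory t and F tests on the coefficients may be unreliable
    w, p = stats.shapiro(residuals)
    print(f"Shapiro-Wilk on residuals: W = {w:.4f}, p = {p:.4f}")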

Notes

  1. Judge et al. (1988) and Gujarati (2003) recommend the Jarque–Bera test.
  2. (Filliben 1975)
  3. Young, K. D. S. (1993). "Bayesian diagnostics for checking assumptions of normality". Journal of Statistical Computation and Simulation, 47(3–4), 167–180.
  4. Spiegelhalter, D. J. (1980). "An omnibus test for normality for small samples". Biometrika, 67, 493–496. doi:10.1093/biomet/67.2.493
  5. Farrell, P. J., Rogers-Stewart, K. (2006). "Comprehensive study of tests for normality and symmetry: extending the Spiegelhalter test". Journal of Statistical Computation and Simulation, 76(9), 803–816. doi:10.1080/10629360500109023

References

  • Filliben, J. J. (1975). "The Probability Plot Correlation Coefficient Test for Normality". Technometrics (American Society for Quality) 17 (1). http://jstor.org/stable/1268008
  • Gujarati, Damodar N., Basic Econometrics, Fourth Edition, 2003; 147–148
