
Analysis of variance


In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated procedures, that compare means by splitting the overall observed variance into different parts. The initial techniques of the analysis of variance were pioneered by the statistician and geneticist R. A. Fisher in the 1920s and 1930s, and the method is therefore sometimes known as Fisher's ANOVA or Fisher's analysis of variance.

Overview

There are three conceptual classes of such models:

  • Fixed-effects models assume that the data come from normal populations which differ only in their means.
  • Random-effects models assume that the data describe a hierarchy of different populations whose differences are constrained by the hierarchy.
  • Mixed models describe situations where both fixed and random effects are present.

In practice, there are several types of ANOVA depending on the number of treatments and the way they are applied to the subjects in the experiment:

  • One-Way ANOVA is used to test for differences among three or more independent groups.
  • One-Way ANOVA for repeated measures is used when the subjects are dependent groups; this means that the same subjects are used for each treatment. Note that this method is subject to carryover effects.
  • 2x2 (read: two by two) ANOVA, the most common type of factorial analysis of variance, is used when the experimenter wants to study the effects of two or more treatment variables. Factorial ANOVA can also be 2x2x2, 3x3, etc., but designs with many factors are rarely used because the calculations are lengthy and the results are hard to interpret.

Example of One-Way ANOVA: Group A is given Vodka, Group B is given Gin, and Group C is given a placebo. All groups are then tested with a memory task.
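
A minimal computational sketch of this one-way design, using hypothetical memory scores for the three groups and SciPy's f_oneway routine (the data below are illustrative only):

```python
# A hypothetical one-way ANOVA: three independent groups, one memory score per subject.
from scipy import stats

vodka   = [6, 4, 5, 3, 5, 4]   # Group A: memory scores after vodka (hypothetical data)
gin     = [5, 4, 6, 4, 3, 5]   # Group B: memory scores after gin (hypothetical data)
placebo = [8, 7, 9, 6, 8, 7]   # Group C: memory scores after placebo (hypothetical data)

# f_oneway returns the F statistic and the p-value for the null hypothesis
# that all group means are equal.
f_stat, p_value = stats.f_oneway(vodka, gin, placebo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value argues against the hypothesis that the three group means are equal, but it does not indicate which groups differ; that requires follow-up (post hoc) comparisons.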

Example of One-Way ANOVA with repeated measures: Group A is given Vodka and tested on a memory task. After a rest period of five days, the same group repeats the experiment with Gin, and then once more with a placebo.
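
One way such a repeated-measures analysis might be carried out in software is sketched below, assuming data in long format and using the AnovaRM class from statsmodels; the column names and scores are illustrative assumptions, not part of the example above.

```python
# Hypothetical repeated-measures one-way ANOVA: each subject is measured
# under every drink condition (long-format data, illustrative column names).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "drink":   ["Vodka", "Gin", "Placebo"] * 4,
    "score":   [5, 4, 8, 4, 5, 7, 6, 5, 9, 3, 4, 6],
})

# AnovaRM fits a within-subjects ANOVA with 'drink' as the repeated factor;
# it requires balanced data (each subject observed once per condition).
result = AnovaRM(data, depvar="score", subject="subject", within=["drink"]).fit()
print(result)
```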

Example of Factorial ANOVA (2x2): In an experiment testing the effects of the expectation of vodka and the actual receipt of vodka, subjects are randomly assigned to four groups: 1) expect vodka-receive vodka, 2) expect vodka-receive placebo, 3) expect placebo-receive vodka, and 4) expect placebo-receive placebo (the last group serves as the control group). Each group is then tested on a memory task. The advantage of this design is that multiple variables can be tested at the same time instead of running two separate experiments. It also makes it possible to determine whether the two variables interact, that is, whether the effect of one variable depends on the level of the other (an interaction effect).
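
A sketch of how this 2x2 design might be analysed, assuming hypothetical scores and using an ordinary-least-squares fit with an interaction term via statsmodels (variable names are illustrative):

```python
# Hypothetical 2x2 factorial ANOVA: expectation of vodka x actual drink received.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "expect":  ["vodka"] * 8 + ["placebo"] * 8,
    "receive": (["vodka"] * 4 + ["placebo"] * 4) * 2,
    "score":   [4, 5, 3, 4, 6, 5, 6, 7, 5, 4, 5, 6, 8, 7, 9, 8],
})

# Fit a model with both main effects and their interaction, then build the
# ANOVA table from the fitted model.
model = ols("score ~ C(expect) * C(receive)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)
```

In the resulting table, significant rows for C(expect) and C(receive) correspond to main effects, while a significant C(expect):C(receive) row corresponds to the interaction effect described above.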

Logic of ANOVA

The fundamental technique is a partitioning of the total sum of squares into components related to the effects in the model used. As an example, consider a simplified ANOVA with one type of treatment applied at several levels; the model and the resulting partition are sketched below. (If the treatment levels are quantitative and the effects are linear, a linear regression analysis may be appropriate.)
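
For concreteness, a standard way to write this one-way layout and its partition (the notation below is assumed, not taken from the article):

```latex
% One observation y_{ij}: the j-th measurement at treatment level i
y_{ij} = \mu + \tau_i + \varepsilon_{ij}

% Partition of the total sum of squares into treatment and error components
\sum_{i}\sum_{j} \left( y_{ij} - \bar{y}_{..} \right)^2
  = \underbrace{\sum_{i} n_i \left( \bar{y}_{i.} - \bar{y}_{..} \right)^2}_{SS_{\mathrm{Treatments}}}
  + \underbrace{\sum_{i}\sum_{j} \left( y_{ij} - \bar{y}_{i.} \right)^2}_{SS_{\mathrm{Error}}}
```

That is, SS_Total = SS_Treatments + SS_Error.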

The number of degrees of freedom (abbreviated df) can be partitioned in a similar way; each component's degrees of freedom specify the chi-square distribution that describes its associated sum of squares.

Fixed-effects model

The fixed-effects model of analysis of variance applies to situations in which the experimenter has subjected his experimental material to several treatments, each of which affects only the mean of the underlying normal distribution of the "response variable".
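
In the notation of the one-way layout above, a fixed-effects formulation takes the treatment effects as unknown constants (a standard sketch, not the article's own notation):

```latex
% Fixed-effects assumptions: the treatment effects tau_i are unknown constants
\varepsilon_{ij} \sim N(0, \sigma^2), \qquad
\tau_i \ \text{fixed}, \qquad \sum_{i=1}^{k} \tau_i = 0

% Null hypothesis of no treatment effect on the mean
H_0 : \tau_1 = \tau_2 = \cdots = \tau_k = 0
```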

Random-effects model

Random-effects models are used to describe situations in which the differences among experimental units are themselves random rather than fixed treatment effects. The simplest example is estimating the unknown mean of a population whose individuals differ from one another; in that case, the variation between individuals is confounded with the variation of the measuring instrument.
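
In the same notation, a one-way random-effects formulation treats the group effects as random draws from a population of effects (again a sketch, with assumed notation):

```latex
% Random-effects assumptions: the group effects a_i are themselves random
y_{ij} = \mu + a_i + \varepsilon_{ij}, \qquad
a_i \sim N(0, \sigma_a^2), \qquad
\varepsilon_{ij} \sim N(0, \sigma^2)

% The total variance splits into between-group and within-group components
\operatorname{Var}(y_{ij}) = \sigma_a^2 + \sigma^2
```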

Degrees of freedom

The degrees of freedom indicate the effective number of independent observations contributing to a sum of squares in an ANOVA: the total number of observations minus the number of linear constraints imposed on the data.
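
For the one-way layout with k treatment levels and N observations in total, the degrees of freedom partition as follows (a standard result, stated here for concreteness):

```latex
% Degrees of freedom for the one-way layout: k treatment levels, N observations
df_{\mathrm{Total}} = N - 1, \qquad
df_{\mathrm{Treatments}} = k - 1, \qquad
df_{\mathrm{Error}} = N - k

% The components add up just as the sums of squares do
(N - 1) = (k - 1) + (N - k)
```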

Tests of significance

Analyses of variance lead to tests of statistical significance using Fisher's F-distribution.
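
For the one-way layout above, the test statistic is the ratio of mean squares (stated here for concreteness, with the same assumed notation):

```latex
% F statistic: treatment mean square divided by error mean square
F = \frac{SS_{\mathrm{Treatments}} / (k - 1)}{SS_{\mathrm{Error}} / (N - k)}
  = \frac{MS_{\mathrm{Treatments}}}{MS_{\mathrm{Error}}}

% Under the null hypothesis and the model assumptions
F \sim F_{k-1,\; N-k}
```

Large values of F indicate that the variation between treatment means is large relative to the variation within treatments, and hence argue against the null hypothesis.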
