Post on 26-Dec-2015
Basics of parametric statistics: ANOVA (Analysis of Variance) – t-test and ANOVA in SPSS
The arithmetic mean can only be derived from interval or ratio measurements.
Interval data – equal intervals on a scale: the difference between any two adjacent points represents the same change in the measured quantity anywhere on the scale.
Ratio data – has the same properties as interval data, but ratios between values must also be meaningful, which requires an absolute zero. Example: 40 degrees is not twice as hot as 20 degrees, because the Celsius scale does not have an absolute zero.
Assumption 1: Homogeneity of variance – the spread of scores should be similar across groups, so the means are equally accurate.
Assumption 2: In repeated-measures designs: the sphericity assumption.
Assumption 3: Normal distribution.
Assumption 1: Homogeneity of variance
The spread of scores in each sample should be roughly similar.
Tested using Levene's test.
Assumption 2: The sphericity assumption
Tested using Mauchly's test. Basically the same idea: homogeneity of variance across the repeated measures.
Assumption 3: Normal distribution. In SPSS this can be checked using:
▪ Kolmogorov-Smirnov test
▪ Shapiro-Wilk test
These compare a sample set of scores to a normally distributed set of scores with the same mean and standard deviation.
If p > 0.05: the distribution is not significantly different from a normal distribution.
If p < 0.05: the distribution is significantly different from a normal distribution.
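Outside SPSS, the same assumption checks can be run programmatically. A minimal sketch using scipy, with invented sample data (the group names and values are made up for illustration):

```python
import numpy as np
from scipy import stats

# Two invented samples drawn from normal distributions
rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=30)
group_b = rng.normal(loc=55, scale=5, size=30)

# Levene's test for homogeneity of variance (Assumption 1)
lev_stat, lev_p = stats.levene(group_a, group_b)

# Shapiro-Wilk test for normality (Assumption 3)
sw_stat, sw_p = stats.shapiro(group_a)

# As above: p > 0.05 means no significant departure from the assumption
print(f"Levene p = {lev_p:.3f}, Shapiro-Wilk p = {sw_p:.3f}")
```

The interpretation is the same as in SPSS: a p-value above 0.05 means the data do not significantly violate the assumption being tested.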
Difference between t-test and ANOVA: the t-test is used to analyze the difference between TWO levels of an independent variable; ANOVA is used to analyze the differences between MULTIPLE (three or more) levels of an independent variable.
Independent variable = apple
Dependent variables could be: sweetness, decay time, etc.
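The apple example can be sketched in code: with two apple varieties a t-test applies, with three a one-way ANOVA. The varieties and sweetness scores below are invented for illustration:

```python
from scipy import stats

# Invented sweetness ratings for three apple varieties
sweetness = {
    "braeburn": [6.1, 5.8, 6.4, 6.0, 5.9],
    "gala":     [7.2, 7.0, 6.8, 7.3, 7.1],
    "fuji":     [7.9, 8.1, 7.7, 8.0, 7.8],
}

# TWO levels of the independent variable -> independent-samples t-test
t, p_t = stats.ttest_ind(sweetness["braeburn"], sweetness["gala"])

# THREE levels -> one-way ANOVA
f, p_f = stats.f_oneway(*sweetness.values())

print(f"t-test p = {p_t:.4f}, ANOVA p = {p_f:.4f}")
```

Note that the ANOVA p-value only says that *some* difference exists among the three varieties, which is exactly the omnibus-effect limitation discussed next.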
The ANOVA tests for an overall effect, not for the specific differences between groups.
To find the specific differences, use either planned comparisons or post hoc tests. Planned comparisons are used when a priori hypotheses about the results exist. Post hoc analysis is done after data collection and inspection.
A post hoc analysis is somewhat like doing many t-tests with a lowered significance cut-off point, so that the overall Type I error rate is controlled at 5%. Type I error: Fisher's criterion states that there is a 0.05 probability that any significance is due to variability between samples rather than to the experimental manipulation – the α-level.
Using a Bonferroni correction adjusts the α-level according to the number of tests done (2 tests: 0.05/2 = 0.025; 5 tests: 0.05/5 = 0.01). Basically, the more tests you do, the lower the cut-off point.
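The Bonferroni arithmetic is simple enough to show directly. A small sketch with three invented post hoc p-values:

```python
# Bonferroni correction: the alpha level is divided by the number of
# tests, so the more tests, the stricter the per-test cut-off.
alpha = 0.05
p_values = [0.003, 0.020, 0.041]          # invented p-values from 3 post hoc tests
corrected_alpha = alpha / len(p_values)   # 0.05 / 3 ~= 0.0167

# Only p-values below the corrected alpha count as significant
significant = [p < corrected_alpha for p in p_values]
print(significant)  # [True, False, False]
```

Note that 0.020 and 0.041 would both have passed an uncorrected 0.05 cut-off; the correction is exactly what keeps the family-wise Type I error at 5%.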
Variation in a set of scores comes from two sources:
Random variation from the subjects themselves (due to individual variations in motivation, aptitude, etc.)
Systematic variation produced by the experimental manipulation.
ANOVA compares the amount of systematic variation to the amount of random variation, to produce an F-ratio:
F = systematic variation / random variation ('error')
Large value of F: a lot of the overall variation in scores is due to the experimental manipulation, rather than to random variation between subjects.
Small value of F: the variation in scores produced by the experimental manipulation is small, compared to random variation between subjects.
In practice, ANOVA is based on the variance of the scores. The variance is the standard deviation squared:

variance = Σ(X − X̄)² / N
We want to take into account the number of subjects and the number of groups. Therefore, we use only the top line of the variance formula (the "Sum of Squares", or "SS"):

SS = Σ(X − X̄)²

We divide this by the appropriate "degrees of freedom" (usually the number of groups or subjects minus 1).
Between-groups sum of squares (SSM): a measure of the amount of variation between the groups. (This is due to our experimental manipulation.)
Within-groups sum of squares (SSR): a measure of the amount of variation within the groups. (This cannot be due to our experimental manipulation, because we did the same thing to everyone within each group.)
Total sum of squares (SST): a measure of the total amount of variation amongst all the scores. (Total SS) = (Between-groups SS) + (Within-groups SS)
The bigger the F-ratio, the less likely it is to have arisen merely by chance.
Use the between-groups and within-groups d.f. to find the critical value of F.
Your F is significant if it is equal to or larger than the critical value in the table.
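The sums-of-squares recipe above can be carried out by hand and checked against scipy. A sketch with three invented groups of three scores each:

```python
import numpy as np
from scipy import stats

# Invented scores for three groups
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([10.0, 11.0, 12.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between-groups SS: variation of the group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-groups SS: variation of scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
# Total SS: variation of all scores around the grand mean
ss_total = ((all_scores - grand_mean) ** 2).sum()

# Degrees of freedom: between = k - 1, within = N - k
df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# F = (SS_between / df_between) / (SS_within / df_within)
f_ratio = (ss_between / df_between) / (ss_within / df_within)

# Cross-check against scipy's one-way ANOVA
f_scipy, p = stats.f_oneway(*groups)
print(f"F = {f_ratio:.2f} (scipy: {f_scipy:.2f}), p = {p:.4f}")
```

With these numbers, SS_total (60) is exactly SS_between (54) plus SS_within (6), matching the decomposition above, and the hand-computed F agrees with scipy's.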
Critical values of F at p = .05 (columns: between-groups d.f.; rows: within-groups d.f.):

                  Between-groups d.f.
Within d.f.      1       2       3       4
      1       161.4   199.5   215.7   224.6
      2        18.51   19.00   19.16   19.25
      3        10.13    9.55    9.28    9.12
      4         7.71    6.94    6.59    6.39
      5         6.61    5.79    5.41    5.19
      6         5.99    5.14    4.76    4.53
      7         5.59    4.74    4.35    4.12
      8         5.32    4.46    4.07    3.84
      9         5.12    4.26    3.86    3.63
     10         4.96    4.10    3.71    3.48
     11         4.84    3.98    3.59    3.36
     12         4.75    3.89    3.49    3.26
     13         4.67    3.81    3.41    3.18
     14         4.60    3.74    3.34    3.11
     15         4.54    3.68    3.29    3.06
     16         4.49    3.63    3.24    3.01
     17         4.45    3.59    3.20    2.96
Here, look up the critical F-value for 3 and 16 d.f.
Columns correspond to between-groups d.f.; rows correspond to within-groups d.f.
Go along the column for 3 and down the row for 16: the critical F (3.24) is at the intersection.
Our obtained F, 25.13, is bigger than 3.24; it is therefore significant at p < .05. (In fact it is bigger than 9.01, the critical value for a p of 0.001.)
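Instead of a printed table, the critical value can also be obtained from the F distribution's percent-point function in scipy, and the exact p-value of the obtained F computed directly:

```python
from scipy import stats

df_between, df_within = 3, 16

# Critical F at p = .05: the 95th percentile of F(3, 16)
crit_05 = stats.f.ppf(0.95, df_between, df_within)

# Exact p-value of the obtained F from the example above
p_obtained = stats.f.sf(25.13, df_between, df_within)

print(f"Critical F(3, 16) at p = .05: {crit_05:.2f}, p for F = 25.13: {p_obtained:.2e}")
```

This reproduces the table entry of 3.24 for 3 and 16 d.f., and confirms that F = 25.13 is significant well beyond the 0.001 level.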
Types of ANOVA:
▪ One-way ANOVA: independent or repeated measures
▪ Two-way ANOVA: independent, mixed, or repeated measures
▪ N-way ANOVA
One-Way: ONE INDEPENDENT VARIABLE
Independent: 1 participant = 1 piece of data.
Independent variable: Yoga Pose, 3 levels.
Dependent variables: heart rate, oxygen saturation.
One-Way: ONE INDEPENDENT VARIABLE
Repeated measures (dependent): 1 participant = multiple pieces of data.
Independent variable: Cake, 3 levels.
Dependent variables: blood sugar, pH balance.
Two-Way: TWO INDEPENDENT VARIABLES
Independent: 1 participant = 1 piece of data.
Independent variables: Age (>40, <40) and Music Style (Indie-Rock, Classic, Pop).
Two-Way: TWO INDEPENDENT VARIABLES
Mixed: Variable 1 is independent (Controller); Variable 2 is repeated measures (Space Ship).
Two-Way: TWO INDEPENDENT VARIABLES
Repeated measures (dependent): 1 participant = multiple pieces of data.
Independent variables: Exercise and Temperature (20°, 25°, 30°).
Running SPSS (repeated measures t-test)
Interpreting SPSS output (repeated measures t-test)
RUNNING SPSS
Click ‘Options…’, then tick the boxes: Descriptive; Homogeneity of variance test; Means plot.
SPSS output
One-way independent-measures ANOVA enables comparisons between 3 or more groups that represent different levels of one independent variable.
A parametric test, so the data must be interval or ratio scores; be normally distributed; and show homogeneity of variance.
Compared with running multiple t-tests, ANOVA avoids increasing the risk of a Type I error.