
Page 1: Statistics  (cont.)

Statistics (cont.)

Psych 231: Research Methods in Psychology

Page 2: Statistics  (cont.)

Announcements

- Journal Summary 2 assignment: due today
- Group projects: plan to have your analyses done before Thanksgiving break; GAs will be available during lab times to help
- Poster sessions are the last lab sections of the semester (last week of classes), so start thinking about your posters. I will lecture about poster presentations on Monday.

Page 3: Statistics  (cont.)

Testing Hypotheses

Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistic(s)
Step 5: Make a decision about your null hypothesis: "Reject H0" or "Fail to reject H0"

Page 4: Statistics  (cont.)

Summary to this point

Decision table (the experimenter's conclusions crossed with the real world, 'truth'):

- Reject H0 when H0 is correct: Type I error
- Reject H0 when H0 is wrong: correct decision
- Fail to reject H0 when H0 is correct: correct decision
- Fail to reject H0 when H0 is wrong: Type II error

The hypotheses are about populations. Example experiment:

- Group A gets a treatment to improve memory; Group B gets no treatment (control)
- After the treatment period, test both groups for memory
- Results: Group A's average memory score is 80%, Group B's is 76%

Is the 4% difference a "real" difference (statistically significant) or is it just sampling error?

H0: there is no difference between Group A and Group B (μA = μB)
HA: there is a difference between Group A and Group B

[Figure: the two sample distributions, with means of 76% and 80%]
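To make the Type I error cell concrete, here is a minimal simulation sketch (with made-up population parameters, not the memory data above): it runs many versions of the experiment in a world where H0 is true and shows that roughly 5% of them still come out "significant" at an alpha level of 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, alpha = 25, 0.05          # hypothetical group size and alpha level
    n_experiments = 10_000
    false_positives = 0

    for _ in range(n_experiments):
        # H0 is true: both groups come from the same population (no treatment effect).
        group_a = rng.normal(78, 10, n)
        group_b = rng.normal(78, 10, n)
        result = stats.ttest_ind(group_a, group_b)
        false_positives += result.pvalue < alpha

    # With alpha = 0.05, about 5% of these experiments reject H0 by mistake (Type I errors).
    print(f"Type I error rate = {false_positives / n_experiments:.3f}")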

Page 5: Statistics  (cont.)

“Generic” statistical test

Tests the question: Are there differences between groups due to a treatment?

Two possibilities in the "real world". First possibility: H0 is true (no treatment effect), so both samples are drawn from one population.

[Figure: one population distribution and the two sample distributions, with means of 76% and 80%]

Page 6: Statistics  (cont.)

“Generic” statistical test

Second possibility: H0 is false (there is a treatment effect). People who get the treatment change; they form a new population (the "treatment population"), so the two samples are drawn from two populations.

[Figure: two population distributions and the two sample distributions, with means of 76% and 80%]

Page 7: Statistics  (cont.)

“Generic” statistical test

Why might the samples be different? (What is the source of the variability between groups?)

- ER: random sampling error
- ID: individual differences (if a between-subjects factor)
- TR: the effect of a treatment

Page 8: Statistics  (cont.)

“Generic” statistical test

The generic test statistic is a ratio of sources of variability:

Computed test statistic = Observed difference / Difference expected by chance = (TR + ID + ER) / (ID + ER)

where ER is random sampling error, ID is individual differences (if a between-subjects factor), and TR is the effect of the treatment.
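A small illustrative sketch of this ratio (the group means, variability, and sample sizes are invented): here the "difference expected by chance" is estimated as the standard error of the difference between the two sample means.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical memory scores (percent correct) for two groups of n = 25 each.
    # Group A gets the treatment; Group B is the control.
    group_a = rng.normal(loc=80, scale=8, size=25)   # TR + ID + ER contribute here
    group_b = rng.normal(loc=76, scale=8, size=25)   # only ID + ER contribute here

    observed_difference = group_a.mean() - group_b.mean()

    # "Difference expected by chance": estimated standard error of the difference
    # between the two sample means (one common way to quantify ID + ER).
    se_a = group_a.var(ddof=1) / len(group_a)
    se_b = group_b.var(ddof=1) / len(group_b)
    difference_by_chance = np.sqrt(se_a + se_b)

    test_statistic = observed_difference / difference_by_chance
    print(f"observed difference = {observed_difference:.2f}")
    print(f"difference expected by chance = {difference_by_chance:.2f}")
    print(f"computed test statistic = {test_statistic:.2f}")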

Page 9: Statistics  (cont.)

Difference from chance

n = 1: a single score x is drawn from the population distribution. The sampling error is the difference between the population mean and the sample mean (here, the single score).

[Figure: population distribution with the population mean marked and one sampled score x]

Page 10: Statistics  (cont.)

Difference from chance

n = 2: two scores are drawn and averaged. The sampling error is the difference between the population mean and the sample mean.

[Figure: population distribution with the population mean, two sampled scores, and their sample mean]

Page 11: Statistics  (cont.)

Difference from chance

n = 10: ten scores are drawn and averaged.

[Figure: population distribution with the population mean, ten sampled scores, and their sample mean]

Generally, as the sample size increases, the sampling error decreases.
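A minimal simulation of this point, assuming a normally distributed population; the population mean, SD, and sample sizes below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    pop_mean, pop_sd = 75, 10   # hypothetical population parameters

    for n in (1, 2, 10, 100):
        # Draw many samples of size n and record each sample mean.
        sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
        avg_error = np.abs(sample_means - pop_mean).mean()
        print(f"n = {n:3d}: average sampling error = {avg_error:.2f}")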

Page 12: Statistics  (cont.)

Difference from chance

Typically, the narrower the population distribution, the narrower the range of possible sample means, and the smaller the difference expected by "chance".

[Figure: two population distributions, one with small population variability and one with large population variability]

Page 13: Statistics  (cont.)

Difference from chance

These two factors (sample size and population variability) combine to impact the distribution of sample means. The distribution of sample means is the distribution of all possible sample means of a particular sample size that can be drawn from the population. The average sampling error of this distribution is the "chance" part of the test statistic.

[Figure: population, many samples of size n with means XA, XB, XC, XD, and the resulting distribution of sample means, whose spread is the average sampling error ("chance")]
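A quick sketch of building this distribution by simulation (again with an assumed normal population and made-up parameters); the standard deviation of the simulated sample means plays the role of the "chance" term, and it matches the theoretical standard error.

    import numpy as np

    rng = np.random.default_rng(2)
    pop_mean, pop_sd, n = 75, 10, 25   # hypothetical population and sample size

    # Approximate the distribution of sample means by drawing many samples of size n.
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)

    # The spread of this distribution is the "chance" term (the standard error).
    print(f"SD of sample means (simulated): {sample_means.std(ddof=1):.2f}")
    print(f"Theoretical standard error:     {pop_sd / np.sqrt(n):.2f}")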

Page 14: Statistics  (cont.)

“Generic” statistical test

The generic test statistic distribution:

- To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR).
- What's large enough? The alpha level gives us the decision criterion.

[Figure: the distribution of sample means is transformed into the distribution of the test statistic, (TR + ID + ER) / (ID + ER); the α-level determines where the decision boundaries go]

Page 15: Statistics  (cont.)

“Generic” statistical test

[Figure: distribution of the test statistic, divided into "Reject H0" and "Fail to reject H0" regions]

To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR). What's large enough? The alpha level gives us the decision criterion.

Page 16: Statistics  (cont.)

“Generic” statistical test

[Figure: distribution of the test statistic, again divided into "Reject H0" and "Fail to reject H0" regions]

"One-tailed test": sometimes you know to expect a particular direction of difference (e.g., the treatment should "improve memory performance"), so the decision criterion is placed in only one tail of the distribution.
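As a rough illustration of how the alpha level sets those boundaries, the sketch below looks up critical t values for a hypothetical test with 24 degrees of freedom; the df and alpha are made-up values, and scipy is assumed to be available.

    from scipy import stats

    alpha, df = 0.05, 24   # hypothetical alpha level and degrees of freedom

    # Two-tailed test: split alpha between both tails.
    two_tailed_cutoff = stats.t.ppf(1 - alpha / 2, df)
    # One-tailed test: put all of alpha in the expected tail.
    one_tailed_cutoff = stats.t.ppf(1 - alpha, df)

    print(f"two-tailed: reject H0 if |t| > {two_tailed_cutoff:.2f}")
    print(f"one-tailed: reject H0 if  t  > {one_tailed_cutoff:.2f}")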

Page 17: Statistics  (cont.)

“Generic” statistical test

Things that affect the computed test statistic:

- Size of the treatment effect: the bigger the effect, the bigger the computed test statistic.
- Difference expected by chance (sampling error), which in turn depends on:
  • Sample size
  • Variability in the population

Page 18: Statistics  (cont.)

Significance

"A statistically significant difference" means the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05).

Note: "statistical significance" is not the same thing as theoretical significance. It only means that there is a statistical difference; it doesn't mean that the difference is an important one.

Page 19: Statistics  (cont.)

Non-Significance

Failing to reject the null hypothesis:

- Generally, we are not interested in "accepting the null hypothesis" (remember: we can't prove things, only disprove them).
- Usually you check whether you made a Type II error (failed to detect a difference that is really there); see the power sketch after this list:
  • Check the statistical power of your test (1 - β)
  • Maybe your sample size is too small
  • Maybe the effects you're looking for are really small
  • Check your controls; maybe there is too much variability
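A minimal sketch of checking power by simulation, assuming normally distributed scores and scipy; the effect size, SD, sample size, and number of simulated experiments are hypothetical choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, true_effect, sd, alpha = 20, 5.0, 10.0, 0.05   # hypothetical values
    n_sims = 5_000
    rejections = 0

    for _ in range(n_sims):
        # H0 is false here: the treatment really shifts scores by true_effect.
        treatment = rng.normal(true_effect, sd, n)
        control = rng.normal(0.0, sd, n)
        result = stats.ttest_ind(treatment, control)
        rejections += result.pvalue < alpha

    power = rejections / n_sims            # estimated power, 1 - beta
    print(f"estimated power = {power:.2f}, so beta is about {1 - power:.2f}")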

Page 20: Statistics  (cont.)

Some inferential statistical tests

- 1 factor with two groups: t-tests
  • Between groups: 2-independent-samples t-test
  • Within groups: repeated-measures t-test (matched, related samples)
- 1 factor with more than two groups: Analysis of Variance (ANOVA), either between groups or repeated measures
- Multi-factorial: factorial ANOVA

Page 21: Statistics  (cont.)

T-test

Design: 2 separate experimental conditions.

Degrees of freedom: based on the size of the sample and the kind of t-test.

Formula:

    t = (observed difference) / (difference expected by chance) = (X1 - X2) / (estimate based on sampling error)

The computation of the "difference expected by chance" term differs for between-subjects and within-subjects t-tests.

Page 22: Statistics  (cont.)

T-test

Reporting your results:

- The observed difference between conditions
- The kind of t-test
- The computed t-statistic
- The degrees of freedom for the test
- The "p-value" of the test

"The mean of the treatment group was 12 points higher than that of the control group. An independent-samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."

"The mean score of the post-test was 12 points higher than the pre-test. A repeated-measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."
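A minimal sketch of how these two kinds of t-tests might be computed with scipy; the score arrays are invented and are not the data behind the example write-ups above.

    import numpy as np
    from scipy import stats

    # Hypothetical memory scores for two independent groups (between subjects).
    treatment = np.array([82, 85, 78, 90, 74, 88, 81, 79, 86, 84])
    control   = np.array([70, 75, 72, 78, 69, 74, 77, 71, 73, 76])

    between = stats.ttest_ind(treatment, control)          # independent-samples t-test
    print(f"between: t({len(treatment) + len(control) - 2}) = "
          f"{between.statistic:.2f}, p = {between.pvalue:.4f}")

    # Hypothetical pre-test and post-test scores for the same participants (within subjects).
    pre  = np.array([65, 70, 72, 68, 74, 71, 69, 73])
    post = np.array([78, 80, 79, 75, 85, 82, 77, 84])

    within = stats.ttest_rel(pre, post)                    # repeated-measures t-test
    print(f"within:  t({len(pre) - 1}) = {within.statistic:.2f}, p = {within.pvalue:.4f}")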

Page 23: Statistics  (cont.)

Analysis of Variance

Designs:

- More than two groups
  • 1-factor ANOVA, factorial ANOVA
  • Both within-groups and between-groups factors

The test statistic is an F-ratio.

Degrees of freedom: there are several to keep track of, and the number of them depends on the design.

Page 24: Statistics  (cont.)

Analysis of Variance

More than two groups: now we can't just compute a simple difference score, since there is more than one difference. So we use variance instead of a simple difference.

- Variance is essentially an average difference.

F-ratio = Observed variance / Variance from chance

Page 25: Statistics  (cont.)

1 factor ANOVA

1 factor, with more than two levels: now we can't just compute a simple difference score, since there is more than one difference (A - B, B - C, and A - C).

Page 26: Statistics  (cont.)

1 factor ANOVA

Null hypothesis:
H0: all the groups are equal (XA = XB = XC)

Alternative hypotheses:
HA: not all the groups are equal (this is the hypothesis the ANOVA tests!)

HA is consistent with several specific patterns:
- XA ≠ XB ≠ XC
- XA ≠ XB = XC
- XA = XB ≠ XC
- XA = XC ≠ XB

Do further tests to pick between these.

Page 27: Statistics  (cont.)

1 factor ANOVA

Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (XA ≠ XB ≠ XC; XA ≠ XB = XC; XA = XB ≠ XC; XA = XC ≠ XB).

For example:
- Test 1: A ≠ B
- Test 2: A ≠ C
- Test 3: B = C

Page 28: Statistics  (cont.)

1 factor ANOVA

Reporting your results:

- The observed differences
- The kind of test
- The computed F-ratio
- The degrees of freedom for the test
- The "p-value" of the test
- Any post-hoc or planned comparison results

"The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and between A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."
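A minimal sketch of a 1-factor ANOVA with follow-up pairwise comparisons, using scipy and invented scores for three groups; the pairwise tests here are plain t-tests rather than a specific corrected post-hoc procedure.

    import numpy as np
    from itertools import combinations
    from scipy import stats

    # Hypothetical scores for three groups (1 factor, 3 levels).
    groups = {
        "A": np.array([12, 10, 14, 11, 13, 12, 9, 15]),
        "B": np.array([25, 23, 27, 24, 26, 25, 22, 28]),
        "C": np.array([27, 26, 29, 25, 28, 27, 24, 30]),
    }

    # Omnibus 1-way ANOVA: tests H0 that all group means are equal.
    f_result = stats.f_oneway(*groups.values())
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups.values()) - len(groups)
    print(f"F({df_between},{df_within}) = {f_result.statistic:.2f}, p = {f_result.pvalue:.4f}")

    # Follow-up pairwise comparisons to see which groups differ.
    for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
        t_result = stats.ttest_ind(g1, g2)
        print(f"{name1} vs {name2}: t = {t_result.statistic:.2f}, p = {t_result.pvalue:.4f}")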

Page 29: Statistics  (cont.)

Factorial ANOVAs

We covered much of this in our experimental design lecture.

- More than one factor
  • Factors may be within or between subjects
  • The overall design may be entirely within, entirely between, or mixed
- Many F-ratios may be computed
  • An F-ratio is computed to test the main effect of each factor
  • An F-ratio is computed to test each of the potential interactions between the factors

Page 30: Statistics  (cont.)

Factorial ANOVAs

Reporting your results:

- The observed differences
  • Because there may be a lot of these, you may present them in a table instead of directly in the text
- The kind of design
  • e.g., "2 x 2 completely between-subjects factorial design"
- The computed F-ratios
  • You may see separate paragraphs for each factor, and for the interactions
- The degrees of freedom for the test
  • Each F-ratio will have its own set of df's
- The "p-value" of the test
  • You may want to just say "all tests were tested with an alpha level of 0.05"
- Any post-hoc or planned comparison results
  • Typically only the theoretically interesting comparisons are presented
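As a rough sketch of how the F-ratios for a 2 x 2 completely between-subjects factorial design might be computed, assuming pandas and statsmodels are available; the factor names and scores are invented.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical 2 x 2 between-subjects data: treatment (yes/no) x delay (short/long).
    data = pd.DataFrame({
        "treatment": ["yes"] * 8 + ["no"] * 8,
        "delay":     (["short"] * 4 + ["long"] * 4) * 2,
        "score":     [82, 85, 79, 88, 74, 71, 76, 70,
                      75, 78, 72, 77, 69, 66, 71, 68],
    })

    # One F-ratio per main effect and one for the interaction.
    model = ols("score ~ C(treatment) * C(delay)", data=data).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)   # F and p for C(treatment), C(delay), and their interaction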