
Page 1: Topic 2 – Simple Linear Regression (KKNR Chapters 4 – 7)

Page 2: Overview

Regression Models; Scatter Plots (SAS GPLOT Procedure)

Estimation and Inference in SLR (SAS REG Procedure)

ANOVA Table & Coefficient of Determination (R2)

Page 3: Simple Linear Regression Model

We take n pairs of observations

The goal is to find a model that best fits the data.

Model will be linear in terms of the parameters (betas). These won’t appear in exponents or anything unusual.

Allowed to be nonlinear in terms of predictor variables (we may transform these somewhat freely). We may also transform the response.

Data: $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$

Page 4: Simple Linear Regression Model (2)

Some sample models:

$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$

$Y_i = \beta_0 + \beta_1 \log X_i + \varepsilon_i$

$\log Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$

Notice the betas always function in the same way, and the analysis will always proceed in the same way too (after we make whatever transformations we might need).
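
For instance, here is a minimal SAS sketch of how such transformations might be set up before fitting (the dataset and variable names mydata, x, and y are hypothetical, not from the course data):

data trans;
   set mydata;        /* any dataset with response y and predictor x */
   logx = log(x);     /* natural log of the predictor                */
   logy = log(y);     /* natural log of the response                 */
run;

proc reg data=trans;
   model y = logx;    /* fits Y = b0 + b1*log(X) + error             */
run;

Either transformed variable can then be swapped into the MODEL statement; the betas enter the model in exactly the same way in every case.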

Page 5: Simple Linear Regression Model (3)

Key question: How do you decide on the “best” form for the model?

Always view a scatter plot (use PROC GPLOT in SAS). Curvature in the plot will help you determine the need for a transformation on either X or Y.

Always consider residual plots. Some patterns in these plots will also indicate the need for transformation (more on this later).

Page 6: Scatter Plot Approach

If you can look at a scatter plot and the data “look linear”, then likely no transformation is necessary. Try not to look for things that are not there.

If you see curvature, then some transformation may be appropriate:

Use scientific theory & experience.

Try transformations you think may work – look at scatter plots of the transformed data to assess whether they do work.

Page 7: Finding the “Best” Model

There is no “absolute” strategy. Some common mistakes (why are these bad?):

Try several different methods and simply take the one for which you get the best results (e.g. highest R2)

Over-fit the model by including lots of extra terms (e.g. squares, cubes, etc.) in hopes of getting the curve to go through all of the data points (note that this would be MLR)

Page 8: Collaborative Learning Activity

CLG #2.1–2.3: First, make sure you read enough to understand the dataset we will be considering. Then, please try to answer these questions related to scatter plots.

Page 9: Scatter Plot Examples (1)

[Scatter plot with estimated regression line: SAT scores (800–1200) vs. Statewide Expenditures (3–10)]

Page 10: Scatter Plot Examples (2)

[Scatter plot with estimated regression line: SAT scores (800–1200) vs. Percentage of Eligible Students Taking SAT (0–90)]

Page 11: Scatter Plot Examples (3)

[Scatter plot with nonparametric smooth: SAT scores (800–1200) vs. Percentage of Eligible Students Taking SAT (0–90)]

Page 12: Scatter Plot Examples (4)

[Scatter plot with nonparametric smooth: SAT scores (800–1200) vs. Log-Transformed Percentage of Eligible Students Taking SAT (1–5)]

Page 13: Comments on GPLOT

Utilize SYMBOL, AXIS, and TITLE statements to make your plots look nice.

ORFONT provides a good symbol set. You can also manipulate the COLOR of symbols in order to help the viewer differentiate groups.

Be careful to remember that SAS reuses these statements, so you will need to redefine them as necessary.
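
For example, here is a sketch of the kind of GPLOT call this implies, using the sat dataset and the score/expend variables from the REG example later in these notes (the specific symbol, axis, and title choices are illustrative):

symbol1 value=dot color=blue interpol=rl;     /* i=rl overlays a fitted straight line */
axis1 label=('Expenditure');
axis2 label=(angle=90 'SAT Score');
title 'SAT Score vs. Expenditure';

proc gplot data=sat;
   plot score*expend / haxis=axis1 vaxis=axis2;
run;
quit;

goptions reset=all;   /* SYMBOL, AXIS, and TITLE persist until redefined or reset */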

Page 14: SAS ORFONT

Page 15: Fitting the SLR Model

• Once we decide on the form of our model, we need to estimate the parameters that yield the “best” fit.

• The arithmetic involved is accomplished with a computer, but it is useful to have some understanding of how the estimates are calculated.

Page 16: The SLR Model

Whatever the transformations may be, our model is in the form of a straight line:

$Y = \beta_0 + \beta_1 X + \varepsilon$

Epsilon represents the inherent variation (or error) in the model.

The model involves two other parameters (unknown, but fixed in value):

slope (change in Y for a one unit change in X)

intercept (value of Y for X = 0; usually we are not particularly interested in this)

Page 17: Observations

An observation Y at a particular X is a random variable. So you can think of each observation as having been drawn from a normal distribution centered at $\beta_0 + \beta_1 X$ and having standard deviation $\sigma$.

Be careful to remember that these parameters (represented by Greek letters) are fixed – but can never be known exactly. We can only estimate them.

Page 18: Graphical Representation

[Figure: normal distributions of Y centered on the regression line at several values of X]

Page 19: Model Assumptions

We make three assumptions on the error term in our model. A simple statement of these assumptions is that

The assumptions on the errors apply to both regression and ANOVA and we will be assuming these throughout the course.

For regression, we also make a 4th assumption that our model (in this case linear relationship between X and Y) is appropriate.

$\varepsilon_i \overset{\text{IID}}{\sim} N(0, \sigma^2)$

Page 20: Assumptions on Errors

Constant Variance (Homoscedasticity) – the variance associated with the error is the same for ANY value of X.

Normality – the errors follow a normal distribution with a mean of zero.

Independence – the errors (and hence also the responses) are statistically independent of each other.

Page 21: Checking Your Assumptions

Reminder: We can never know the exact values of the errors because we can never know the true regression equation.

We can (and will) estimate the errors by the residuals. The residuals can then be used (mostly in graphical analyses) to assess the assumptions – giving us some idea of whether the assumptions of our model are satisfied. More on this later...

Page 22: Estimation of the “Best” Line

We want to obtain estimates of the parameters $\beta_0$ and $\beta_1$ (remember, we can never know them exactly).

Notation: Generally, I will use lower case English letters to represent estimates for parameters. You may also see hat notation. For example, if $\beta$ is a parameter, $\hat{\beta}$ would be its estimated value from data.

Our estimates will be denoted $b_0$ and $b_1$. The residuals will be denoted by $e_i$.

Page 23: Parameter Estimates

Key Point: Parameters are fixed, but their estimates ($b_0$, $b_1$, $e_i$) are random variables. If we take a different sample, we’ll get a different estimate.

Thus all of the estimates we compute will have associated standard errors that we may also estimate.

The method of least squares is used to obtain both parameter estimates and standard errors. This method is desirable because the estimates are unbiased, minimum variance estimates.


Page 24: Least-squares Method

The least squares method obtains the estimated regression line that minimizes the sum of the squared residuals (also called the SSE or sum of squares error).

Another way to think of this is that the least squares estimates allow us to explain as much of the variation in Y as we possibly can using X as a predictor. The SSR (sums of squares due to our model) is maximized.

$SSE = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(Y_i - \hat{Y}_i\right)^2$

Page 25: Least Squares Estimates

The estimates have formulas in terms of the data:

$b_1 = \dfrac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}$

$b_0 = \bar{Y} - b_1 \bar{X}$

$e_i = Y_i - \hat{Y}_i = Y_i - (b_0 + b_1 X_i)$

$s^2 = \hat{\sigma}^2 = \dfrac{SSE}{n - 2}$

Page 26: Least Squares Estimates (2)

It is not important to memorize these formulas. I won’t ask you to calculate a parameter estimate by hand from the data. We have computers for this.

What is important will be to understand that, because the Yi are random variables, and because all of these estimates depend on the Yi, the estimates themselves will also be random variables. Thus we may estimate their standard errors, develop confidence intervals, and draw statistical inferences.

Page 27: Inference about the Slope

• A non-zero slope implies a linear association between the predictor and response.

• In some experimental cases the relationship may be causal as well.

• Thus statistical inference for the slope is quite important.

Page 28: Inference About the Slope

The first thing to remember is that b1 is a random variable. In order to do inference, we must first consider that...

$b_1 = \dfrac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}$ is normally distributed (why?)

The standard error associated to $b_1$ is

$s(b_1) = \sqrt{\dfrac{MSE}{\sum_{i=1}^{n} (X_i - \bar{X})^2}} = \sqrt{\dfrac{MSE}{SSX}}$

Page 29: Slope Inference (2)

For testing $H_0: \beta_1 = k$, the statistic

$T = \dfrac{b_1 - k}{s(b_1)}$

has a t-distribution with n – 2 degrees of freedom when the null hypothesis is true.

A two-sided confidence interval for the slope will be:

$b_1 \pm t_{n-2,\,1-\alpha/2}\; s(b_1)$
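
As a worked check against the SAT output shown later (pages 50–51), where $b_1 = -20.892$, $s(b_1) = 7.328$, and n = 50 (so $t_{48,\,0.975} \approx 2.011$):

$T = \dfrac{-20.892 - 0}{7.328} \approx -2.85, \qquad -20.892 \pm (2.011)(7.328) \Rightarrow (-35.6,\ -6.2)$

These match the reported t value and the 95% confidence limits for expend.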

Page 30: Slope Inference (3)

Want to determine: Does X help explain Y through a linear model?

If we reject the null hypothesis $H_0: \beta_1 = 0$, then we may conclude that there is a linear association between X and Y (the assumptions must be satisfied).

Key point: Failing to reject does not necessarily allow us to conclude that X is unimportant. Maybe we need a bigger sample to give better power.

Page 31: Slope Inference (4)

Another Key Point: Violations of the model assumptions may invalidate the significance test.

In particular, if there is a nonlinear association or some type of dependence issue, the SLR model should not be used.

See page 65 for some pictures illustrating this.

Page 32: Experimental Control

In some situations you have experimental control over your predictor variable. Thus you have some control over the SE for the slope:

Making SSX large will decrease the SE of your estimate for the slope. Do this by spreading your chosen X values further apart.

Increasing n (and hence increasing degrees of freedom) may also help to decrease the SE.

$s(b_1) = \sqrt{\dfrac{MSE}{SSX}}$

Page 33: Inference About the Intercept

Hypothesis tests and confidence intervals may be constructed similar to inference for the slope. See page 63.

Key point: Unless the observed predictor is often in the neighborhood of zero, we have no reason to be interested in the intercept.

In fact, the intercept will usually just be an artifact of the model. And if the scope of the model does not include zero, there is no reason to even worry whether the value of b0 makes sense.

Page 34: Further Inference

Confidence Intervals for the Mean Response

Prediction Intervals

Page 35: The Predicted Value

The line describes the mean population response for each value of the predictor. If an association exists, then the mean response depends on the value of X.

The predicted value of Y at a given X = x0 is

Reminder: Notation may differ some from the text – I try to keep our notation as simple as possible.

$\hat{Y}_{x_0} = b_0 + b_1 x_0$

Page 36: C.I. for the Mean Response or Prediction Interval?

If you are trying to predict for a group, use a C.I. for the Mean Response.

Example: Trying to predict the average blood pressure for all 40 year olds.

This interval is usually narrower.

If you are trying to predict for a single observation, use a Prediction Interval.

Example: Trying to predict the blood pressure for a single 40 year old.

This interval is usually much wider because of individual variation (some 40 year olds will have much higher or lower B.P.s).

Page 37: *Calculations of SE for Mean Response

The first step is to write $\hat{Y}_{x_0}$ in terms of the estimates. We also need to use a small trick to avoid worrying about a covariance between the two parameter estimates:

$Var(\hat{Y}_{x_0}) = Var(b_0 + b_1 x_0) = Var(\bar{Y} - b_1\bar{X} + b_1 x_0) = Var\left(\bar{Y} + (x_0 - \bar{X})\,b_1\right)$

Page 38: *Calculations of SE for Mean Response (2)

We know the variances for Y-bar and b1. And it turns out that, even though Y-bar is used in the calculation of b1, the two are still independent. So the variance of the sum is the sum of the variances:

$Var(\hat{Y}_{x_0}) = Var(\bar{Y}) + (x_0 - \bar{X})^2\, Var(b_1) = \dfrac{\sigma^2}{n} + \dfrac{(x_0 - \bar{X})^2\,\sigma^2}{SSX} = \sigma^2\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{X})^2}{SSX}\right]$

Page 39: SE for the Mean Response

The mean response is a random variable since b0 and b1 are random. Hence it will have a standard error.

It is good to have some understanding of how this works, so we will look very briefly at the calculations (you should just try to follow them, but not worry about memorizing them)

$s(\hat{Y}_{x_0}) = \sqrt{MSE\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{X})^2}{SSX}\right]}$

Page 40: Confidence Intervals for Mean Response

You sometimes want to get confidence intervals for the mean response. Because we have estimated both the mean and variance, the t-distribution applies (n – 2 degrees of freedom). The CI for a given value of X is:

$\hat{Y}_{x_0} \pm t_{n-2,\,1-\alpha/2}\; s(\hat{Y}_{x_0})$

Page 41: SE for the Prediction Interval

So the prediction variance is the sum of these two components:

$Var(pred) = Var(\hat{Y}_{x_0}) + \sigma^2 = \sigma^2\left[1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{X})^2}{SSX}\right]$

$s(pred) = \sqrt{MSE\left[1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{X})^2}{SSX}\right]}$

Page 42: Prediction Intervals (1)

Now consider predicting a new observation at X = $x_0$. Our point estimate for this would just be the point on the regression line, $\hat{Y}_{x_0}$.

Our prediction interval will be of the same form as for the mean response, but with a different standard error:

$\hat{Y}_{x_0} \pm t_{n-2,\,1-\alpha/2}\; s(pred)$
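
For a concrete comparison, take the Minnesota row of the output on page 52, where $\hat{Y}_{x_0} = 963.9$, $s(\hat{Y}_{x_0}) = 9.91$, and MSE = 4887.2 (so $t_{48,\,0.975} \approx 2.011$):

CI for the mean response: $963.9 \pm (2.011)(9.91) \approx (944.0,\ 983.9)$

Prediction interval: $s(pred) = \sqrt{4887.2 + 9.91^2} \approx 70.6$, so $963.9 \pm (2.011)(70.6) \approx (822,\ 1106)$

These agree with the 95% CL Mean and 95% CL Predict columns, and show how the extra MSE piece widens the prediction interval.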

Page 43: Why the difference for the Prediction Interval?

The key to understanding the standard error for prediction is to understand the random components involved:

(1) The regular variance associated with getting a predicted value (same as the error for the C.I. for the mean response).

(2) The individual error for a single observation. It basically gives us an extra σ² piece. Think of the normal distribution centered around the regression line (see slide 18).

Page 44: Multiple Confidence Intervals

We did intervals for 40 year olds, both group and individual; what about 30? 35?

Getting multiple CI’s presents a similar problem to multiple hypothesis tests. We would expect one errant CI for every 20 CI’s that we obtain at 95% confidence. Thus some adjustment may need to be made.

Bonferroni is too conservative here, because these CI’s are actually dependent and it is possible to take advantage of this.

Page 45: Confidence Bands

The solution is to change our critical value. Instead of using T, we use a critical value related to the F distribution:

This allows us to produce CI’s for the mean response at any and all possible values for the predictor variable. Hence we may also use this to draw confidence bands around the regression line.

Useful trick: For significance level 0.05, the value of W is approximately 0.6 more than the value of T. This is slightly conservative, but simplifies computation.

$W = \sqrt{2\, F_{2,\, n-2,\, 1-\alpha}}$
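
To see the size of this adjustment for the SAT data (n = 50, significance level 0.05), using the approximate quantile $F_{2,\,48,\,0.95} \approx 3.19$:

$W = \sqrt{2(3.19)} \approx 2.53 \quad \text{versus} \quad t_{48,\,0.975} \approx 2.01$

The difference is a bit more than half a unit, so adding 0.6 to T is indeed slightly conservative.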

Page 46: Interpolation vs. Extrapolation

Interpolation (x0 within the domain of the observed X’s) is generally ok if the assumptions are satisfied.

Extrapolation (x0 outside the domain of the observed X’s) is usually a bad idea.

No assurance that linearity continues outside the observed domain.

Example – Height regressed on age in children.

Page 47: Key Concepts

The standard error formulas for the slope, the regression line, and prediction are related – your goal should be to understand these relationships.

You should also be able to construct CI’s and do hypothesis tests.

Point Estimates

Critical Values (know how to look these up)

Standard Errors (generally you would not be asked to compute these, just use them)

Page 48: SAS Review

proc reg data=sat;
   model score=expend / clb clm cli;   /* clb: CIs for coefficients; clm: CIs for mean response; cli: prediction intervals */
   id state expend;                    /* label the output statistics with these variables */
   output out=fit r=res p=pred;        /* save residuals and predicted values for diagnostics */
run;

Page 49: Collaborative Learning Activity

Please complete problems 2.4 (constructing CI’s) and 2.5 (interpreting regression output) on the handout.

Page 50: Output: ANOVA Table

                            Sum of        Mean
Source           DF        Squares      Square    F Value    Pr > F
Model             1          39722       39722       8.13    0.0064
Error            48         234586      4887.2
Total            49         274308

Root MSE           69.90851    R-Square    0.1448
Dependent Mean    965.92000    Adj R-Sq    0.1270
Coeff Var           7.23751

Page 51: Output: Parameter Estimates

                       Parameter Estimates

                    Parameter      Standard
Variable    DF       Estimate         Error    t Value    Pr > |t|
Intercept    1     1089.29372      44.38995      24.54      <.0001
expend       1      -20.89217       7.32821      -2.85      0.0064

Variable    DF      95% Confidence Limits
Intercept    1     1000.04174    1178.54569
expend       1      -35.62652      -6.15782

Page 52: Output: Output Statistics

                                Output Statistics

                          Dependent    Predicted     Std Error
Obs  state      expend     Variable        Value  Mean Predict      95% CL Mean
  9  Louisian    4.761         1021        989.8       12.9637    963.8     1016
 10  Minnesot    6             1085        963.9        9.9109    944.0    983.9
 11  Missouri    5.383         1045        976.8       10.6015    955.5    998.1
 12  Nebraska    5.935         1050        965.3        9.8890    945.4    985.2

Obs  state      expend     95% CL Predict       Residual
  9  Louisian    4.761       846.9    1133        31.1739
 10  Minnesot    6           822.0    1106       121.0593
 11  Missouri    5.383       834.7    1119        68.1689
 12  Nebraska    5.935       823.3    1107        84.7013

Page 53: ANOVA Table

ANOVA stands for analysis of variance. We use an ANOVA table in regression to organize our estimates of different components of variation. It is important to understand how this works for SLR since we will use ANOVA tables for MLR and ANOVA procedures as well.

Page 54: ANOVA Table

The table consists of variance estimates used to assess the following two questions:

Is there a linear association between the response and predictor(s)?

How “strong” is that linear association?

We need to start by understanding the different components of variation for a single data point.

Page 55: Components of Variation

Page 56: Combining Over All Data

We might look at the total deviation as follows:

But we cannot simply add deviations across data points. Why? Options?

$Y_i - \bar{Y} = (\hat{Y}_i - \bar{Y}) + (Y_i - \hat{Y}_i)$

Page 57: Combining Over All Data (2)

Squared deviations are chosen because it turns out that they can be used to estimate variances. It also (conveniently) turns out that:

$\sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \sum_{i=1}^{n} (\hat{Y}_i - \bar{Y})^2 + \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2$

$SS_{TOT} = SS_R + SS_E$

Page 58: Sums of Squares

The total sums of squares (SST) represents the total available variation that could be explained by the predictor (that not already explained by Ybar).

We break this into two components:

Model/Regression sums of squares (SSR) is the part that is explained by the predictor.

Error sums of squares (SSE) is the part that is still left unexplained.

Page 59: Degrees of Freedom

Each SS has an associated degrees of freedom. For simple linear regression,

DFT = n – 1

DFR = 1

DFE = n – 2

We always have $DF_T = DF_R + DF_E$.

In general, you lose one degree of freedom for each parameter you estimate. Since we estimate $\bar{Y}$ before we start, $DF_{TOT}$ is n – 1.
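
For the SAT example (n = 50 states) this gives $DF_T = 49$, $DF_R = 1$, and $DF_E = 48$, exactly the DF column of the ANOVA table on page 50.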

Page 60: Degrees of Freedom (2)

It is important to understand how DF are assigned with the models that we will be discussing. Some key principles:

DF Total is always 1 less than the number of observations.

You should next determine DF for the model. For regression, each continuous variable requires a slope estimate and takes 1 DF.

Lastly, the error DF is determined by subtraction (avoid memorization of formulas).

Page 61: Mean Squares

A mean square is a SS divided by its associated degrees of freedom.

These are the actual variance estimates:

$s_Y^2 = MST = \dfrac{SST}{n-1}$  (population variance)

$s^2 = MSE = \dfrac{SSE}{n-2}$  (error variance)

$MSR = \dfrac{SS_R}{1}$  (estimates $\sigma^2$ under the null hypothesis $H_0: \beta_1 = 0$)

Page 62: F-tests

Because both MSR and MSE estimate $\sigma^2$ under the null hypothesis, we may utilize their ratio in order to test whether there is a linear association.

If the null hypothesis is true, the statistic

$F = \dfrac{MSR}{MSE}$

will have an F distribution with 1 and n – 2 DF.

Note: To get your DF for the F-test, simply use the DF for the associated mean squares.

Page 63: Relationship of F to T

For SLR the F test is identical to the t-test for the slope as in fact:

$F = \dfrac{b_1^2}{s^2(b_1)} = T^2$

Additionally, you will find that in terms of critical values, $F_{1,\,v} = t_v^2$.

Page 64: Example

                            Sum of        Mean
Source           DF        Squares      Square    F Value    Pr > F
Model             1          39722       39722       8.13    0.0064
Error            48         234586      4887.2
Total            49         274308

Root MSE           69.90851    R-Square    0.1448
Dependent Mean    965.92000    Adj R-Sq    0.1270
Coeff Var           7.23751

                       Parameter Estimates

                    Parameter      Standard
Variable    DF       Estimate         Error    t Value    Pr > |t|
Intercept    1     1089.29372      44.38995      24.54      <.0001
expend       1      -20.89217       7.32821      -2.85      0.0064

Variable    DF      95% Confidence Limits
Intercept    1     1000.04174    1178.54569
expend       1      -35.62652      -6.15782

Page 65: Example (2)

For SLR, the F-test in ANOVA Table is exactly the same as the test for zero slope.

Note 8.13 = (-2.85)².

Caution: When we get into multiple regression, if the F-test has a small p-value this is a good start, but not the end! In multiple regression, the F-test may be thought of as a test for “model significance”. But it doesn’t tell us which variable(s) are important and which are not.

Page 66: Other Statistics from REG

R-square and Adjusted R-square help us to assess the “strength” of the linear relationship.

The coefficient of variation is calculated using

It measures the variation as a percentage of the mean.

$CV = 100 \times \dfrac{\sqrt{MSE}}{\bar{Y}}$
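
Checking against the output on page 50: $CV = 100 \times 69.90851 / 965.92 \approx 7.24$, the value SAS reports as Coeff Var.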

Page 67: Coefficient of Determination

The coefficient of determination (R2) gives us some idea as to the strength of the regression relationship.

Page 68: Coefficient of Determination (R2)

Reflects the variation in Y that is explained by the regression relationship as a percentage of the total:

With perfect linear association, SSE will be zero and R2 will be 1.

If no linear association, SSE will be the same as SST and R2 will be 0.

$R^2 = \dfrac{SS_R}{SS_{TOT}} = 1 - \dfrac{SS_E}{SS_{TOT}}$
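
From the ANOVA table on page 50: $R^2 = 39722 / 274308 \approx 0.145$, the R-Square value SAS reports.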

Page 69: Common Misconceptions

Steeper slope means bigger R2. This is not true. In fact R2 has nothing to do with the magnitude of the slope for our regression line.

The larger the value of R2, the better the model. This is also not true. R2 says nothing about appropriateness of model (see page 98).

R2 could be 0 but there could be a non-linear association between X & Y

R2 could be near 1 while a curvilinear model would be more appropriate (scatterplots will generally reveal this)

Page 70: The Correlation Coefficient (r)

Takes the sign of the slope and, for SLR, is simply the square root of R2.

Dimensionless – ranges between -1 and 1.

Symmetric – interchanging X & Y will not change the correlation between them.

Page 71

Page 72: SAT Example: Interpret R2

Simple interpretation: about 14.5% of the variation in SAT scores is explained by the expenditures.

Reality Check: (1) though significant, this is not a very strong relationship and (2) the slope parameter is negative, suggesting that increasing expenditures is associated with a decrease in the average score!

Page 73: Adjusted R2

Uses the mean squares to adjust (penalize) for the number of parameters in the model

We’ll discuss this more in multiple regression as it really isn’t important for SLR.

$R^2_a = 1 - \dfrac{SS_E/(n-p)}{SS_{TOT}/(n-1)} = 1 - \dfrac{MSE}{MS_{TOT}}$
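
For the SAT output (n = 50, p = 2 parameters): $R^2_a = 1 - \dfrac{4887.2}{274308/49} \approx 1 - 0.873 = 0.127$, matching the reported Adj R-Sq of 0.1270.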

Page 74: Regression Diagnostics

Check Your Assumptions!!!

Page 75: Regression Diagnostics

Assumptions:

Correct Model (linearity)

Independent Observations

Normally Distributed Errors

Constant Variance

Checking these generally involves PLOTS of the residuals and predicted values.

Residual = observed – predicted: $e_i = Y_i - \hat{Y}_i$

Page 76: Regression Diagnostics (2)

Key Point: Most assumption checks may be done visually by looking at various plots. They may also be done using statistical tests. Looking at plots is generally easier!

So the general approach is to check the plots, and if you still have questions then perhaps consider the statistical tests.

Page 77: Checking Normality

Histogram or Box-plot of Residuals: Is the histogram bell-shaped? Is the box-plot symmetric?

Normal Probability / QQ Plot: This is the method we would normally use. Ordered residuals are plotted against cumulative normal probabilities, and the result should be approximately linear. (PROC UNIVARIATE: QQPLOT statement)

Shapiro-Wilk or Kolmogorov-Smirnov Test

Page 78: SAS Code: QQPlot

proc univariate data=fit noprint;
   var res;
   title 'Normal Probability Plot';
   qqplot res / normal(l=1 mu=est sigma=est);   /* overlay a reference line using the estimated mean and SD */
run;

Page 79: Constancy of Variance

Plot the residuals against fitted (predicted) values

Check to see if size of residual is somehow associated with predicted value.

Megaphone shapes are indicative of a violation.

Bartlett’s or Levene’s Test: statistical tests of this kind are generally sensitive to violations of normality and cannot be used if the normality assumption is not met.

Page 80: Plot: Residuals vs. Fitted Values

proc gplot data=fit;
   plot res*pred / vref=0;   /* vref=0 draws a horizontal reference line at zero */
run;
quit;

Page 81: Checking Independence

This is the hardest assumption to check.

One check on this assumption is to simply think of how the data are collected. Ask the question: Is there anything in the collection of data that could lead to dependent responses?

Plot the residuals over time (if applicable). Is there a “drift” or other pattern as trials proceed?

Durbin-Watson Test
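
As a sketch, the Durbin-Watson statistic can be requested in PROC REG with the DW option on the MODEL statement (shown here on the same sat regression used earlier):

proc reg data=sat;
   model score=expend / dw;   /* prints the Durbin-Watson statistic */
run;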

Page 82: Other Issues

Linearity Assumption: A nonlinear pattern in the residuals vs. predicted values plot suggests that we need to revise our assumption of a linear parametric relationship between X and Y.

Outliers: These will show up in rather obvious ways on the various plots.

Page 83: When the assumptions are violated...

Discarding data is almost always the wrong thing to do. Some things you can do are:

Consider transformations of the data.

Transformations of the response variable [e.g. Log(Y)] often help with normality and/or constancy of variance issues.

Transformations of the predictor variable(s) may solve nonlinearity issues

Lastly, we may consider other more complex models.

Page 84: When outliers are present...

Some formal tests exist to classify outliers (we’ll talk about them later)

Investigate – don’t eliminate. Lacking a very good reason (e.g. the experimenter made an error in recording the data), you should never throw an outlier away.

One good thing to do is to try to figure out how much effect the outlier has on your various estimates (we’ll also learn how to do this later)

Page 85: Collaborative Learning Activity

Please discuss problem #2.6 from the handout.

Page 86: Questions?

Page 87: Upcoming in Topic 3...

Multiple Regression Analysis

Related Reading: Chapter 8