Ch.8: Nonlinear Regression Functions
and 11.1: Linear Probability Model
Econ 141 Spring 2014
Lecture: April 14, 2014
Bart Hobijn
4/14/2014 Econ 141, Spring 2014 1
The views expressed in these lecture notes are solely those of the instructor and do not necessarily
reflect those of UC Berkeley or other institutions with which he is affiliated.
Threats to internal validity
Internal invalidity, $E(u_i \mid X_{1i},\ldots,X_{ki}) \neq 0$, yields biased and inconsistent estimates.
Sources of bias, solutions, and where they are covered:
• Omitted variable bias: include control variables in the regression (Ch. 7); fixed effects in panel data (Ch. 10).
• Misspecification of functional form: choose a functional form that better fits the data (Ch. 8).
• Errors-in-variables bias: get more precisely measured data (data source); instrumental variables (Ch. 12).
• Sample selection bias: Tobit model; Heckman correction, etc. (beyond the scope of this class).
• Simultaneous causality bias: instrumental variables (Ch. 12); set up an experiment (Ch. 13).
Outline of lecture
• Population regression equation as a Taylor approximation
  – Polynomials (S&W page 263)
  – Interactions between explanatory variables (S&W page 274)
• Binary variables in the Taylor approximation context
  – Dummy variables
  – Interactions that include dummy variables
• Using a binary variable as dependent variable
• Levels versus logarithms
A general framework
Consider a general relationship between the dependent variable, $Y_i$, and the explanatory variables, $X_{ji}$, where $j = 1,\ldots,k$ and $i = 1,\ldots,n$:

$$Y_i = f(X_{1i},\ldots,X_{ki}) + u_i$$

where we assume that the function $f(\cdot)$ is well behaved (continuously differentiable).
Linear regression model as a first-order Taylor approximation
The first-order Taylor approximation of this expression around the sample mean gives

$$
\begin{aligned}
Y_i &\approx f(\bar{X}_1,\ldots,\bar{X}_k) + \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,(X_{ji}-\bar{X}_j) + u_i \\
&= \Bigl[ f(\bar{X}_1,\ldots,\bar{X}_k) - \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,\bar{X}_j \Bigr]
+ \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,X_{ji} + u_i \\
&= \beta_0 + \sum_{j=1}^{k}\beta_j X_{ji} + u_i
\end{aligned}
$$

$\beta_j$ measures the marginal effect of $X_{ji}$ on $Y_i$.
When does this work?
The first-order Taylor approximation is accurate when the second-order derivatives

$$\frac{\partial^2 f}{\partial X_j\,\partial X_l}(\bar{X}_1,\ldots,\bar{X}_k)$$

are relatively small compared to the range over which $X_{ji}-\bar{X}_j$ is evaluated.
If not, a higher-order approximation is necessary.
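This accuracy condition is easy to illustrate numerically. A minimal sketch (my own example, not from the lecture) with the convex function $f(x) = e^x$, whose second derivative is large away from the expansion point:

```python
import math

# First-order Taylor approximation of f around x_bar:
#   f(x) ~ f(x_bar) + f'(x_bar) * (x - x_bar)
def taylor1(f, fprime, x_bar, x):
    return f(x_bar) + fprime(x_bar) * (x - x_bar)

f = math.exp   # f'' = f, so curvature matters away from x_bar
fp = math.exp
x_bar = 0.0

near = taylor1(f, fp, x_bar, 0.1) - f(0.1)  # evaluated close to x_bar
far = taylor1(f, fp, x_bar, 2.0) - f(2.0)   # evaluated far from x_bar

print(abs(near), abs(far))  # the error grows with distance from x_bar
```

The error is negligible near the expansion point but large two units away, which is exactly when the higher-order terms discussed above become necessary.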
Example: A Mincer regression
Consider the relationship between the log of a worker's hourly wage, $w_i$, and his or her potential work experience, $p_i$, defined as age minus years of education minus 6.
Let's run a linear regression of the type

$$w_i = \beta_0 + \beta_1 p_i + u_i$$
Example: A Mincer regression

. regress lnhrwage exper
Source | SS df MS Number of obs = 3854
-------------+------------------------------ F( 1, 3852) = 13.16
Model | 6.40584203 1 6.40584203 Prob > F = 0.0003
Residual | 1875.08007 3852 .486780912 R-squared = 0.0034
-------------+------------------------------ Adj R-squared = 0.0031
Total | 1881.48592 3853 .488317134 Root MSE = .6977
------------------------------------------------------------------------------
lnhrwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0035066 .0009666 3.63 0.000 .0016114 .0054017
_cons | 2.91992 .0273969 106.58 0.000 2.866206 2.973634
------------------------------------------------------------------------------
Though the coefficient on experience is significant, a plot of the fitted regression line reveals non-linearity.
Example: A Mincer regression
[Figure: scatter of the log hourly wage (1 to 5) against years of potential experience (0 to 60), with the fitted linear regression line.]
Example: second-order approximation

$$
\begin{aligned}
Y_i \approx{}& f(\bar{X}_1,\ldots,\bar{X}_k)
+ \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,(X_{ji}-\bar{X}_j) \\
&+ \frac{1}{2}\sum_{j=1}^{k}\sum_{l=1}^{k}\frac{\partial^2 f}{\partial X_j\,\partial X_l}(\bar{X}_1,\ldots,\bar{X}_k)\,(X_{ji}-\bar{X}_j)(X_{li}-\bar{X}_l) + u_i \\
={}& \Bigl[ f(\bar{X}_1,\ldots,\bar{X}_k) - \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,\bar{X}_j
+ \frac{1}{2}\sum_{j=1}^{k}\sum_{l=1}^{k}\frac{\partial^2 f}{\partial X_j\,\partial X_l}(\bar{X}_1,\ldots,\bar{X}_k)\,\bar{X}_j\bar{X}_l \Bigr] \\
&+ \sum_{j=1}^{k}\Bigl[ \frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)
- \sum_{l=1}^{k}\frac{\partial^2 f}{\partial X_j\,\partial X_l}(\bar{X}_1,\ldots,\bar{X}_k)\,\bar{X}_l \Bigr] X_{ji} \\
&+ \frac{1}{2}\sum_{j=1}^{k}\sum_{l=1}^{k}\frac{\partial^2 f}{\partial X_j\,\partial X_l}(\bar{X}_1,\ldots,\bar{X}_k)\,X_{ji}X_{li} + u_i \\
={}& \beta_0 + \sum_{j=1}^{k}\beta_j X_{ji} + \sum_{j=1}^{k}\sum_{l=1}^{k}\gamma_{jl}\,X_{ji}X_{li} + u_i
\end{aligned}
$$

Interactions between explanatory variables naturally occur in higher-order multivariate approximations.
The marginal effect of $X_{ji}$ on $Y_i$ depends on the interaction terms with the other variables.
Example: A Mincer regression
The relationship between the log of a worker's hourly wage, $w_i$, and his or her potential work experience, $p_i$, is not approximately linear over the range of $p_i$ observed in the data.
Let's add a second-order term to the regression:

$$w_i = \beta_0 + \beta_1 p_i + \beta_2 p_i^2 + u_i$$
Example: A Mincer regression

. regress lnhrwage exper exper2
Source | SS df MS Number of obs = 3854
-------------+------------------------------ F( 2, 3851) = 72.20
Model | 68.0004527 2 34.0002264 Prob > F = 0.0000
Residual | 1813.48546 3851 .47091287 R-squared = 0.0361
-------------+------------------------------ Adj R-squared = 0.0356
Total | 1881.48592 3853 .488317134 Root MSE = .68623
------------------------------------------------------------------------------
lnhrwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0504624 .0042144 11.97 0.000 .0421998 .058725
exper2 | -.000834 .0000729 -11.44 0.000 -.000977 -.0006911
_cons | 2.376179 .0546489 43.48 0.000 2.269036 2.483323
------------------------------------------------------------------------------
The second-order term is highly significant.
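The quadratic specification implies a marginal effect that varies with experience, $\partial w_i/\partial p_i = \beta_1 + 2\beta_2 p_i$. A small sketch (not in the original slides) that plugs in the point estimates from the output above:

```python
# Point estimates from the quadratic regression of lnhrwage on exper and exper2
b1 = 0.0504624   # coefficient on exper
b2 = -0.000834   # coefficient on exper2

def marginal_effect(p):
    # d(lnhrwage)/d(exper) evaluated at p years of experience
    return b1 + 2 * b2 * p

peak = -b1 / (2 * b2)  # experience level at which the profile flattens out

print(round(marginal_effect(0), 4))   # steep at the start of a career
print(round(marginal_effect(20), 4))  # much flatter after 20 years
print(round(peak, 1))                 # profile peaks around 30 years
```

The implied profile is concave: returns to experience are large early on and the fitted log wage peaks at roughly 30 years of experience.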
Example: A Mincer regression
[Figure: scatter of the log hourly wage (1 to 5) against years of potential experience (0 to 60), with the fitted quadratic regression line.]
Example: A Mincer regression
But a worker's hourly wage, $w_i$, does not only depend on potential work experience, $p_i$. It also depends on years of schooling, $s_i$. Let's estimate a second-order polynomial in the two explanatory variables:

$$w_i = \beta_0 + \beta_1 p_i + \beta_2 s_i + \beta_3 p_i^2 + \beta_4 s_i p_i + \beta_5 s_i^2 + u_i$$
Example: A Mincer regression

. regress lnhrwage exper schoolyrs exper2 schoolyrs2 schoolyrsexper
Source | SS df MS Number of obs = 3854
-------------+------------------------------ F( 5, 3848) = 244.13
Model | 453.102378 5 90.6204756 Prob > F = 0.0000
Residual | 1428.38354 3848 .371201543 R-squared = 0.2408
-------------+------------------------------ Adj R-squared = 0.2398
Total | 1881.48592 3853 .488317134 Root MSE = .60926
------------------------------------------------------------------------------
lnhrwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0657975 .0066534 9.89 0.000 .0527529 .078842
schoolyrs | .0831624 .0211874 3.93 0.000 .0416229 .1247019
exper2 | -.0006885 .0000698 -9.87 0.000 -.0008252 -.0005517
schoolyrs2 | .0022569 .0005858 3.85 0.000 .0011084 .0034054
schoolyrse~r | -.0011937 .0002909 -4.10 0.000 -.001764 -.0006234
_cons | .5708724 .2155551 2.65 0.008 .1482593 .9934856
------------------------------------------------------------------------------
Example: A Mincer regression
Following S&W page 264, we test for joint significance of the higher-order terms.
. test (exper2 = 0) (schoolyrs2 = 0) (schoolyrsexper = 0)
( 1) exper2 = 0
( 2) schoolyrs2 = 0
( 3) schoolyrsexper = 0
F( 3, 3848) = 43.71
Prob > F = 0.0000
They are very significant!
Example: A Mincer regression
Use the lincom command to estimate a marginal effect: the return to a year more experience for someone with 20 years of experience and 12 years of education. Evaluate

$$\beta_1 + 2\beta_3 p_i + \beta_4 s_i \quad \text{at } s_i = 12 \text{ and } p_i = 20.$$
. lincom (exper + 2*exper2*20 + schoolyrsexper*12 )
( 1) exper + 40*exper2 + 12*schoolyrsexper = 0
------------------------------------------------------------------------------
lnhrwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
(1) | .0239336 .0017308 13.83 0.000 .0205403 .027327
------------------------------------------------------------------------------
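The lincom point estimate can be reproduced directly from the reported coefficients (a sketch; the standard error cannot, since it requires the full coefficient covariance matrix):

```python
# Point estimates from the regression of lnhrwage on exper, schoolyrs,
# exper2, schoolyrs2, and schoolyrsexper above
b_exper = 0.0657975         # beta_1
b_exper2 = -0.0006885       # beta_3
b_schoolexper = -0.0011937  # beta_4

p, s = 20, 12  # 20 years of experience, 12 years of schooling
effect = b_exper + 2 * b_exper2 * p + b_schoolexper * s
print(round(effect, 4))  # 0.0239, matching the lincom point estimate
```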
Some points related to polynomials
The second-order approximation is only a special case. Higher-order approximations lead to higher-order polynomial regression equations.
• Always include the lower-order terms when including higher-order terms of a polynomial.
• Standard tests can be used to figure out the right order of the polynomial (S&W page 264).
• Though higher-order polynomials lead to better approximations, their marginal effects are harder to interpret and estimation leaves fewer degrees of freedom.
• Beware of the units of measurement of the higher-order coefficients. It is often best to present marginal effects.
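The trade-off in the third bullet can be seen in a small sketch on synthetic data (my own example, not the lecture's wage sample): each additional polynomial order weakly lowers the in-sample sum of squared residuals, whether or not the extra terms capture real curvature.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
# True relationship is quadratic plus noise
y = 1.0 + 0.5 * x - 0.04 * x**2 + rng.normal(0.0, 0.2, x.size)

def ssr(order):
    # Least-squares polynomial fit of a given order; sum of squared residuals
    coefs = np.polyfit(x, y, order)
    resid = y - np.polyval(coefs, x)
    return float(resid @ resid)

ssrs = [ssr(k) for k in (1, 2, 3, 4)]
print([round(s, 3) for s in ssrs])  # weakly decreasing in the order
```

Going from order 1 to 2 buys a large improvement (the true model here is quadratic); orders 3 and 4 typically buy little additional fit yet use up degrees of freedom.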
Dummies in Taylor approximation
Revisit the first-order Taylor approximation underlying the linear regression model

$$
Y_i \approx \Bigl[ f(\bar{X}_1,\ldots,\bar{X}_k) - \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,\bar{X}_j \Bigr]
+ \sum_{j=1}^{k}\frac{\partial f}{\partial X_j}(\bar{X}_1,\ldots,\bar{X}_k)\,X_{ji} + u_i
= \beta_0 + \sum_{j=1}^{k}\beta_j X_{ji} + u_i
$$

Suppose we have two different subgroups. Let's consider two cases:
1. They have a different $f(\bar{X}_1,\ldots,\bar{X}_k)$.
2. They have a different $\partial f/\partial X_j(\bar{X}_1,\ldots,\bar{X}_k)$ for some $j$.

Case 1: A different $f(\bar{X}_1,\ldots,\bar{X}_k)$ means that the intercept is different across groups. Include a dummy to allow the intercept to vary by group. (S&W equation 8.32)

Case 2: A different $\partial f/\partial X_j(\bar{X}_1,\ldots,\bar{X}_k)$ for some $j$ means that the slope coefficient is different across groups. Include an interaction between the dummy and the explanatory variable to allow the slope coefficient to vary by group. However, $\partial f/\partial X_j(\bar{X}_1,\ldots,\bar{X}_k)$ also affects the intercept, so you would also include a dummy to allow the intercept to vary by group. (S&W equation 8.32)

Stock and Watson's equation (8.33), which includes only the interaction between the dummy and the explanatory variable, is not often used. Generally, in that case, the dummy itself is also included to allow for a different constant term. Panel (c) of Figure 8.8 and bullet 3 of Key Concept 8.4 are not often applied; they are nested in panel (b) and bullet 2.
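Case 2 can be sketched on synthetic data (my own example): regressing on a constant, the dummy, the variable, and their interaction recovers a separate intercept and slope for each group.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
d = (rng.random(n) < 0.5).astype(float)  # group dummy (1 for group B)
x = rng.uniform(0.0, 10.0, n)
# Group A: intercept 1.0, slope 0.5.  Group B: intercept 2.0, slope 0.2.
y = 1.0 + 1.0 * d + 0.5 * x - 0.3 * d * x + rng.normal(0.0, 0.1, n)

# Regression with a dummy (intercept shift) and an interaction (slope shift)
X = np.column_stack([np.ones(n), d, x, d * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta

print("group A: intercept %.2f, slope %.2f" % (b0, b2))
print("group B: intercept %.2f, slope %.2f" % (b0 + b1, b2 + b3))
```

The coefficient on the dummy shifts the intercept and the coefficient on the interaction shifts the slope, exactly as in cases 1 and 2 above.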
Example: A Mincer regression
Is the return to education different for women? Let $D_i = 1$ if individual $i$ is female.
Add the appropriate dummy variable and dummy interaction variables to test this hypothesis:

$$w_i = \beta_0 + \beta_1 D_i + \beta_2 p_i + \beta_3 s_i + \beta_4 s_i D_i + \beta_5 p_i^2 + \beta_6 s_i p_i + \beta_7 s_i p_i D_i + \beta_8 s_i^2 + \beta_9 s_i^2 D_i + u_i$$
Example: A Mincer regression

. regress lnhrwage female exper schoolyrs schoolyrsfem exper2 schoolyrs2 schoolyrsfem2
    schoolyrsexper schoolyrsfemexper
Source | SS df MS Number of obs = 3854
-------------+------------------------------ F( 9, 3844) = 148.19
Model | 484.646812 9 53.8496458 Prob > F = 0.0000
Residual | 1396.8391 3844 .363381661 R-squared = 0.2576
-------------+------------------------------ Adj R-squared = 0.2558
Total | 1881.48592 3853 .488317134 Root MSE = .60281
------------------------------------------------------------------------------
lnhrwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
female | -.2583408 .1758799 -1.47 0.142 -.6031675 .086486
exper | .0697711 .0066445 10.50 0.000 .0567441 .0827981
schoolyrs | .0814035 .0231805 3.51 0.000 .0359563 .1268507
schoolyrsfem | .0421913 .0265727 1.59 0.112 -.0099066 .0942893
exper2 | -.000702 .0000694 -10.12 0.000 -.000838 -.0005659
schoolyrs2 | .0026695 .0007058 3.78 0.000 .0012858 .0040533
schoolyrsf~2 | -.0019102 .0010155 -1.88 0.060 -.0039012 .0000808
schoolyrse~r | -.0013209 .0002921 -4.52 0.000 -.0018936 -.0007481
schoolyrsf~r | -.0002773 .0001192 -2.33 0.020 -.000511 -.0000435
_cons | .5316936 .2220393 2.39 0.017 .0963676 .9670197
------------------------------------------------------------------------------
Example: A Mincer regression

$$w_i = \beta_0 + \beta_1 D_i + \beta_2 p_i + \beta_3 s_i + \beta_4 s_i D_i + \beta_5 p_i^2 + \beta_6 s_i p_i + \beta_7 s_i p_i D_i + \beta_8 s_i^2 + \beta_9 s_i^2 D_i + u_i$$

Are the added terms jointly insignificant? That is, can we not reject the null hypothesis that the returns to education are the same for women and men?
. test (schoolyrsfem = 0) (schoolyrsfem2 = 0) (schoolyrsfemexp = 0)
( 1) schoolyrsfem = 0
( 2) schoolyrsfem2 = 0
( 3) schoolyrsfemexper = 0
F( 3, 3844) = 3.72
Prob > F = 0.0110
No: with Prob > F = 0.0110 we reject the null hypothesis at the 5% level, so the returns to education differ between women and men.
Linear probability model
What about having a dummy as the dependent variable, $Y_i$?
• This is called the linear probability model (S&W 11.1).
• OLS gives unbiased and consistent estimates of the parameters in that case.
• The problem is that this often results in fitted values $\hat{Y}_i > 1$ or $\hat{Y}_i < 0$.
• Logit and probit models are popular non-linear alternatives (S&W 11.2-11.5; these use MLE and are not covered in this class).
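The third bullet is easy to demonstrate; a sketch on synthetic data (an assumed setup, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(-3.0, 3.0, n)
# 0/1 outcome whose true success probability follows a logistic curve in x
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(float)

# Linear probability model: OLS of the 0/1 outcome on a constant and x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# The fitted straight line escapes the [0, 1] interval at extreme x
print("fitted values below 0:", int((fitted < 0).sum()))
print("fitted values above 1:", int((fitted > 1).sum()))
```

Because the fitted line is unbounded while a probability is not, observations with extreme $x$ get fitted "probabilities" outside $[0, 1]$; logit and probit avoid this by construction.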
Logs or levels?
Two things to bear in mind when deciding between logs or levels in a regression:
1. The quality of the fit of the regression for the chosen functional form.
2. The interpretation of the regression coefficients:
   – sensitivity to units of measurement
   – level effect versus elasticity (S&W 265-273)
We will use the Mincer regression to discuss these considerations here.
Logs and percentage changes
Another first-order Taylor approximation!

$$\ln x \approx \ln 1 + \left.\frac{d \ln x}{d x}\right|_{x=1}(x-1) = 0 + \frac{x-1}{1} \quad \text{for } x \approx 1$$

Let $x = u/v \approx 1$. Then

$$\ln u - \ln v = \ln\frac{u}{v} \approx \frac{u/v - 1}{1} = \frac{u-v}{v}$$

So differences in logarithms
• approximate percentage changes;
• do not depend on the units of measurement of $u$ and $v$, as long as it is the same for both.
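The approximation is easy to check numerically; a short sketch (my own numbers):

```python
import math

# Log differences approximate proportional changes;
# the approximation is best for small changes
for u, v in [(102.0, 100.0), (105.0, 100.0), (150.0, 100.0)]:
    log_diff = math.log(u) - math.log(v)
    pct = (u - v) / v
    print(f"pct change {pct:.3f} vs log diff {log_diff:.3f}")

# Unit invariance: rescaling both u and v (dollars -> cents) leaves
# the log difference unchanged
assert math.isclose(math.log(105.0) - math.log(100.0),
                    math.log(10500.0) - math.log(10000.0))
```

For a 2% change the two agree to three decimals; for a 50% change the log difference (about 0.41) is visibly smaller than the percentage change.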
Example: A Mincer regression
Consider the level versus the log case here:
• level of hourly wage, $W_i$
• log level of hourly wage, $w_i = \ln W_i$
We can run the regression:
• in levels: $W_i = \beta_0 + \beta_1 p_i + u_i$
• in logs: $w_i = \beta_0 + \beta_1 p_i + u_i$
Why would we choose to run it in logs?
Level regression yields dubious fit
[Figure: scatter of the level of the hourly wage (0 to 200) against years of potential experience (0 to 60), with the fitted linear regression line.]
Parameter estimate hard to interpret
. regress wage exper
Source | SS df MS Number of obs = 3854
-------------+------------------------------ F( 1, 3852) = 15.73
Model | 9951.00275 1 9951.00275 Prob > F = 0.0001
Residual | 2436801.34 3852 632.606786 R-squared = 0.0041
-------------+------------------------------ Adj R-squared = 0.0038
Total | 2446752.34 3853 635.025264 Root MSE = 25.152
------------------------------------------------------------------------------
wage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .1382063 .0348467 3.97 0.000 .0698866 .206526
_cons | 23.052 .9876471 23.34 0.000 21.11564 24.98836
------------------------------------------------------------------------------
What does this 0.13 mean?
It depends on the units of measurement of the
hourly wage.
Is this additional dollars or cents earned per hour
per year of experience?
For what year is this data? A dollar in 2000 is a
different thing from a dollar in 2014 (inflation)
Units of measurement affect levels of variables
Suppose we measure $W_i$ not in dollars but in cents, such that $W_i^* = 100\,W_i$. Then the regression equation in levels goes from

$$W_i = \beta_0 + \beta_1 p_i + u_i$$

to

$$W_i^* = \beta_0^* + \beta_1^* p_i + u_i^*$$

where $\beta_0^* = 100\,\beta_0$, $\beta_1^* = 100\,\beta_1$, and $u_i^* = 100\,u_i$.

The magnitude of the slope coefficients and the residuals thus depends on the units of measurement of the dependent (or explanatory) variables when they are included in levels.
Logs not subject to units of measurement issue
Suppose we measure $W_i$ not in dollars but in cents, such that

$$w_i^* = \ln W_i^* = \ln(100\,W_i) = \ln 100 + \ln W_i = \ln 100 + w_i$$

Then the regression equation in logs goes from

$$w_i = \beta_0 + \beta_1 p_i + u_i$$

to

$$w_i^* = \beta_0^* + \beta_1 p_i + u_i$$

where $\beta_0^* = \ln 100 + \beta_0$.

Thus, the slope coefficient and the residuals are not affected by units of measurement when the log rather than the level is included.
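The last two slides can be verified numerically; a sketch on synthetic wage data (an assumed data-generating process, not the lecture's sample): scaling the dependent variable by 100 scales the level-regression slope by 100, while in logs only the intercept shifts, by $\ln 100$.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 300
p = rng.uniform(0.0, 40.0, n)                        # "experience"
W = np.exp(2.5 + 0.02 * p + rng.normal(0.0, 0.3, n))  # wage in dollars

def ols(y, x):
    # Simple regression of y on a constant and x; returns (intercept, slope)
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[0], b[1]

a_d, b_d = ols(W, p)                    # levels, dollars
a_c, b_c = ols(100.0 * W, p)            # levels, cents: slope scales by 100
la_d, lb_d = ols(np.log(W), p)          # logs, dollars
la_c, lb_c = ols(np.log(100.0 * W), p)  # logs, cents: slope unchanged

print(round(b_c / b_d, 6))    # 100.0
print(round(lb_c - lb_d, 6))  # ~ 0
print(round(la_c - la_d, 4))  # ~ ln(100), about 4.6052
```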