Health Program Effect Evaluation Questions and Data
Collection Methods
CHSC 433
Module 5/Chapter 9
L. Michele Issel, PhD
UIC School of Public Health
Objectives
1. Develop appropriate effect evaluation questions
2. List pros and cons for various data collection methods
3. Distinguish between types of variables
Involve Evaluation Users so they can:
- Judge the utility of the design
- Know strengths and weaknesses of the evaluation
- Identify differences in criteria for judging evaluation quality
- Learn about methods
- Have debated the methods BEFORE having data
Terminology
The following terms are used in reference to basically the same set of activities and for the same purpose:
- Impact evaluation
- Outcome evaluation
- Effectiveness evaluation
- Summative evaluation
Differences between Research - Evaluation
Nature of problem addressed: new knowledge vs. assessing outcomes
Goal of the research: new knowledge for prediction vs. social accounting
Guiding theory: theory for hypothesis testing vs. theory for the problem
Appropriate techniques: sampling, statistics, hypothesis testing, etc. vs. fit with the problem
Characteristic | Research | Evaluation
Goal or purpose | Generate new knowledge for prediction | Social accounting and program or policy decision making
The questions | Scientist's own questions | Derived from program goals and impact objectives
Nature of problem addressed | Areas where knowledge is lacking | Assess impacts and outcomes related to the program
Guiding theory | Theory used as base for hypothesis testing | Theory underlying the program interventions; theory of evaluation
Research-Evaluation Differences
Characteristic | Research | Evaluation
Appropriate techniques | Sampling, statistics, hypothesis testing, etc. | Whichever research techniques fit with the problem
Setting | Anywhere that is appropriate to the question | Usually wherever the program recipients and non-recipient controls can be accessed
Dissemination | Scientific journals | Internally and externally viewed program reports; scientific journals
Allegiance | Scientific community | Funding source, policy preference, scientific community
Evaluation Questions…
What questions do the stakeholders want answered by the evaluation?
Do the questions link to the impact and outcome objectives?
Do the questions link to the effect theory?
From Effect Theory to Effect Evaluation
Consider the effect theory as source of variables
Consider the effect theory as guidance on design
Consider the effect theory as informing the timing of data collection
[Effect theory diagram]
- Causative theory: antecedent independent variables (xa, xb, ...) lead to determinant independent variables plus intervening variables
- Intervention theory: interventions (x1, x2, ...) plus asset variables (xa, xb, ...)
- Impact theory: leads to the dependent impact variables (Y1)
- Outcome theory: leads to the dependent outcome variables (Y1)
- Contributing variables (xa, xb): often confounding, moderating, or mediating
From Effect Theory to Variables
The next slide is an example of using the effect theory components to identify possible variables on which to collect evaluation data.
[Example: effect theory variables for a prenatal nutrition program]
- Interventions and groups: x0 = control group; x1 = prenatal vitamin group; x2 = nutrition education group; x3 = vitamins and education
- Impact theory, dependent variable (Y): prenatal anemia (hematocrit)
- Outcome theory, dependent variable (Y): newborn weight
- Causative theory, determinant independent variables: xa = dietary habits; xb = dietary knowledge; xc = iron intake; xd, xe, xf = parity, age, income
- Antecedent independent variable: xa = knowledge
- Contributing variables: none measured
Impact vs Outcome Evaluations
Impact evaluation is more realistic because it focuses on the immediate effects, and participants are probably more accessible.
Outcome evaluation is more policy-oriented, longitudinal, and population-based, and therefore more difficult and costly. Causality (the conceptual hypothesis) is also fuzzier.
Effect Evaluation
Draws upon and uses what is known about how to conduct rigorous research:
Design -- the overall plan, such as experimental, quasi-experimental, longitudinal, qualitative
Method -- how the data are collected, such as telephone survey, interview, observation
Methods --> Data Sources
- Observation --> logs, video
- Record review --> client records, patient charts
- Survey --> participants/non-participants, family
- Interview --> participants/non-participants
- Existing records --> birth & death certificates, police reports
Comparison of Data Collection Methods
Characteristics of each method to be considered when choosing a method:
1. Cost
2. Amount of training required for data collectors
3. Completion time
4. Response rate
Validity and Reliability
Method must use valid indicators/measures
Method must use reliable processes for data collection
Method must use reliable measures
Variables, Indicators, Measures
Variable is the "thing" of interest; the term is also used for how that thing gets measured
Some agencies use “indicator” to mean the number that indicates how well the program is doing
Measure is the way that the variable is known
It’s all just language…. Stay focused on what is needed.
Levels of Measurement
Level | Examples | Advantage | Disadvantage
Nominal, Categorical | Zip code, race, yes/no | Easy to understand | Limited information from the data
Ordinal, Rank | Social class, Likert scale, "top ten" list (worst to best) | Easy to understand | Limited information from the data
Interval, Ratio (continuous) | Temperature, IQ, distances, dollars, inches, dates of birth | Gives the most information; can collapse into nominal or ordinal categories; used as a continuous variable | Can be difficult to construct a valid and reliable interval variable
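The table's last row notes that interval/ratio data can be collapsed into ordinal or nominal categories. A minimal sketch of that collapse (the cut points are illustrative, not clinical or policy standards):

```python
def to_ordinal(value, cutpoints, labels):
    """Map a continuous (interval/ratio) value to an ordered category.

    len(labels) must be len(cutpoints) + 1; categories are
    [-inf, c1), [c1, c2), ..., [cN, +inf).
    """
    for cut, label in zip(cutpoints, labels):
        if value < cut:
            return label
    return labels[-1]

# Collapse annual income (ratio level) into an ordinal variable.
incomes = [12_000, 38_000, 95_000]
print([to_ordinal(x, [25_000, 75_000], ["low", "middle", "high"])
       for x in incomes])
# ['low', 'middle', 'high']
```

Note that the collapse is one-way: the ordinal categories cannot be turned back into the original dollar amounts, which is why the continuous level "gives the most information."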
Types of Effects as documented through Indicators
- Indicators of physical change
- Indicators of knowledge change
- Indicators of psychological change
- Indicators of behavioral change
- Indicators of resources change
- Indicators of social change
Advice
It is more productive to focus on a few relevant variables than to go on a wide-ranging fishing expedition.
Carol Weiss (1972)
Variables
Intervening variable: any variable that forms a link between the independent and dependent variables, and without which the independent variable is not related to the dependent variable (outcome).
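That defining property, that the independent variable affects the outcome only through the intervening variable, can be shown with a toy simulation. All variable names and effect sizes below are made up for illustration.

```python
import random
random.seed(0)

# Toy chain: education (X) -> knowledge (M, intervening) -> behavior (Y).
# Y depends on X only through M.
def knowledge(x):
    return 2.0 * x + random.gauss(0, 0.1)     # M is caused by X

def behavior(m):
    return 1.5 * m + random.gauss(0, 0.1)     # Y is caused only by M

def mean(vals):
    return sum(vals) / len(vals)

xs = [0, 1] * 500
ys = [behavior(knowledge(x)) for x in xs]
through_m = mean([y for x, y in zip(xs, ys) if x == 1]) - \
            mean([y for x, y in zip(xs, ys) if x == 0])
# X is strongly related to Y (difference near 2.0 * 1.5 = 3.0)...

ys_fixed = [behavior(1.0) for _ in xs]        # ...but hold M fixed...
fixed_m = mean([y for x, y in zip(xs, ys_fixed) if x == 1]) - \
          mean([y for x, y in zip(xs, ys_fixed) if x == 0])
# ...and the X-Y relationship disappears (difference near 0).
```

Holding M constant removes the X-Y association entirely, which is exactly the "without which the independent variable is not related to the dependent variable" clause in the definition.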
Variables
Confounding variable: an extraneous variable that accounts for all or part of the effects on the dependent variable (outcome), masking the true underlying associations.
A confounder must be associated with both the dependent variable AND the independent variable.
Confounders
Exogenous (outside of individuals) confounding factors are uncontrollable (selection bias, coverage bias).
Endogenous (within individuals) confounding factors are equally important: secular drift in attitudes/knowledge, maturation (children or elderly), seasonality, interfering events that alter individuals.
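A toy simulation shows why confounders matter: here seasonality (used only as an example) drives both program exposure and the outcome, so a crude comparison suggests a program effect even though exposure has no true effect at all.

```python
import random
random.seed(1)

# Season (Z) drives both program attendance (X) and the outcome (Y);
# X itself has NO effect on Y in this toy model.
data = []
for _ in range(4000):
    z = random.random() < 0.5                      # confounder (e.g. season)
    x = random.random() < (0.8 if z else 0.2)      # Z raises exposure odds
    y = (10.0 if z else 5.0) + random.gauss(0, 1)  # only Z raises the outcome
    data.append((z, x, y))

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# Crude comparison: looks like a large program effect (difference near 3)...
crude = mean_y([r for r in data if r[1]]) - \
        mean_y([r for r in data if not r[1]])
# ...but within one level of Z the apparent effect shrinks toward 0.
within = mean_y([r for r in data if r[0] and r[1]]) - \
         mean_y([r for r in data if r[0] and not r[1]])
```

Stratifying on the confounder (comparing exposed and unexposed within one season) is the simplest way to see whether an apparent program effect survives, which is why the confounder must be measured in the first place.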
Variable story…
To get from Austin to San Antonio, there is one highway. Between Austin and San Antonio there is one town, San Marcos.
San Marcos is the intervening variable because it is not possible to get to San Antonio from Austin without going through San Marcos.
The highway is often congested, with construction and heavy traffic. The highway conditions are the confounding variable because they are associated with both the trip (my car, my state of mind) and with arriving (alive) in San Antonio.
Measure Program Impact Across the Pyramid
- Direct Health Care Services
- Enabling Services
- Population-Based Services
- Infrastructure Services