Practical Evaluation: CDC’s Framework for Program Evaluation in Public Health
Thomas J. Chapel, MA, MBA
Centers for Disease Control and Prevention
Office of Workforce and Career Development
[email protected]
Why We Evaluate…
“...The gods condemned Sisyphus to endlessly roll a rock up a hill, whence it would return each time to its starting place. They thought, with some reason, there was no punishment more severe than eternally futile labor....”
The Myth of Sisyphus
Objectives
CDC Framework steps and standards
Central importance of early steps
How the Framework approach helps ensure use and avoid roadblocks
Defining Evaluation
Evaluation is the systematic investigation of the merit, worth, or significance of any “object”
Michael Scriven
Program is any organized public health action/activity implemented to achieve some result
These must be integrated…
Continuous Quality Improvement (CQI) cycle:
Planning—What actions will best reach our goals and objectives?
Performance measurement—How are we doing?
Evaluation—Why are we doing well or poorly?
(Cycle diagram: What do we do? → How do we do it? → How are we doing? → Why are we doing well or poorly?)
Enter the CDC Evaluation Framework
Good evaluation = use of findings
Focus is situation-specific
Early steps key to best focus
The Four Standards
There is no one “right” evaluation. Instead, the best choice at each step is the option that maximizes:
Utility: Who needs the info from this evaluation and what info do they need?
Feasibility: How much money, time, and effort can we put into this?
Propriety: Who needs to be involved in the evaluation to be ethical?
Accuracy: What design will lead to accurate information?
Good results from Steps 4–6 are more likely because we did a good job on Steps 1–3!
You Don’t Ever Need a Logic Model, But You Always Need a Program Description
Don’t jump into planning or eval without clarity on:
The big “need” your program is to address
The key target group(s) who need to take action
The kinds of actions they need to take (your intended outcomes or objectives)
Activities needed to meet those outcomes
“Causal” relationships between activities and outcomes
Logic Models and Program Description
Logic Models: Graphic depictions of the relationship between your program’s activities and its intended effects
Step 2: Describing the Program: Complete Logic Model
(Logic model diagram: Inputs → Activities → Outputs → Short-term, Intermediate, and Long-term Effects/Outcomes, with Context, Assumptions, and Stage of Development noted below)
Activities: What the program and its staff actually do
Outcomes: Results of activities: Who/what will change?
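Though the framework is conceptual rather than computational, the logic model components above can be sketched as a simple data structure. This is a hypothetical Python illustration, not part of the CDC framework; the example entries borrow the deck's later gardening metaphor:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal sketch of the logic model components described above."""
    inputs: list[str] = field(default_factory=list)       # resource "platform" for the program
    activities: list[str] = field(default_factory=list)   # what the program and its staff actually do
    outputs: list[str] = field(default_factory=list)      # tangible products of activities
    short_term_outcomes: list[str] = field(default_factory=list)
    intermediate_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)
    context: list[str] = field(default_factory=list)      # moderators that help or hinder outcomes

# Hypothetical example: a community gardening program
model = LogicModel(
    inputs=["staff", "funding", "seeds"],
    activities=["planting", "watering", "weeding"],
    outputs=["plots maintained"],
    short_term_outcomes=["blossoms set"],
    long_term_outcomes=["tomato harvest"],
)
print(len(model.activities))  # 3
```

Writing a program description down this explicitly is one way to check that every intended outcome traces back to some activity before evaluation begins.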
Linking Planning, Evaluation and Performance Measurement
(Diagram aligning three layers: Plan—Goals, Objectives, Actions/Tactics; Eval—Activities, ST or MT Outcomes, LT Outcomes or Impacts; PM—Process Measures, Impl. Measures, Progress Measures, Outcome Measures, Impact Measures, Key Performance Indicators, Success Factors)
(Logic model diagram)
Inputs: Resource “platform” for the program
Outputs: Tangible products of activities
(Logic model diagram)
Moderators: Contextual factors that will facilitate or hinder getting our outcomes
Forgetting Intermediate Outcomes
Good evaluation and performance measurement require a broad(er) focus:
Not just: Did it work?
But also: Is it working?
How many tomatoes did I get?
Are planting, watering, and weeding taking place?
Are there nematodes on the plants?
Have the blossoms “set”?
Stakeholders—Who is…
Affected by the program?
Involved in program operations?
Intended user(s) of evaluation findings?
Lessons Learned—Stakeholder Engagement
“All the animals in the barnyard are equal…only some are more equal than others.”
George Orwell, Animal Farm
Which Matter Most?
Who is:
Affected by the program?
Involved in program operations?
Intended users of evaluation findings?
Of these, who do we most need to:
Enhance credibility?
Implement program changes?
Advocate for changes?
Fund, authorize, or expand the program?
Evaluation Can Be About Anything
Evaluation can focus on any/all parts of the logic model
Evaluation questions can pertain to:
Boxes: did this component occur as expected?
Arrows: what was the relationship between components?
Process Evaluation
(Logic model diagram, highlighting inputs, activities, and outputs)
Did we get the inputs we needed/were promised?
Were activities and outputs implemented as intended? How much? Who received them?
Outcome Evaluation
(Logic model diagram, highlighting outcomes)
Which outcomes occurred? How much outcome occurred?
Efficiency Evaluation
(Logic model diagram)
(How) was implementation quality related to inputs?
Causal Attribution
(Logic model diagram, highlighting the arrows linking activities and outputs to outcomes)
Did outcomes occur because of our activities and outputs?
“Utility” Standard
Purpose: Toward what end is the measurement being conducted?
User: Who wants the info and what are they interested in?
Use: How will they use the info?
“Feasibility” Standard
Acts as a “reality check”:
Stage of Development: How long has the program been in existence?
Program Intensity: How intense is the program? How much impact is reasonable to expect?
Resources: How much time, money, and expertise are available?
Utility—(Some) Potential Uses
Show accountability
Test/improve program effectiveness
Test/improve program implementation
“Continuous” program improvement
Increase the knowledge base
Other…
Appropriate Rigor
Don’t make the “perfect” the enemy of the “good”
Dr. Bill Foege, former CDC Director
Measuring the Right Thing…
“…Sometimes, what counts can’t be counted. And what can be counted doesn’t count….”
Albert Einstein
You Get What You Measure…
“…In Poland in the 1970s, furniture factories were rewarded based on pounds of product shipped. As a result, today Poles have the world’s heaviest furniture…”
(New York Times, 3/4/99)
Helpful Publications @ www.cdc.gov/eval
Helpful Resources
Intro to Program Evaluation for PH Programs—A Self-Study Guide: http://www.cdc.gov/eval/whatsnew.htm
Logic Model Sites:
Innovation Network: http://www.innonet.org/
W.K. Kellogg Foundation Evaluation Resources: http://www.wkkf.org/programming/overview.aspx?CID=281
University of Wisconsin-Extension: http://www.uwex.edu/ces/lmcourse/
Texts:
Rogers et al. Program Theory in Evaluation. New Directions Series: Jossey-Bass, Fall 2000
Chen, H. Theory-Driven Evaluations. Sage, 1990
Community Tool Box: http://ctb.ku.edu