Program Evaluation
The use of scientific methods to judge and improve the planning, monitoring,
effectiveness, and efficiency of health, nutrition, and other human service programs
Why Evaluate a Program?
• See Table 10-1, page 309, Boyle and Morris
Types of Program Evaluation
• Process evaluation
• Impact or outcome evaluation
• Fiscal or efficiency evaluation
Process Evaluation
• Evaluate process objectives
• Provides information on why the program may or may not have reached its outcome objectives
• If the program is delivered at a variety of sites, provides information on why some sites may have been more successful than others
Six Steps for Program Evaluation
• 1. Determine objectives of program
• Evaluate for:
• a. Appropriateness of objectives
• b. Effectiveness in meeting objectives
• c. Efficiency of program
• d. Side effects of program
Steps in Program Evaluation
• 2. Determine characteristics to be measured
• Measurements should be:
• Valid
• Reliable
• Precise
Steps in Program Evaluation
• 3. Measure characteristics
Steps in Program Evaluation
• 4. Make comparisons
• May use:
• Control groups
• Similar groups
• Standards
• Pre vs post measurements
Steps in Program Evaluation
• 5. Draw conclusions
• 6. Make recommendations
Common Biases Introduced During Evaluations
• Selection
• Testing
• History
• Maturation
• Halo effect
Evaluation Design
• 1. Experimental design
• 2. Quasi-experimental design
• 3. Non-experimental design
Steps for experimental design
• 1. Experimental and control groups randomly assigned
• 2. Each group measured
• 3. Intervention or program provided
• 4. Groups measured again--if the experimental group improved more than the control group, the program was successful (see the sketch below)
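A minimal sketch in Python of the step-4 comparison, using hypothetical mean scores; the program effect is estimated as the change in the experimental group minus the change in the control group:

# Hypothetical mean scores before and after the program
pre_experimental, post_experimental = 62.0, 75.0   # group that received the program
pre_control, post_control = 61.0, 64.0             # group that did not

change_experimental = post_experimental - pre_experimental   # 13.0
change_control = post_control - pre_control                  # 3.0

# If the experimental group improved more than the control group,
# the program is judged successful on this measure.
program_effect = change_experimental - change_control        # 10.0
print(f"Estimated program effect: {program_effect:.1f} points")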
Examples of Designs of True Experiments
• Pre-test post-test control group design (R = random assignment, O = observation or measurement, X = program or intervention)
• R O X O
• R O O
Examples of Designs of True Experiments
• After only control group
• R X O
• R O
Examples of Experimental Design
• Solomon 4 group
• R O X O
• R O O
• R X O
• R O
Quasi-experimental design
• Steps similar to experimental design, but rigid experimental control is not maintained
• Random selection may not be done
• Subjects may be volunteers
• Nonequivalent control groups may be used
Nonexperimental design
• Random selection not used
• No control group, or a nonequivalent control group, is used
Examples of Non-experimental design
• After only or one-shot case study
• X O
• Nonequivalent control group study
• X O
• O
• Pre-test post-test design
• O X O
Fiscal or Efficiency Evaluations
• Cost-benefit analysis
• Cost-effectiveness analysis
Cost benefit analysis
• Decision-making framework used in allocating resources among competing uses
• Both costs and benefits are expressed in dollars
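As a minimal sketch in Python (all dollar figures hypothetical), the core comparison in a cost-benefit analysis is the net benefit, total benefits minus total costs, with both sides expressed in dollars:

# Hypothetical annual figures for a program, all in dollars
costs = {"direct": 50_000, "indirect": 20_000}        # cash outlays plus other costs
benefits = {"direct": 65_000, "indirect": 30_000}     # costs averted by the program

net_benefit = sum(benefits.values()) - sum(costs.values())          # 25,000
benefit_cost_ratio = sum(benefits.values()) / sum(costs.values())   # about 1.36
print(net_benefit, round(benefit_cost_ratio, 2))

A positive net benefit (or a benefit-cost ratio above 1) favors the program over the competing use of the resources.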
Costs
• Direct costs--cash expenditures
• Indirect Costs
– All other costs such as
– Spillover effects
– Costs to client
– Costs to organization not covered by program
• Opportunity costs
• Intangible costs--grief, suffering, pain
Benefits
• All costs that would be avoided if the program were in effect
• Direct benefits--value of the resources the program saves
• Negative benefits
• Indirect benefits--other costs averted
• Intangible benefits--happiness, bonding from breastfeeding
Discount rate
• Applied because benefits are often deferred; future benefits and costs are converted to their present value
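A minimal sketch in Python of discounting, assuming a hypothetical 5% annual discount rate; a benefit received in the future is worth less than the same dollar amount today, so deferred benefits are converted to present value before being compared with costs:

# Present value of a benefit received t years in the future
discount_rate = 0.05        # hypothetical 5% annual rate
future_benefit = 10_000     # dollars received 10 years from now
years = 10

present_value = future_benefit / (1 + discount_rate) ** years
print(round(present_value, 2))   # about 6,139 dollars in today's terms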
Cost-Benefit Analysis of Attending School
• Direct costs
• Indirect costs
• Intangible costs
• Direct benefits
• Indirect benefits
• Intangible benefits
• Discount rates
Cost effectiveness analysis
• Determines the most efficient way of meeting a predetermined set of objectives
• Costs measured in dollars
• Effectiveness measured by outcomes, e.g., lives saved, increase in birth weight, etc.
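A minimal sketch in Python (hypothetical figures): cost-effectiveness is reported as cost per unit of outcome, so competing programs with the same objective can be ranked without putting a dollar value on the outcome itself:

# Hypothetical comparison of two programs with the same objective
programs = {
    "Program A": {"cost": 120_000, "lives_saved": 8},
    "Program B": {"cost": 150_000, "lives_saved": 12},
}

for name, p in programs.items():
    ratio = p["cost"] / p["lives_saved"]   # dollars per life saved
    print(f"{name}: ${ratio:,.0f} per life saved")

# Program B is more cost-effective: about $12,500 vs $15,000 per life saved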
Communicating Evaluation Results
• See pages 322-326