
PPA 502 – Program Evaluation

Lecture 1 – Introduction, Evaluability Assessment

Introduction

Program evaluation is the use of social research methods to systematically investigate the effectiveness of social intervention programs.
– Draws on the techniques and concepts of the social science disciplines.
– Intended to be used for improving programs and informing social action aimed at ameliorating social problems.

Introduction

Modern evaluation research grew from pioneering efforts in the 1930s and burgeoned in the post-war years as new methodologies were developed.
– The social policy and public administration movements have contributed to the professionalization of the field and to the sophistication of the consumers of evaluation research.

Introduction

The need for program evaluation is undiminished in the 1990s and may even be expected to grow.
– Contemporary concern over the allocation of scarce resources makes it more essential than ever to evaluate the effectiveness of social interventions.

Introduction

Evaluation must be tailored to the political and organizational context of the program to be evaluated.

Evaluation involves the assessment of one or more program domains:
– The need for the program
– The design of the program
– The program implementation and service delivery
– The program impact or outcomes
– Program efficiency

It requires accurate description of program performance and assessment of that performance against relevant standards or criteria.

Introduction

Program evaluation presents many challenges to the evaluator:
– Changes in circumstances and activities during an evaluation.
– The appropriate balance between science and pragmatism.
– The diversity of perspectives and approaches.

Introduction

Most evaluators are trained as social scientists or social researchers.

Complex evaluations may require specialized staff.

A basic knowledge of evaluation is valuable both to those who conduct research and to those who consume evaluation results.

Tailoring evaluations

Every evaluation must be tailored to the circumstances of the program to yield credible and useful answers to specific questions while still allowing practical implementation.

Tailoring evaluations

Influences on evaluation plans include the purpose of the evaluation:
– Provide feedback for program improvement to program managers and sponsors.
– Establish accountability to decision-makers with responsibility to ensure that the program is effective.
– Contribute to knowledge about some form of social intervention.

Tailoring evaluations

Influences also include the nature of the program structure and circumstances. The evaluation must be responsive to:
– How new or open to change the program is.
– The degree of consensus or conflict among stakeholders about the nature and mission of the program.
– The values and concepts inherent in the program rationale and design.
– The way in which the program is organized and administered.

Tailoring evaluations

Evaluation planning must also accommodate limitations on resources. Resources include:
– Funding;
– Time for completion;
– Pertinent technical expertise;
– Program and stakeholder cooperation;
– Access to important records and program material.

The plan must strike a balance between what is desirable and what is feasible.

Tailoring evaluations

The evaluation design can be structured around three issues:
– The questions the evaluation is to answer;
– The methods and procedures to be used to answer those questions;
– The nature of the evaluator-stakeholder interactions during the course of the evaluation.

Tailoring evaluations

Deciding on the appropriate relationship between the evaluator and the evaluation sponsor, as well as other major stakeholders, is an often neglected but critical aspect of an evaluation plan.
– An independent stance is often expected.
– A participatory or collaborative approach, however, may enhance stakeholders’ skills or political influence.

Tailoring evaluations

Evaluation questions and methods fall into five categories:
– Need for services;
– Program conceptualization and design;
– Program implementation;
– Program outcomes; and
– Program efficiency (a rough illustration follows below).

Evaluation terms corresponding to these categories include needs assessment, process evaluation, and impact assessment.

Much of evaluation planning consists of identifying the evaluation approach corresponding to the type of questions to be answered and tailoring the specifics to the program situation.
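By way of illustration for the efficiency category, evaluators often summarize efficiency as cost per unit of service or per unit of outcome. The sketch below is a minimal, hypothetical computation; the budget and outcome figures are invented for illustration and do not come from the lecture.

```python
# Minimal cost-effectiveness sketch. All figures are hypothetical.
total_program_cost = 250_000.00   # annual program cost (assumed)
clients_served = 400              # service utilization (assumed)
successful_outcomes = 120         # e.g., clients placed in jobs (assumed)

cost_per_client = total_program_cost / clients_served
cost_per_outcome = total_program_cost / successful_outcomes

print(f"Cost per client served:      ${cost_per_client:,.2f}")   # $625.00
print(f"Cost per successful outcome: ${cost_per_outcome:,.2f}")  # $2,083.33
```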

Identifying issues and formulating questions

A critical phase in evaluation planning is the identification and formulation of the questions that the evaluation will address.

These questions focus the evaluation on the areas of program performance most at issue for key stakeholders and guide the design so that it will provide meaningful information about program performance.

Good evaluation questions must identify clear, observable dimensions of program performance that are relevant to the program’s goals and represent domains in which the program can realistically be expected to have accomplishments.

Identifying issues and formulating questions

What most distinguishes evaluation questions, however, is that they involve criteria by which the identified dimensions of program performance can be judged.

If the formulation of the evaluation questions can include performance standards on which key stakeholders agree, evaluation planning will be easier and the potential for disagreement with the results reduced.

Identifying issues and formulating questions

To ensure that matters of greatest significance are covered in the evaluation design, the evaluation questions are best formulated through interaction and negotiation with the evaluation sponsors and other stakeholders representative of significant groups or distinctly positioned in relation to program decision-making.

Identifying issues and formulating questions

Although stakeholder input is critical, the evaluator must be prepared to identify program issues that warrant inquiry.

The evaluator should conduct a somewhat independent analysis of the assumptions and expectations on which the program is based.

Identifying issues and formulating questions

Make the program theory explicit. Program theory describes the assumptions inherent in a program. It encompasses:
– Impact theory, which links program actions to intended outcomes; and
– Process theory, which describes a program’s organizational plan and its scheme for ensuring utilization of its services by the target population.

Identifying issues and formulating questions

When these procedures have generated a full set of evaluation questions, the evaluator must organize them into related clusters.
– Draw on stakeholder input and professional judgment to set priorities.

With the priority evaluation questions determined, the evaluator is ready to design the part of the evaluation devoted to answering them.

Program Theory
– Program’s Organizational Plan
– Service Utilization Plan
– Impact Theory
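To make this structure concrete, program theory can be represented as a small data structure whose parts mirror the organizational plan, the service utilization plan, and the impact theory’s chain of actions and outcomes. The sketch below is an illustrative model only; the class names, fields, and the job-training example are assumptions, not content from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactTheory:
    """Links program actions to intended outcomes (a causal chain)."""
    # Each link is an (action, intended outcome) pair, ordered from
    # proximal to distal outcomes.
    causal_chain: list[tuple[str, str]] = field(default_factory=list)

@dataclass
class ProgramTheory:
    organizational_plan: list[str]       # resources, staffing, activities
    service_utilization_plan: list[str]  # how targets find and use services
    impact_theory: ImpactTheory

# Hypothetical example: a job-training program.
theory = ProgramTheory(
    organizational_plan=["hire trainers", "secure classroom space"],
    service_utilization_plan=["outreach via community centers",
                              "walk-in enrollment"],
    impact_theory=ImpactTheory(causal_chain=[
        ("skills training", "improved job skills"),
        ("improved job skills", "stable employment"),
    ]),
)
```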

Meeting the Need for Evaluation

Three basic questions:
– Can the results of the evaluation influence decisions about the program?
– Can the evaluation be done in time to be useful?
– Is the program significant enough to merit evaluation?

Choices Facing Evaluators

Evaluation design:
– What are the evaluation questions?
– What comparisons are needed?
– What measurements are needed?
– How will the resulting information be used?
– What “breakouts” (disaggregations of data) are needed, such as by facility or type of client? (A brief sketch of such a breakout follows this list.)
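To make the “breakouts” question concrete, the sketch below disaggregates a single outcome measure by facility and by type of client using pandas. The column names and data are invented for illustration.

```python
import pandas as pd

# Hypothetical client-level records; all names and values are invented.
records = pd.DataFrame({
    "facility": ["North", "North", "South", "South", "South"],
    "client_type": ["new", "returning", "new", "new", "returning"],
    "outcome_score": [72, 85, 64, 70, 90],
})

# Breakouts: the same outcome measure, disaggregated two ways.
by_facility = records.groupby("facility")["outcome_score"].mean()
by_client_type = records.groupby("client_type")["outcome_score"].mean()

print(by_facility)      # mean outcome per facility
print(by_client_type)   # mean outcome per client type
```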

Choices Facing Evaluators

Data Collection:
– What are the primary data sources?
– How should data be collected?
– Is sampling required? Where and how?
– How large a sample is needed? (An illustrative calculation follows this list.)
– How will data quality be ensured?
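One common starting point for the sample-size question is Cochran’s formula for estimating a proportion, with a finite population correction when the client population is small. This is an illustration rather than a method prescribed by the lecture; the 95% confidence level, ±5% margin of error, and population figure below are assumed values.

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's sample-size formula for a proportion, with a finite
    population correction. z = 1.96 gives ~95% confidence; p = 0.5 is
    the most conservative (largest-sample) assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# Hypothetical: a survey of a program serving 1,200 clients.
print(sample_size(1200))  # -> 292 respondents
```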

Choices Facing Evaluators

Data Analysis:
– What analytical techniques are available (given the data)?
– Which analytical tools will be most appropriate?
– In what format will the data be most useful?

Getting Evaluation Information Used
– How should evaluation findings be packaged for different audiences?
– Should specific recommendations accompany evaluation reports to encourage action?
– What mechanisms can be used to check on the implementation of recommendations?

Evaluability Assessment

Problems confronting evaluation:

– Evaluators and users fail to agree on goals, objectives, side effects, and performance criteria to be used in evaluating the program.

– Program goals and objectives are found to be unrealistic given the resources that have been committed to them and the program activities underway.

– Relevant information on program performance is often not available.

– Administrators on the policy or operating level are unable or unwilling to change the program on the basis of evaluation information.

Evaluability Assessment

A program is ready for useful evaluation when:
– Program goals, objectives, important side effects, and priority information needs are well defined.
– Program goals and objectives are plausible.
– Relevant performance data can be obtained.
– The intended users of the evaluation results have agreed on how they will use the information.

Key steps in evaluability assessment
– Involve the intended users of evaluation information.
– Clarify the intended program from the perspectives of policy-makers, managers, staff, and other key stakeholders.
– Explore program reality, including the plausibility and measurability of program goals and objectives.
– Reach agreement on needed changes in program activities and objectives.
– Explore alternative evaluation designs.
– Agree on evaluation priorities and intended uses of information on program performance.