Chapter 11: Evaluation and Policy Research

Evaluation Research

Evaluation research is not a method of data collection, like survey research or experiments, nor is it a unique component of research designs, like sampling or measurement.

Instead, evaluation research is social research that is conducted for a distinctive purpose: to investigate social programs (e.g., substance abuse treatment programs, welfare programs, criminal justice programs, or employment and training programs).


Evaluation research may use one or more of these methods (experiments, surveys, observation, intensive interviews, focus groups, etc.) to collect and analyze data.


Evaluation Research, cont.

For each project, an evaluation researcher must select a research design and method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated.

The development of evaluation research as a major enterprise followed on the heels of the expansion of the federal government during the Great Depression and World War II.


Evaluation Research, cont.

Large Depression-era government outlays for social programs stimulated interest in monitoring program output, and the military effort in World War II led to some of the necessary review and contracting procedures for sponsoring evaluation research.

In the 1960s, criminal justice researchers began to use experiments to test the value of different policies (Orr 1999:24).


Evaluation Research, cont.

In the early 1980s, after this period of rapid growth, many evaluation research firms closed in tandem with the decline of many Great Society programs.

However, the demand for evaluation research continues, due, in part, to government requirements.

The growth of evaluation research is also reflected in the social science community. The American Evaluation Association was founded in 1986 as a professional organization for evaluation researchers (merging two previous associations) and the publisher of an evaluation research journal.


Evaluation Basics

First, clients, customers, students, or some other persons or units—cases—enter the program as inputs (people functioning, in effect, as raw materials to be processed).

Resources and staff required by a program are also program inputs.


Evaluation Basics, cont.

Next some service or treatment is provided to the cases. This may be attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits.

The program process may be simple or complicated, short or long, but it is designed to have some impact on the cases.


Evaluation Basics, cont.

The direct product of the program’s service delivery process is its output.

Program outputs may include clients served, case managers trained, food parcels delivered, or arrests made.

The program outputs may be desirable in themselves, but they primarily serve to indicate that the program is operating.


Evaluation Basics, cont.

Program outcomes indicate the impact of the program on the cases that have been processed.

Outcomes can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty.

Any social program is likely to have multiple outcomes, some intended and some unintended, some positive and others that are viewed as negative.


Evaluation Basics, cont.

Variation in both outputs and outcomes, in turn, influences the inputs to the program through a feedback process.

If not enough clients are being served, recruitment of new clients may increase.

If too many negative side effects result from a trial medication, the trials may be limited or terminated.

If a program does not appear to lead to improved outcomes, clients may go elsewhere.


Evaluation Basics, cont.

Evaluation research is simply a systematic approach to feedback: It strengthens the feedback loop through credible analyses of program operations and outcomes.

Evaluation research also broadens this loop to include connections to parties outside of the program itself.

A funding agency or political authority may mandate the research, outside experts may be brought in to conduct the research, and the evaluation research findings may be released to the public, or at least funders, in a formal report.


Evaluation Basics, cont.

The evaluation process as a whole, and feedback in particular, can be understood only in relation to the interests and perspective of program stakeholders.

Stakeholders are those individuals and groups who have some basis of concern with the program.

They might be clients, staff, managers, funders, or the public.

Who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research.


Evaluation Basics, cont.

Unlike explanatory social science research, evaluation research is not designed to test the implications of a social theory; the basic issue often is: What is the program’s impact?

Process evaluation, for instance, often uses qualitative methods like traditional social science does, but unlike exploratory research, the goal is not to induce a broad theoretical explanation for what is discovered.


Evaluation Basics, cont.

Instead, the question is: How does the program do what it does?

Unlike social science research, the researchers cannot design evaluation studies simply in accord with the highest scientific standards and the most important research questions; instead, it is program stakeholders who set the agenda.

But there is no sharp boundary between the two. In their attempt to explain why the program has an impact, and whether the program is needed, evaluation researchers often bring social theories into their projects, but for immediately practical aims.


Questions for Evaluation Research

Evaluation projects can focus on several questions related to the operation of social programs and the impact they have:

o Is the program needed?
o Can the program be evaluated?
o How does the program operate?
o What is the program’s impact?
o How efficient is the program?


Needs Assessment

Is a new program needed or an old one still required? Is there a need at all?

A needs assessment attempts to answer these questions with systematic, credible evidence.

Need may be assessed by social indicators such as the poverty rate or the level of home ownership, by interviews of such local experts as school board members or team captains, by surveys of populations in need, or by focus groups with community residents (Rossi & Freeman 1989).


Evaluability Assessment

Evaluation research will be pointless if the program itself cannot be evaluated.

Yes, some type of study is always possible, but a study specifically to identify the effects of a particular program may not be possible within the available time and resources.

So researchers may conduct an evaluability assessment to learn this in advance, rather than expend time and effort on a fruitless project.


Evaluability Assessment, cont.

Why might a social program not be evaluable?

Management only wants to have its superior performance confirmed and does not really care whether the program is having its intended effects. This is a very common problem.

Staff are so alienated from the agency that they don’t trust any attempt sponsored by management to check on their performance.


Evaluability Assessment, cont.

Program personnel are just “helping people” or “putting in time” without any clear sense of what the program is trying to achieve.

The program is not clearly distinct from other services delivered by the agency and so can’t be evaluated by itself (Patton 2002:164).


Evaluability Assessment, cont.

An evaluability assessment can help to solve the problems identified.

Discussion with program managers and staff can result in changes in program operations.

The evaluators may use the evaluability assessment to “sell” the evaluation to participants and sensitize them to the importance of clarifying their goals and objectives.

Knowledge about the program gleaned through the evaluability assessment can be used to refine evaluation plans.


Evaluability Assessment, cont.

Because they are preliminary studies to “check things out,” evaluability assessments often rely on qualitative methods.

Program managers and key staff may be interviewed in-depth, or program sponsors may be asked about the importance they attach to different goals.

These assessments also may have an “action research” aspect, because the researcher presents the findings to program managers and encourages changes in program operations.


Process Evaluation

What actually happens in a social program? Finding this out would be process analysis or process evaluation—research to investigate the process of service delivery.

Process evaluation is even more important when more complex programs are evaluated. Many social programs comprise multiple elements and are delivered over an extended period of time, often by different providers in different areas.


Process Evaluation, cont.

The term formative evaluation may be used instead of process evaluation when the evaluation findings are used to help shape and refine the program.

Formative evaluation procedures that are incorporated into the initial development of the service program can specify the treatment process and lead to changes in recruitment procedures, program delivery, or measurement tools.


Process Evaluation, cont.

Process evaluation can employ a wide range of indicators.

Program coverage can be monitored through program records, participant surveys, community surveys, or comparisons of program utilizers with dropouts and ineligibles.

Service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program recipients (Rossi & Freeman, 1989).


Process Evaluation, cont.

Qualitative methods are often a key component of process evaluation studies because they can be used to elucidate and understand internal program dynamics—even those that were not anticipated.

Qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes and evolves over time.


Impact Analysis

The core questions of evaluation research are “Did the program work?” and “Did it have the intended result?”

This part of the research is variously called impact analysis, impact evaluation, or summative evaluation.

Formally speaking, impact analysis compares what happened after a program with what would have happened had there been no program.
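As a minimal sketch of what this comparison amounts to in practice, the snippet below estimates impact as the difference in mean outcomes between program participants and a comparison group standing in for “no program.” All numbers and group labels are invented for illustration; a real impact analysis would also need a credible comparison group and statistical tests.

```python
from statistics import mean

# Hypothetical post-program outcomes (e.g., test scores) for participants
# and for a comparison group that approximates "what would have happened
# had there been no program."
treatment_outcomes = [72, 68, 75, 80, 66]
comparison_outcomes = [65, 70, 62, 64, 69]

# The simplest impact estimate: difference in group means.
estimated_impact = mean(treatment_outcomes) - mean(comparison_outcomes)
print(round(estimated_impact, 1))  # → 6.2
```

The quality of this estimate depends entirely on how well the comparison group represents the counterfactual, which is why impact analyses favor randomized designs.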


Impact Analysis, cont.

Rigorous evaluations often lead to the conclusion that a program does not have the desired effect.

Depending on political support for the program and its goals, the result may be efforts to redesign the program (as with D.A.R.E.) or reduction or termination of program funding.


Efficiency Analysis

Whatever the program’s benefits, are they sufficient to offset the program’s costs?

Are the taxpayers getting their money’s worth? What resources are required by the program? These efficiency questions can be the primary reason that funders require evaluation of the programs they fund. As a result, efficiency analysis, which compares program effects to costs, is often a necessary component of an evaluation research project.


Efficiency analysis: A type of evaluation research that compares program costs to program effects. It can be either a cost-benefit analysis or a cost-effectiveness analysis.


Cost-benefit analysis: A type of evaluation research that compares program costs to the economic value of program benefits.


Efficiency Analysis, cont.

A cost-benefit analysis must identify the specific program costs and the procedures for estimating the economic value of specific program benefits.

This type of analysis also requires that the analyst identify whose perspective will be used in order to determine what can be considered a benefit rather than a cost.

Program clients will have a different perspective on these issues than do taxpayers or program staff.
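The arithmetic behind a cost-benefit comparison can be sketched in a few lines. Everything here is hypothetical: the program, the dollar figures, and the choice of the taxpayer perspective (which determines that reduced welfare payments and added tax revenue count as benefits).

```python
# Hypothetical cost-benefit sketch for a job-training program,
# computed from the taxpayer perspective. All figures are invented.
program_costs = 500_000  # staff, facilities, participant stipends

monetized_benefits = {
    "reduced_welfare_payments": 300_000,
    "added_tax_revenue": 250_000,
}

total_benefits = sum(monetized_benefits.values())
net_benefit = total_benefits - program_costs
benefit_cost_ratio = total_benefits / program_costs

print(net_benefit, round(benefit_cost_ratio, 2))  # → 50000 1.1
```

From a client perspective the same program would be tallied differently: a stipend is a benefit to clients but a cost to taxpayers, which is why the analyst must declare the perspective before adding anything up.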


Cost-effectiveness analysis: A type of evaluation research that compares program costs to actual program outcomes.


Efficiency Analysis, cont.

A cost-effectiveness analysis focuses attention directly on the program’s outcomes rather than on the economic value of those outcomes.

In a cost-effectiveness analysis, the specific costs of the program are compared to the program’s outcomes, such as the number of jobs obtained, the extent of improvement in reading scores, or the degree of decline in crimes committed.
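A cost-effectiveness comparison reduces to a cost-per-outcome-unit calculation, as in this hypothetical sketch (both programs and all figures are invented): no dollar value is placed on the outcomes themselves, only on the costs.

```python
# Hypothetical comparison of two job-training programs by cost per job
# obtained. Outcomes stay in natural units (jobs), not dollars.
programs = {
    "Program A": {"cost": 400_000, "jobs_obtained": 200},
    "Program B": {"cost": 300_000, "jobs_obtained": 120},
}

cost_per_job = {
    name: p["cost"] / p["jobs_obtained"] for name, p in programs.items()
}

for name, cost in cost_per_job.items():
    print(f"{name}: ${cost:,.0f} per job")
# → Program A: $2,000 per job
# → Program B: $2,500 per job
```

This makes programs with the same outcome directly comparable, but unlike cost-benefit analysis it cannot say whether a job is “worth” $2,000.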


Design Decisions

Once we have decided on, or identified, the goal or focus for a program evaluation, there are still important decisions to be made about how to design the specific evaluation project.


Design Decisions, cont.

The most important decisions are the following:

o Do we care how the program gets results?
o Whose goals matter most? Researcher or stakeholder?
o Which methods provide the best answers? Qualitative, quantitative, or both?
o How complicated should the findings be?


Quantitative or Qualitative Methods

Evaluation research that attempts to identify the effects of a social program typically is quantitative:

Did the response times of emergency personnel tend to decrease?

Did the students’ test scores increase? Did housing retention improve?


Quantitative or Qualitative Methods, cont.

It’s fair to say that when there’s an interest in comparing outcomes between an experimental and a control group, or tracking change over time in a systematic manner, quantitative methods are favored.

But qualitative methods can add much to quantitative evaluation research studies, including more depth, detail, nuance, and exemplary case studies.

Perhaps the greatest contribution qualitative methods can make in many evaluation studies is in investigating the program process itself.


Quantitative or Qualitative Methods, cont.

Although it is possible to track service delivery with quantitative measures like staff contact and frequency of complaints, finding out what is happening to clients and how clients experience the program can often best be accomplished by observing program activities and interviewing staff and clients intensively.

Another good reason for using qualitative methods in evaluation research is the importance of learning how different individuals react to the program.

Qualitative methods can also help reveal how social programs actually operate.


Simple or Complex Outcomes

Does the program have only one outcome? Unlikely.

The decision to focus on one outcome rather than another, on a single outcome or on several, can have enormous implications.

In spite of the additional difficulties introduced by measuring multiple outcomes, most evaluation researchers attempt to do so. The result usually is a much more realistic, and richer, understanding of program impact.


Ethics in Evaluation

Evaluation research can make a difference in people’s lives while the research is being conducted, as well as after the results are reported.

Job opportunities, welfare requirements, housing options, treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and availability of such benefits.

This direct impact on research participants and, potentially, their families, heightens the attention that evaluation researchers have to give to human subjects’ concerns.


Ethics in Evaluation, cont.

It is when program impact is the focus that human subjects considerations multiply.

What about assigning persons randomly to receive some social program or benefit?


Ethics in Evaluation, cont.

One justification given by evaluation researchers has to do with the scarcity of these resources.

If not everyone in the population who is eligible for a program can receive it, due to resource limitations, what could be a fairer way to distribute the program benefits than through a lottery?

Random assignment also seems like a reasonable way to allocate potential program benefits when a new program is being tested with only some members of the target recipient population.
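As a small sketch of how such a lottery works in practice, the snippet below randomly fills a limited number of program slots from a pool of eligible applicants; the applicant names, pool size, and slot count are all invented, and a fixed seed is used only so the draw is reproducible.

```python
import random

# Hypothetical lottery: more eligible applicants than program slots, so
# random assignment serves as both a fair allocation mechanism and the
# basis for an experimental evaluation.
applicants = [f"applicant_{i}" for i in range(100)]
slots = 40

rng = random.Random(42)  # fixed seed for a reproducible draw
winners = set(rng.sample(applicants, slots))

treatment = [a for a in applicants if a in winners]      # receive the program
control = [a for a in applicants if a not in winners]    # waitlist / comparison

print(len(treatment), len(control))  # → 40 60
```

Because every applicant had the same chance of winning a slot, later differences in outcomes between the two groups can be attributed to the program rather than to who chose, or was chosen, to participate.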


Ethics in Evaluation, cont.

However, when an ongoing entitlement program is being evaluated and experimental subjects would normally be eligible for program participation, it may not be ethical simply to bar some potential participants from the programs.

Instead, evaluation researchers may test alternative treatments or provide some alternative benefit while the treatment is being denied.


Ethics in Evaluation, cont.

There are many other ethical challenges in evaluation research:

How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding?

Is it legitimate for research decisions to be shaped by political considerations?


Ethics in Evaluation, cont.

Must evaluation findings be shared with stakeholders rather than only with policy makers?

Will the results actually be used?


Ethics in Evaluation, cont.

The problem of maintaining subject confidentiality is particularly thorny, because researchers, in general, are not legally protected from the requirements that they provide evidence requested in legal proceedings, particularly through the process known as “discovery.”

However, it is important to be aware that several federal statutes have been passed specifically to protect research data about vulnerable populations from legal disclosure requirements.


Ethics in Evaluation, cont.

Ethical concerns must also be given special attention when evaluation research projects involve members of vulnerable populations as subjects.

In order to conduct research on children, parental consent usually is required before the child can be approached directly about the research.

Adding this requirement to an evaluation research project can dramatically reduce participation, because many parents simply do not bother to respond to mailed consent forms. Similar consent challenges arise with other vulnerable populations, such as people with mental disabilities.


Conclusions

Hopes for evaluation research are high: Society could benefit from the development of programs that work well, accomplish their policy goals, and serve people who genuinely need them.


Conclusions, cont.

Because social programs and the people who use them are complex, evaluation research designs can easily miss important outcomes or aspects of the program process.

Because the many program stakeholders all have an interest in particular results from the evaluation, researchers can be subject to an unusual level of cross-pressures and demands.

Because the need to include program stakeholders in research decisions may undermine adherence to scientific standards, research designs can be weakened.


Conclusions, cont.

Because some program administrators want to believe their programs really work well, researchers may be pressured to avoid null findings or, if they are not responsive, find their research report ignored. Plenty of well-done evaluation research studies wind up in a recycling bin, or hidden away in a file cabinet.

Because the primary audience for evaluation research reports is program administrators, politicians, or members of the public, evaluation findings may be oversimplified, distorting the results.


Conclusions, cont.

The rewards of evaluation research are often worth the risks, however.

Evaluation research can provide social scientists with rare opportunities to study complex social processes, with real consequences, and to contribute to the public good.

Although they may face unusual constraints on their research designs, most evaluation projects can result in high-quality analysis and publications in reputable social science journals.