Dr. Tina Christie: Evaluation


Transcript of Dr. Tina Christie: Evaluation

Page 1: Dr. Tina Christie: Evaluation

FTLA Evaluation Presentation

Christina (Tina) Christie
Associate Professor & Division Head
Social Research Methodology
Graduate School of Education & Information Studies
UCLA
2/24/12

Page 2: Dr. Tina Christie: Evaluation

Research vs. Evaluation

Evaluation and Research have many similar characteristics; however, they are very different in the following ways:

Evaluation

• Intended for:
– Program decision making
– Rendering judgments
• Stakeholders set the agenda
• Primary audience for the study:
– Program staff & stakeholders
• Findings are:
– Program & context specific
– Shared on an ongoing basis

Research

• Intended for:
– Adding to the existing knowledge base
• Researcher sets the agenda
• Primary audience for the study:
– Scientific/academic community
• Findings are:
– Intended to be broadly applicable or generalizable
– Shared at the end of the study

Page 3: Dr. Tina Christie: Evaluation

Evaluation Defined

• Evaluation refers to the process of determining the merit, worth, or value of something, or the product of that process. (Scriven, 1991, p. 139)

• Program evaluation is the use of social research methods to systematically investigate the effectiveness of social intervention programs. (Rossi, Lipsey, Freeman, 2004, p. 28)

• Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming. (Patton, 1997, p. 23)


Page 4: Dr. Tina Christie: Evaluation

What Can Evaluation Help Us Know?

• Know-about problems: knowledge about health, wealth and social inequities

• Know-what works: policies, programs, strategies that bring about desired outcomes at acceptable costs and with relatively few unwanted consequences

• Know-how (to put into practice): effective program implementation

• Know-who (to involve): estimates of clients' needs as well as information on key stakeholders necessary for potential solutions

• Know-why: knowledge about why an action is required, e.g., the relationship between values and policy decisions

• Adapted from Ekblom (2002), cited in Nutley, Walter & Davies (2007)

Page 5: Dr. Tina Christie: Evaluation

Why Evaluate?


• People (stakeholders) naturally make evaluative judgments about programs & policies, often based on limited information and susceptible to bias

• Evaluators use a set of “tools” (research designs, methods) and “roadmaps” (evaluation theories) that support stakeholders’ understanding of, and action in relation to, programs and policies

Page 6: Dr. Tina Christie: Evaluation

How Does One Evaluate?

• There are many ways to approach an evaluation of a program or policy
• Roadmaps are also called “theories” (but are really “models,” “approaches,” or “frameworks”) about how best to conduct an evaluation study
• There are many evaluation theories, and there is no one “right” theory or way of doing an evaluation
• An evaluation theory offers evaluators a conceptual framework that helps to organize the procedures
• Theories are distinguished by what is believed to be the primary purpose of evaluation

Page 7: Dr. Tina Christie: Evaluation
Page 8: Dr. Tina Christie: Evaluation
Page 9: Dr. Tina Christie: Evaluation

Tree “Roots”

• accountability: an important motivation for evaluation, a way to improve programs and society

• systematic social inquiry: a methodical and justifiable set of procedures for determining accountability

• epistemology: the nature and validity (or limitations) of knowledge; the legitimacy of value claims, the nature of universal claims, and the view that truth (or fact) is what we make it to be

Page 10: Dr. Tina Christie: Evaluation

Methods

• more accurate to describe these approaches as emphasizing research methodology

• evaluation is primarily centered on research methodology: “knowledge construction”

• models are mostly derivations of the randomized controlled trial, and are intended to offer results that are generalizable

Page 11: Dr. Tina Christie: Evaluation

Valuing

• placing value on the subject of the evaluation, the evaluand, is essential to the process

• initially driven by the work of Michael Scriven (1967) and Robert Stake (1967, 1975), which firmly establishes the vital role of valuing in evaluation

• split in two—objectivist and subjectivist—which distinguishes the two fundamental perspectives informing the valuing process

Page 12: Dr. Tina Christie: Evaluation

Use

• the pioneering work of Daniel Stufflebeam (initially with Egon Guba) and Joseph Wholey, originally focused on an orientation toward evaluation and decision making

• reflects an explicit concern for the ways in which evaluation information will be used and focuses specifically on those who will use the information

Page 13: Dr. Tina Christie: Evaluation

Use as a Motivator for Evaluation

• The organizational contexts in which stakeholders use information about programs & policies vary, so the demands on and approaches to evaluation should vary accordingly.


Page 14: Dr. Tina Christie: Evaluation


What Does it Mean for an Evaluation to Have Impact?

(That is, for an Evaluation to be Useful or Used)

Page 15: Dr. Tina Christie: Evaluation

Taking a Look Back

• Early evaluation practice was grounded in a positivist search for effective solutions to social problems.

• From this perspective, stringent application of research methods was used to produce evidence of a program’s success.

• Successful programs would then be replicated and transferred to other problems or contexts, and those not proven successful would be terminated

• These evaluation experiments often proved difficult to sustain and rarely provided contextually valid data

• Even when positive results were not obtained, programs often continued

Page 16: Dr. Tina Christie: Evaluation

Taking a Look Back

• One possible reason for this is that even though programs may fail on average to produce positive outcomes across many contexts, there are some contexts in which these failed programs actually deliver value.

• The problem is that in other settings the programs do not deliver value.

• On average, in standard evaluations, the negative washes out the positive. The result is no overall effect.
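
As a purely hypothetical illustration of this averaging problem (the sites and effect sizes below are invented for the sketch, not drawn from any actual evaluation), a quick calculation shows how positive and negative context-specific effects can cancel out in a pooled estimate:

```python
# Hypothetical effect estimates for the same program in six contexts.
# All numbers are invented for illustration only.
site_effects = {
    "Site A": +0.40,  # contexts where the program delivers value
    "Site B": +0.35,
    "Site C": +0.30,
    "Site D": -0.30,  # contexts where it does not
    "Site E": -0.35,
    "Site F": -0.40,
}

overall = sum(site_effects.values()) / len(site_effects)
print(f"Average effect across contexts: {overall:+.3f}")  # averages to approximately zero
```

Each site-level estimate is informative on its own, but the pooled average hides them, which is the “no overall effect” pattern described above.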

Page 17: Dr. Tina Christie: Evaluation

What We Hope to Do Differently

• As an alternative to more traditional approaches, Cousins & Earl (1992, 1995) offer a use-focused participatory model of evaluation where practitioners are partners in the research process.

• With this approach, practitioners are uniquely positioned to offer an emic, or insider’s, view of practice (and in the case of this proposal, practice as both instructors and evaluators).

Page 18: Dr. Tina Christie: Evaluation

What We Hope to Do Differently

• Cochran-Smith and Lytle (1993) question the common assumption that knowledge about practice should be primarily outside-in (e.g., generated by university- or center-based researchers and then used in schools or programs).

• They point out that this outside-in assumption implies the unproblematic transmission of knowledge from source to destination and instead call attention to practitioners as knowers, experienced in the complex relationships of knowledge and practice as embedded in local contexts.

• Thus, practitioner-informed research can be uniquely revealing about the intersection of theory and practice.

Page 19: Dr. Tina Christie: Evaluation

How Can We Make an Impact?

• A common mistake is to assume that it is one component of a system, or one variable, that is causing the problem. Were this the case, innovations could be tested using traditional evaluation designs such as randomized controlled trials.

• But in fields such as healthcare or education, the effects of single variables are most often dwarfed by the complexity of the system in which they are embedded.

Page 20: Dr. Tina Christie: Evaluation

How Can We Make an Impact?

• So, it is necessary to understand the component processes that make up the system, and how they work together, in order to understand the roots of the problem and generate innovative solutions.

• Sometimes quality can be improved by merely tweaking the system, i.e., making small changes that enable the system to function in context the way it was designed to function. But other times the system must be re-designed from the ground up, or major components changed.

Page 21: Dr. Tina Christie: Evaluation

How Can We Make an Impact?

• A key point is that no single person will generally have the expertise required to improve a complex system.

• Engelbart (2003) suggested that at the heart of generative learning is the opportunity for different people who have shared aims, but work in different organizational contexts, to compare and contrast their results from common forms of inquiry.

Page 22: Dr. Tina Christie: Evaluation

How Can We Make an Impact?

• A framework for improving systems that has been highly successful in fields as diverse as the automotive industry and health care (Kenney, 2008; Rother, 2009) includes:
– Clear shared goals
– Sensitive measures to chart progress
– Deep understanding of the problems/barriers that impede success
– Sources of innovations, grounded in explicit theories of the problem
– A mechanism for comparing/researching innovations and systematically testing whether proposed changes are actually improvements

• These five components bring focus to the group and highlight both the thinking that members must bring to bear on the problem and the assessment necessary to establish that a hypothesized change is, in fact, an improvement.

Page 23: Dr. Tina Christie: Evaluation

How Can We Make an Impact?

• While summative lagging indicators are important, evaluation research conducted with an eye toward improvement also needs data about specific program processes and student experiences as these occur in real time.

• This evidence is key for informing more micro-level activities linked to longer-term student success. For example, extant research suggests that the nature of students’ initial engagement with their community college during the first two or three weeks of enrollment is critical.

• Data about students’ academic behaviors and experiences during these critical weeks are key to understanding whether a pathway design is reducing early disengagement. Such data also may be used formatively to improve quick outreach efforts to students before they actually disengage (Bryk, Gomez & Grunow, 2010).

Page 24: Dr. Tina Christie: Evaluation

Plan-Do-Study-Act

• PDSA cycle: Plan, Do, Study, Act (Langley et al., 2009). This methodology is an iterative process in which improvements are developed, tried, studied, tested against evidence, and then refined, over and over, until quality is improved and variability is reduced to acceptable limits.
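
The presentation treats PDSA as a practice for people and programs, not software; purely as an illustrative sketch of the iterative logic (the outcome measure, targets, and simulated trial results below are assumptions made for this sketch), one cycle of planning, trying, studying, and acting can be written out as a loop:

```python
import random

# Rough sketch of repeated Plan-Do-Study-Act cycles (illustrative only).
# The outcome measure, target, and simulated trial results are invented.
random.seed(1)

current_rate = 0.60   # assumed baseline outcome (e.g., a course pass rate)
target_rate = 0.75    # assumed improvement goal

for cycle in range(1, 11):
    change = f"proposed change #{cycle}"                      # Plan: hypothesize an improvement
    trial_rate = current_rate + random.uniform(-0.02, 0.05)   # Do: try it on a small scale (simulated)
    is_improvement = trial_rate > current_rate                # Study: test the change against evidence
    if is_improvement:                                        # Act: adopt and refine, or abandon
        current_rate = trial_rate
    print(f"Cycle {cycle}: {change} -> outcome {current_rate:.2f}")
    if current_rate >= target_rate:
        break  # quality improved and the (assumed) acceptable level reached
```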

Page 25: Dr. Tina Christie: Evaluation

Peer Observation

• Formative peer observation assists in the improvement of teaching. Summative peer observation involves the evaluation of teaching effectiveness used for merit, promotion, and/or tenure decisions. Both formative and summative observations can be based on the same observation instruments.

Page 26: Dr. Tina Christie: Evaluation

Peer Observation

• 40% of colleges and universities now use peer classroom observation.
• Observations offer insight regarding the improvement of teaching.
• Higher education settings are currently moving toward multiple observation formats.

Strengths/Advantages of Peer Observation
• Gaining new ideas and perspectives about teaching from colleague(s);
• Both observer and observee may improve teaching ability;

Page 27: Dr. Tina Christie: Evaluation

Peer Observation

Strengths/Advantages of Peer Observation
• Gaining new ideas and perspectives about teaching from colleague(s);
• Both observer and observee may improve teaching ability;

Weaknesses/Disadvantages of Peer Observation
• Possible bias relating to the observer's own beliefs about teaching;
• Without a systematic approach—including observer training, multiple visits, and use of reliable observation instruments—peer observation is not a valid method for summative evaluation.

Page 28: Dr. Tina Christie: Evaluation

Peer Observation

Guidelines

• The observer should arrive at least 10 minutes before class. "Walking into class late is poor practice and inconsiderate" (Seldin, 1999, p. 81).

• The observer can be briefly introduced to the students, with an equally brief explanation of why the observer is present. Then move on!

• Observers are not to ask questions or participate in activities during class; such behavior can detract from and invalidate the observations.

• An effective observation requires an observation instrument designed to accurately and reliably portray the teacher's behavior.