Evaluation Seminar Unit #6
Prof. Christopher L. Howard

Transcript of Evaluation Seminar Unit #6.

Evaluation Defined

• Evaluation is the systematic determination of the merit, worth, and significance of something or someone, using criteria governed by a set of standards. Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises, including the arts, criminal justice, foundations and non-profit organizations, government, health care, and other human services.

• http://en.wikipedia.org/wiki/Evaluation


Formulating a Plan

• A key question any grantor will ask of an organization applying for funding is, "How will you know your idea worked?"

• Evaluating what worked and what didn't will be crucial for your funding and for your project. What impact do you expect to achieve, and how will you evaluate it?

• Here are some tips to help you develop that crucial evaluation section of your grant proposal.

• Decide whether you are going to do an internal evaluation with your own staff, or whether you want to hire outside expertise to conduct your evaluation. Foundations often allow nonprofits to designate 5-10% of the total project budget for evaluation.
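As a back-of-the-envelope illustration of that 5-10% guideline, here is a minimal sketch; the $100,000 project budget is a hypothetical figure, not drawn from the seminar.

```python
# Minimal sketch of the 5-10% evaluation budget guideline; figures are hypothetical.
def evaluation_budget_range(total_budget: float) -> tuple:
    """Return the low and high ends of a typical evaluation allowance."""
    return (0.05 * total_budget, 0.10 * total_budget)

low, high = evaluation_budget_range(100_000)
print(f"Evaluation allowance: ${low:,.0f} to ${high:,.0f}")  # $5,000 to $10,000
```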

• Before you design your evaluation, consider the reasons to do an evaluation. Carlson and O'Neal-McElrath, authors of Winning Grants, Step by Step, suggest that evaluations can accomplish these six purposes:

– To find out if the hypothesis was right. Did you actually do what you set out to do?
– To determine if the methods specified were used, and if the objectives were met.
– To find out if an impact was made on the identified need.
– To obtain feedback from the people served and other members of the community.
– To maintain control over the project (evaluations are done at various points in the project).
– To make changes in the program mid-stream, if necessary, to ensure the program's success.

• Determine if you will use quantitative or qualitative methods for your data collection, or what combination of the two types you will use. Develop a good description of these methods and their rationale for the grantor.

• Make sure the evaluation component of your proposal connects with the proposal's objectives and methods. If those objectives and methods are measurable and time-specific, the evaluation will be easier to design.

• Ask yourself these questions as you develop the evaluation section of your proposal:
– What is the evaluation's purpose?
– How will you use the findings?
– What will you know after the evaluation that you didn't know before?
– What will you do as a result of the evaluation that you couldn't do before because you lacked the relevant information?
– How will the clients and community served be better as a result of the program?
– http://nonprofit.about.com/od/foundationfundinggrants/a/proposalevaluat.htm


Measuring Success

• Measuring the Success of the Ridge, Kids and Stewards Program

• Currently, program facilitators administer both a pretest and a posttest to youth participants in order to measure what information is learned by the students during the six-week program. At the conclusion of each session, we also ask participating teachers to complete a detailed evaluation questionnaire so we can continue to find ways to improve an already excellent program.
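A minimal sketch of how such pretest/posttest results might be tallied; the participants and scores here are entirely hypothetical and are only meant to make the measurement concrete.

```python
# Hypothetical pretest/posttest tally for a program like the one described above.
participants = [
    {"id": 1, "pretest": 55, "posttest": 80},
    {"id": 2, "pretest": 60, "posttest": 75},
    {"id": 3, "pretest": 45, "posttest": 70},
]

gains = [p["posttest"] - p["pretest"] for p in participants]
print(f"Average gain over the six-week program: {sum(gains) / len(gains):.1f} points")
```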

• The Ridge, Kids and Stewards (RKS) program is also regularly evaluated by an outside panel of professional evaluators. Because it is our goal to teach young people to become stewards of the environment, the RKS program coordinator and others are working to develop a more sophisticated, yet practical, evaluation process in order to measure the long-term impact of the program on the youth who participate.

• http://nonprofit.about.com/od/foundationfundinggrants/a/proposalevaluat_2.htm


Grant Evaluation Checklist

• This checklist outlines many of the criteria that are used by state boards, outside reviewers, Commission staff, and Commission members in evaluating proposals. It is included here as an aid to applicants in addressing key issues in developing proposals. While some of these criteria apply primarily to traditional archival projects relating to collections of records, others are more generally applicable to the full range of projects supported by the Commission.

• Does the APPLICANT INSTITUTION have
– Adequate space to house the records it might acquire/process/preserve through this grant?
– Proper environmental conditions and controls, with particular regard to humidity, temperature, air purity, and security?
– Adequate staff and facilities to handle researcher requests for use of its holdings?
– A stable, dependable, and sufficient financial base for essential program activities?
– Properly trained, appropriate staff?
– Defined, written policies and procedures on acquisitions, processing, and researcher access to and use of materials?
– Finding aids? If so, what are they (registers, inventories, checklists, guides, catalog cards, etc.)?
– If the applicant is lacking in one or more of these areas, does the proposal indicate how this will be rectified during the grant period or later?

• With regard to the PROPOSAL IN GENERAL
– Are the records to be dealt with significant for historical research?
– Are the goals, objectives, and primary tasks set forth feasible?
– Are the project goals stated clearly? Are they concrete and specific enough to be measurable?
– Is the project designed in such a way that maximum impact on the work of the project and on the overall development of the applicant's program is obtained by use of NHPRC grant funds?
– Is there clear commitment from the applicant institution to assume responsibility for the support of activities of a continuing nature once the grant period ends?
– Is the project placed in the context of the applicant's overall program?
– Does the project relate to priorities and objectives established by the Commission and/or by the State Historical Records Advisory Board?
– Are there clear plans to publicize the grant and its accomplishments and to undertake outreach to user communities and other groups likely to be impacted by the project?
– http://www.archives.gov/nhprc/apply/evaluation-checklist.html


Questions to Ask

• With regard to the PLAN OF WORK, does the proposal
– Include a description of the work to be performed by each person on the project?
– Tie work to be done to a schedule or timetable?
– Provide sufficient time, according to the timetable, for the accomplishment of project goals? Is too much time provided?
– Indicate when related personnel activities, such as consultant visits and advisory board meetings, will take place and how they fit into the ongoing work of project staff?
– Show evidence of previous experience with the techniques to be used or their successful use by others? Are generally accepted standards being followed?
– Indicate, for activities which are large in scope or new to the institution, that a test or pilot to identify problem areas or assess the validity of project goals and approaches has occurred?
– Include samples of any forms, cover letters, instructions, finding aid formats, etc., that are to be used or created during the project?
– Show that project goals are supported by other parties whose cooperation is necessary for ultimate success?
– Indicate awareness of other similar projects elsewhere, and the factors contributing to their success or failure?
– Include a description of any follow-up or continuing activity that will (or should) occur after project completion?
– Make sense? Is there a more logical or efficient manner of proceeding toward the accomplishment of stated project goals?

• With regard to the PERSONNEL, does the proposal
– Note the names, qualifications, and duties of all known personnel involved in a substantive way?
– Use personnel whose background and qualifications are appropriate for the duties assigned to them?
– Include a job description and statement of qualifications for all positions to be filled?
– Note how the search for qualified candidates will take place and provide for a sufficiently wide and careful search to obtain the strongest possible candidates?
– Note the names and qualifications of any consultants, advisory board members, or other paid or non-paid advisors to the project?

• With regard to the BUDGET, does the proposal
– Indicate what costs are to be paid for with grant monies and what costs will be met by the applicant or other institutions?
– Explain how budget figures were arrived at (e.g., breakdown of travel costs, or daily rates charged by consultants)?
– Account for all expenditures suggested by the proposal narrative?
– Include a separate budget form for each year of a project lasting 18 months or longer, as well as a grand total at the end of the budget form used for the final year?
– Include appropriate cost-sharing or matching funds?
– Reflect efforts to achieve maximum economy in achieving the project's goals?
– Make sense? Do the figures add up?
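As a trivial illustration of the "do the figures add up?" check, here is a minimal sketch; every line item, the grant request, and the match figure are invented for the example.

```python
# Hypothetical sanity check for a proposal budget: line items should equal
# the grant request plus the applicant's cost-sharing contribution.
line_items = {
    "Project Archivist salary": 42_000,
    "Consultant (10 days @ $500/day)": 5_000,
    "Travel": 3_000,
    "Supplies": 2_000,
}
grant_request = 39_000
applicant_match = 13_000  # cost sharing contributed by the applicant

total = sum(line_items.values())
assert total == grant_request + applicant_match, "Budget does not add up!"
print(f"Total: ${total:,}  (match = {applicant_match / total:.0%} of project cost)")
```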

• With regard to any PRODUCTS emanating from the project, does the proposal include
– Descriptions of the format, content, and availability of any finding aids or databases to be produced?
– Descriptions of the format and content of, and distribution plans for, any publication to be produced, as well as justification for publication?
– Description of the methods to be used in the preparation and microfilming/digitization of any records? Is there adherence to the Commission's guidelines and suggestions in these areas?
– Evidence of careful advance consideration and decision-making as to the purpose, audience, scope, and content of any intended product?
– http://www.archives.gov/nhprc/apply/evaluation-checklist.html


Why Do Evaluation?

• For many grantwriters, evaluation competes with finances as the least favorite part of developing proposals. It seems terribly complex, it can feel like it's forced on you by funders, and you may experience resistance from the program people whose cooperation you need to develop a good proposal. The evaluation section is treated as a necessary evil and an afterthought, something you throw together after the real work of your proposal is done. As a result, it is often one of the weakest parts of many people's grants. In a competitive funding round, a strong evaluation section may make the difference between getting funded and being turned down.

• Yet we have doubts. Most of us aren’t trained researchers and approach the subject with some skepticism. We know it’s going to take time and energy, and it’s not clear what it gets us. Let’s look at the pros and (perceived) cons of doing evaluation.

• The primary reason for doing evaluation is that it gives you reliable information to improve your program and services to your clients. Evaluation can provide data on whether a program works and why, which parts of it are effective and which need improvement, and whether it is the best use of your organization's scarce resources.

• The primary reason for including evaluation in grant proposals is that funders require it. Proposals with good evaluation sections score better and are more fundable than ones with poor (or no) evaluation. Being able to cite a positive prior evaluation of your existing program increases your chances of receiving future or continuation funding.

• Notice that these are not the same. Fortunately, the stick of funder requirements and the carrot of program improvement both lead to the same place. The job of the grant professional is to take advantage of the requirement and use it to improve your grant proposals and therefore your programs and possibly your organization itself.

• Beyond these two fundamental reasons for doing program evaluation, there are several more advantages to your organization both during grant development and during program implementation.

• In grant proposals:

• Evaluation provides a framework for improving both your grant proposal and the project you're seeking funding for, by providing measures to make your goals and objectives more meaningful.

• Evaluation provides a way to involve key stakeholders and direct service staff in program planning, increasing buy-in in your program. A positive evaluation can help your organization attract new staff, volunteers, funders and collaborators.

• Doing your own evaluation reduces the chances of funders or other outsiders undertaking their own evaluation of your program, which might be less informed and more detrimental to your organization.

• A strong, scientifically based evaluation can turn your innovative project into a research study, and could lead to it being named an "evidence-based practice".

• In program operations (after you get funded):

• Evaluation gives immediate feedback, allowing you to identify and fix problems in existing programs while you still have grant funding to implement the changes.

• Evaluation gives you a mechanism and techniques for getting feedback from your clients and participants about their perceptions of your organization and project, and a way of letting them know how the project is working. An evaluation showing that your program works can motivate your existing clients and attract new ones.

• Evaluation provides valuable information for your organization to use for longer-term strategic planning and program improvements.

• Evaluation provides solid data for disseminating information about your program, and for others who may want to replicate it elsewhere.

• http://www.grantsnorthwest.com/why-do-evaluation/


Resistance to Evaluation

• Despite these advantages, many nonprofit organizations and their staffs are resistant to, or suspicious of, evaluation for a variety of reasons. The grantwriter must often respond to these concerns, and even advocate for evaluation, in order to develop a successful proposal. Here are some common objections to evaluation and responses you can use in overcoming them. If these are your own beliefs, you'll need to think them through before trying to convince others.

• Objection #1: Evaluation is a way to judge us, to label our program a success or failure, and perhaps punish us (i.e., cut our funding).

Assumptions:
– Evaluators are looking for a perfect program, and if we don't have it we'll get marked down.
– Many people's only experience with evaluation has been employment performance reviews and college board tests. They are reluctant to be judged and ranked.
– If we can keep our program results fuzzy, "they" won't notice and we can keep doing what we're doing.

Response:
– By looking at what works and doesn't work in a program, evaluation can be used to improve your services, not to punish your organization. Evaluation also helps you learn "for whom, where and under what circumstances did it work?"
– Funders, management and governing boards are looking for results. The lack of good program evaluation is worse than finding things that need improvement.

• Objection #2: Evaluation is forced on us by the funders for their own purposes (like punishing us; see #1).

Assumptions:
– Evaluation is something we wouldn't want to do if funders didn't make us. It has no intrinsic value to our organization.

Response:
– If you design your own evaluation, you can have control over it, rather than ceding that control to others (like funders).
– If you do a good evaluation, it will help your organization improve its program, improve services to your clients, meet the funder's needs, and help you attract future funding.

• Objection #3: Money spent on evaluation is diverted from direct client services, where it could be helping people.

Assumptions:
– Evaluation is expensive.
– Evaluation is always competing for the same funds as services.
– Evaluation doesn't help our clients.

Response:
– Evaluation doesn't have to be expensive, and sometimes has separate funding.
– A positive evaluation can help you raise more funds in the future.
– Evaluation can improve services and outcomes for your clients, making direct service dollars more effective.

• http://www.grantsnorthwest.com/why-do-evaluation/


Objections Raised

• Objection #4: We're "people people", not "science people", and we don't understand or like statistics.

Assumptions:
– Evaluation is all abstract, dry statistical stuff, not relevant to the actual work we do.
– The outside evaluator will look down on us or manipulate the data to make us look bad.
– This is complicated and we can't learn it.

Response:
– While evaluation was once heavily statistical, program evaluation is increasingly moving to qualitative measurement, which looks at people's actual experience and the impacts of programs on their lives.
– Getting clarity about the uses of data and intentions for using findings in advance can help both the evaluator and program staff do a better job.
– A good evaluation process is designed to include program staff, and a good evaluation report is written for lay people to understand.

• Objection #5: Evaluation will take staff time away from providing direct services.

Assumptions:
– The program staff only want to do their regular jobs and aren't interested in improving their services.
– The best use of staff time is working directly with clients, even if the work is less effective than it could be.

Response:
– Evaluation data collection will take some of your program staff's time; how much depends on the evaluation design.
– Staff time spent on a good evaluation will improve their worklives and the lives of their clients.

• Objection #6: This is different from the way we've always done things, and we don't understand or like it.

Assumptions:
– If we change, it may be uncomfortable.
– The way we do things is the best it could be; it doesn't need improvement.

Response:
– This feeling may be the underlying basis for some of the first five issues.
– Doing evaluation may lead to change and may be uncomfortable, but it's worth it if the resulting improvements are real.
– With new information, nonprofits are discovering better ways of operating.
– If you embrace evaluation and incorporate it into your grant development process, it will strengthen your whole proposal. You'll develop more compelling needs statements, create stronger goals and objectives, and write better narratives. Your proposals will be more fundable, and once funded, the grant programs will be more successful.

• In fact, your whole organization will benefit, because evaluation is becoming more important not just in grant proposals but to all aspects of operation. The organizations that survive and thrive will be the ones that understand, measure and work to improve themselves and their programs. And the way that’s done is through program evaluation.

• http://www.grantsnorthwest.com/why-do-evaluation/


Formative Evaluation

• Formative evaluation is a type of evaluation whose purpose is to improve programs. It goes under other names, such as developmental evaluation and implementation evaluation. It can be contrasted with other types of evaluation that have other purposes, in particular process evaluation and outcome evaluation. An example of this is its use in instructional design to assess ongoing projects during their construction and to implement improvements. Formative evaluation can use any of the techniques used in other types of evaluation: surveys, interviews, data collection, and experiments (where these are used to examine the outcomes of pilot projects).

• Formative evaluation developed relatively late in the course of evaluation's emergence as a discipline as a result of growing frustration with an exclusive emphasis on outcome evaluation as the only purpose for evaluation activity. Outcome evaluation looks at the intended or unintended positive or negative consequences of a program, policy or organization. While outcome evaluation is useful where it can be done, it is not always the best type of evaluation to undertake. For instance, in many cases it is difficult or even impossible to undertake an outcome evaluation because of either feasibility or cost. In other cases, even where outcome evaluation is feasible and affordable, it may be a number of years before the results of an outcome evaluation become available. As a consequence, attention has turned to using evaluation techniques to maximize the chances that a program will be successful instead of waiting till the final results of a program are available to assess its usefulness. Formative evaluation therefore complements outcome evaluation rather than being an alternative to it.

• Formative evaluation is done with a small group of people to "test run" various aspects of instructional materials. For example, you might ask a friend to look over your web pages to see if they are graphically pleasing, if there are errors you've missed, if it has navigational problems. It's like having someone look over your shoulder during the development phase to help you catch things that you miss, but a fresh set of eyes might not. At times, you might need to have this help from a target audience. For example, if you're designing learning materials for third graders, you should have a third grader as part of your Formative Evaluation.

• The terms formative and summative evaluation were coined by Michael Scriven (1967) [1].

• Formative evaluation has also recently become the recommended method of evaluation in U.S. education. In this context, an educator would analyze the performance of a student during the teaching/intervention process and compare this data to the baseline data. There are four visual criteria that can be applied [2]:
– Change in mean
– Change in level, or discontinuity of performance
– Change in trend, or rate of change
– Latency of change [3]

• Another method of monitoring progress in formative evaluation is use of the number-point rule. In this method, if a certain pre-specified number of data points collected during the intervention are above the goal, then the educators need to consider raising the goal or discontinuing the intervention. If data points vary widely, educators can discuss how to motivate a student to achieve more consistently [3]. A minimal sketch of these checks appears below.
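To make the "change in mean" criterion and the number-point rule concrete, here is a minimal sketch; the score series, goal, and point threshold are all hypothetical values chosen for illustration.

```python
# Hypothetical illustration of two formative-evaluation checks described above.
baseline = [40, 42, 38, 41]           # scores collected before the intervention
intervention = [55, 60, 58, 62, 65]   # scores collected during the intervention
goal = 50                             # pre-specified performance goal
points_needed = 4                     # pre-specified number of points above goal

# Change in mean: compare average performance before and during the intervention.
change_in_mean = sum(intervention) / len(intervention) - sum(baseline) / len(baseline)
print(f"Change in mean: {change_in_mean:+.1f}")

# Number-point rule: if enough intervention data points exceed the goal,
# consider raising the goal (or discontinuing the intervention).
above_goal = sum(1 for score in intervention if score > goal)
if above_goal >= points_needed:
    print("Consider raising the goal or discontinuing the intervention.")
```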

• http://en.wikipedia.org/wiki/Formative_evaluation


Formative Continued

• Formative evaluation is a bit more complex than summative evaluation. As described above, it is done with a small group of people to "test run" various aspects of instructional materials during the development phase, including members of the target audience when needed (such as a third grader for third-grade materials).

• Here are some different authors' definitions of formative evaluation that will help you understand the difference.

• Scriven (1991): Formative evaluation is typically conducted during the development or improvement of a program or product (or person, and so on), and it is conducted, often more than once, for the in-house staff of the program with the intent to improve. The reports normally remain in-house, but serious formative evaluation may be done by an internal or an external evaluator, or preferably a combination; of course, many program staff are, in an informal sense, constantly doing formative evaluation.

• Weston, McAlpine, and Bordonaro (1995): The purpose of formative evaluation is to validate or ensure that the goals of the instruction are being achieved, and to improve the instruction, if necessary, by means of identification and subsequent remediation of problematic aspects.

• Worthen, Sanders, and Fitzpatrick (1997): Formative evaluation is conducted to provide program staff evaluative information useful in improving the program.

• Robert Stake: "When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative."

• Scriven (1996):
– "is research-oriented vs. action-oriented"
– "evaluations are intended - by the evaluator - as a basis for improvement"
– "the summative vs. formative distinction is context dependent"

• http://janus.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm


Summative Evaluation

• Summative evaluation provides information on the product's efficacy (its ability to do what it was designed to do). For example, did the learners learn what they were supposed to learn after using the instructional module? In a sense, it lets the learners know "how they did," but more importantly, by looking at how the learners did, it helps you know whether the product teaches what it is supposed to teach.

• Summative evaluation is typically quantitative, using numeric scores or letter grades to assess learner achievement.

• So what is the difference between a Summative Evaluation and a Learner Assessment?

• Although both might look at the same data, a Learner Assessment generally looks at how an individual learner performed on a learning task. It assesses a student's learning -- hence the name Learner Assessment. For example, you might assess an entire class of students, but you assess them individually to see how each did.

• A Summative Evaluation, on the other hand, looks at more than one learner's performance to see how well a group did on a learning task that utilized specific learning materials and methods. By looking at the group, the instructional designer can evaluate the learning materials and learning process -- hence the name Summative Evaluation. For example, here you may find that, as a group, all of the students did well on Section A of some instructional materials, but didn't do so well on Section B. That would indicate that the designer should go back and look at the design or delivery of Section B.
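As a rough illustration of that Section A versus Section B comparison, here is a minimal sketch; the sections, the per-learner scores, and the 0.70 "did well" cutoff are all hypothetical.

```python
# Hypothetical group-level summative look at instructional materials:
# average the group's scores per section and flag sections that may need redesign.
scores = {
    "Section A": [0.90, 0.85, 0.95, 0.88],   # proportion correct per learner
    "Section B": [0.55, 0.60, 0.50, 0.65],
}
THRESHOLD = 0.70  # invented cutoff for "the group did well"

for section, results in scores.items():
    mean = sum(results) / len(results)
    verdict = "OK" if mean >= THRESHOLD else "revisit design/delivery"
    print(f"{section}: group mean {mean:.2f} -> {verdict}")
```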

• http://janus.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm


Types of Evaluations

• Certain types of grant proposals require an evaluation plan. When an evaluation plan is required, it is usually made clear in the program announcement. Generally speaking, smaller, single-investigator grant proposals do not require one, while larger, more involved proposals do. If there is a question about this, the OVPR can assist the researcher in determining whether an evaluation plan is needed. The staff of the OVPR can help researchers write simple evaluation plans for small projects. However, proposals for large center grants require contracting a professional external evaluator.

• An evaluation plan is an integral part of a grant proposal, and should be considered equally with other aspects of the grant. Evaluation provides information that will help to improve the project; it should be used during the development stage and throughout the life of the project, as well as when the project is complete. Evaluation provides different kinds of information to those who are directly involved with the project (participants) and to those who are otherwise invested in the project, whether by credibility, control, or other capital (stakeholders). Participants and stakeholders should be clearly identified in the evaluation plan. A properly presented evaluation plan can be used to document the value of a project. The types, or stages, of evaluation are explained below.

Formative: Formative evaluation assesses initial and ongoing project activities. It begins during project development and continues through implementation. It is an assessment tool that can provide new and sometimes unanticipated insights or information that can then be used to improve the outcome of the project.

Summative: The purpose of a summative evaluation is to assess the quality and success of the project in reaching its stated goals. It presents the information collected regarding the activities and outcomes of the project. Although it addresses some of the same questions as the formative evaluation, it takes place after the completion of the project. The evaluation process can be broken down into a series of steps, from preparing the evaluation to carrying it out and interpreting it. A more detailed explanation can be found at the OVPR or online at the NSF (The 2002 User-Friendly Handbook for Project Evaluation).

• http://research.brown.edu/ovpr/evaluationplan.php


Evaluation Steps

• 1) The participants should develop a conceptual model of their project and identify key evaluation points. This is to ensure that all participants and stakeholders have a common understanding of the project's structure and expected outcomes, and will help them to focus on the most important elements of the project.

• 2)  From this starting point, participants can create evaluation questions and define measurable outcomes. The latter can be further divided into short term and long term outcomes, or the more immediate number of people affected by the project versus the overall changes that might not occur until after completion of the project.

• 3)  The next step is to develop an evaluation design that is best suited to the project. This means that the evaluation should be geared toward extracting and presenting the most useful information about the project’s objectives, as well as addressing its shortcomings. In developing the evaluation design, the researcher must determine who will be studied and when, and also select a methodological approach and data collection instruments. The NSF-sponsored Online Evaluation Resource Library is an excellent source, providing step-by-step instructions for developing an evaluation plan.

• 4) At this point, data collection can begin. There are several methods for this: quantitative, qualitative, and mixed-method (a brief illustration follows the three descriptions below).

• The quantitative method focuses on large numbers of respondents, and uses numerical measurement and data analysis based on statistical methods. It is often thought to be more objective and accurate, and tends to be favored by proposal reviewers because it provides results that can be analyzed using sophisticated statistical techniques. This can be misleading, since a number of factors may negate these results (survey respondents might not understand the questions, surveyors may misinterpret the responses, etc.).

• Qualitative data collection opts for a less broad approach, with a smaller pool of respondents. It is more descriptive and interpretive, making use of observations, in-depth interviews, and focus groups. Qualitative data collection yields richer data that many stakeholders believe to be more informative and representative. However, it is more costly and time-consuming, and does not get as large a number of responses as quantitative data collection.

• Mixed method employs both of the types detailed above, and is perhaps the most balanced approach. In addition to the evaluation results being strengthened by the use of more than one data collection method, each type (quantitative and qualitative) brings unique information to those results.
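Here is a minimal, hypothetical sketch of what mixed-method data might look like at analysis time: numeric survey ratings summarized statistically alongside a simple tally of themes coded from interview notes. All ratings and theme labels are invented for illustration.

```python
# Hypothetical mixed-method snapshot: quantitative ratings plus qualitative themes.
from collections import Counter
from statistics import mean, stdev

# Quantitative: 1-5 satisfaction ratings from a (hypothetical) participant survey.
ratings = [4, 5, 3, 4, 4, 5, 2, 4, 5, 4]
print(f"n={len(ratings)}  mean={mean(ratings):.2f}  sd={stdev(ratings):.2f}")

# Qualitative: themes coded from interview notes (labels invented for illustration).
themes = ["access", "staff support", "access", "scheduling", "staff support", "access"]
for theme, count in Counter(themes).most_common():
    print(f"{theme}: mentioned {count} times")
```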

• 5) Once data collection is complete for either a formative or summative evaluation, the data must be analyzed and provided to the interested audiences. The reviewers of a formative evaluation could be the principal investigator, the steering or governance committee, and either an internal or external evaluator (depending on the grant requirements). A formative evaluation plan should always clearly state how the evaluation will be used to improve the outcome of the project. The summative evaluation would involve all of the reviewers listed above, plus the program director for the funding agency. The staff of the OVPR can help prepare and review both types of evaluation plans.

• http://research.brown.edu/ovpr/evaluationplan.php


Wrapping It Up

• Evaluation defined
• Formulating a plan
• Measuring success
• Checklist and questions to ask
• Why evaluate?
• Objections and resistance
• Formative v. summative
• Steps of evaluation