WHEN AND HOW TO USE EXTERNAL EVALUATORS

BY TRACEY A. RUTNIK AND MARTY CAMPBELL

NOVEMBER 2002

In March 2002 we presented this material at the national conference of Grantmakers for Effective Organizations (GEO), Capacity Building for Impact: The Future of Effectiveness for Nonprofits and Foundations. Our session, titled "When and How to Use External Evaluators," was designed for program officers fairly new to evaluation who might be called upon to advise grantees about contracting an external evaluator or to directly commission evaluations on behalf of their foundations. We hoped to impart some of the skills and lessons we have learned through our own experiences with incorporating evaluation into a foundation setting.

We also scanned the literature and sought the input of colleagues and consultants with whom we have worked. They generously shared their experiences and offered tips for selecting and hiring the right evaluators and ensuring a smooth consulting engagement. We are especially grateful to Barbara Kibbe, former Director of Organizational Effectiveness and Philanthropy at The David and Lucile Packard Foundation, as we borrowed significantly from materials that she has produced on this subject throughout the years.

External evaluators are increasingly being used to help foundation staff set program goals, identify lessons, make midcourse adjustments, and measure program effectiveness. The success of their endeavors, however, relies in part upon how well the consulting engagement has been framed and whether the consultant is the right match for the project at hand. We hope that this material will serve as a resource to program officers to inform decisions about their allocation of resources and use of external consultants for program evaluations.

This publication is generously supported by the Annie E. Casey Foundation through the Funders Evaluation Initiative, a three-year effort to develop, strengthen, and support the funding community's capacity to conduct and use community-relevant program evaluation.

Tracey A. Rutnik
Director, Funders Evaluation Initiative
Association of Baltimore Area Grantmakers

Marty Campbell
Director of Evaluation
The James Irvine Foundation

The Association of Baltimore Area Grantmakers: a membership organization dedicated to promoting philanthropy

© Association of Baltimore Area Grantmakers 2002. Sharing of this material is encouraged, and permission is hereby granted to duplicate and distribute this document for noncommercial, educational purposes.

Association of Baltimore Area Grantmakers • 2 East Read Street, 8th Floor • Baltimore, MD 21202
(410) 727-1205 • www.abagmd.org

Inside . . .

Introduction and Background
Selecting and Engaging an Evaluator
Evaluation Terminology



INTRODUCTION

In the cycle of program planning by nonprofit organizations, evaluation is usually the last item on the agenda. There are many reasons for this. We all know that evaluation has negative connotations. It conjures up memories of the day we received our SAT results. It is associated with grades and scores. We know what these experiences can be like:

• They assume wrongdoing.
• They are only quantitative.
• They are mainly about control.
• They produce anxiety about how we measure up.

Many foundation staff experience one or several of the following fears about evaluation:

1. I don't speak the evaluation language.
2. I am not a numbers person.
3. Evaluation will drive the focus of the program.
4. It will make my grantees anxious.
5. It will make me look bad.
6. It will distract from the real work.
7. It will inhibit our entrepreneurial and innovative nature.
8. It means reading huge reports.

These fears are common and make us reluctant to pursue evaluation. Recently, however, new trends in thinking about and using evaluation have begun to facilitate positive responses and potentially powerful program effects.

AN EVOLVING DEFINITION

As a discipline, program evaluation grew out of the massive government programs of the 1960s and 1970s. Policymakers and bureaucrats turned to evaluation as a way to judge whether a program was successful and worth funding. Evaluation, however, did not provide the clear answers that decision makers desired. For their part, evaluators complained that their findings were ignored. In practice, evaluation was used by funders more as a control mechanism than as a learning tool, with the result that service providers became more focused on compliance than results and were rewarded for doing the paperwork rather than making a difference in clients' lives.

In response to this past, the concept of evaluation is shifting within the foundation field. Rather than simply using evaluation to determine whether grantees did what they said they would do, foundations are now using evaluation in ways that are fully integrated with the foundation's strategy and core planning processes.

Before embarking on any evaluation, you should address two fundamental questions:

1. What do you want to know?
2. Who will use that information, and how?

Identifying evaluation purposes and users generates questions that will then shape the evaluation methods. This sequence is important, because methods should not be chosen without purpose and audience in mind.

In terms of purposes, evaluations commissioned by foundations typically are undertaken to serve one or more of three general purposes:

1. Accountability: to measure the results of programs and account for use of foundation resources
2. Knowledge generation: to create new understanding about what works and what does not
3. Program planning or improvement: to support clear, well-designed, feasible, and measurable grantmaking programs and to support ongoing program planning, implementation, and overall organizational effectiveness


In terms of audiences, evaluations commissioned by foundations tend to address the information needs of one or more of the following audiences:

1. Foundation: board, executive management, program staff, donors (for community foundations)
2. Grantee: board, executive management, program staff, donors, clients, community residents
3. Field: nonprofit practitioners in the same field, consultants, academics, policymakers

All of these purposes and audiences are legitimate, but they represent different ways to think about and organize an evaluation. They differ in regard to the questions that would be posed and the methods and evaluator competencies required to address those questions. The table below presents these differences on several dimensions.

EVALUATION AUDIENCES AND USE

Priority Audience: Foundation Program Staff
Intended Use: Program improvements (foundation grants program and grantees); learning; accountability
Key Questions:
• Did the program reach the target population?
• Was the program implemented as intended?
• How can implementation be improved?
• Did the program achieve its intended outcomes?
• What are external barriers or areas of weakness/dysfunction and how to intervene?
• What sites or projects performed better than others?
• How can the design of the foundation's grants program be improved?
• Is this strategy still relevant and/or viable?

Priority Audience: Board of Directors
Intended Use: Accountability
Key Questions:
• Did the program achieve its intended outcomes?
• How did the program contribute to the foundation's strategic goals?
• Is the program worth the cost?

Priority Audience: Grantees
Intended Use: Program improvement; learning
Key Questions:
• Was the program implemented as intended?
• How can implementation be improved?
• How can program efficiency be improved?
• Is this strategy viable?
• How can successes be effectively communicated to others to attract resources?

Priority Audience: Policy Community
Intended Use: Accountability; learning
Key Questions:
• Whom is the program serving?
• Is the program reaching its target population?
• What do participants think about the program?
• Did the program achieve its intended outcomes?
• Is the program cost-effective?
• Is this strategy viable?
• In what contexts is it most effective?
• How does the program fit with other community improvement initiatives?

Priority Audience: Local Community Members
Intended Use: Accountability
Key Questions:
• Was the program implemented as intended?
• What is the program accomplishing?
• Is the program suited to our community needs?
• How does this program compare to others in the community?


RESPONSIBILITIES OF EVALUATORS

The purposes and audiences for an evaluation should also determine the role of the evaluator. For example, if the focus is on program improvement, the evaluator may serve as a coach or technical assistance provider. If the focus of the evaluation is on accountability, the evaluators must maintain independence and a certain distance in order to be objective and assess the value, merit, and worth of the program. Depending on the situation, an evaluator may assume several of the following functions and roles:

FUNCTIONS OF EVALUATORS

• Training: helping grantees and foundation staff to think empirically, focus on specificity and clarity, and use evaluation data for planning and decision making
• Analyzing and Describing: collecting, examining, and reporting findings
• Interpreting: What do the results mean? What is their significance? Are the results positive, negative, or good enough?
• Recommending: What are the implications of the findings?

ROLES OF EVALUATORS

• Researcher: Collect and analyze data; report the facts.
• Judge: Are results positive or negative? What is their value?
• Auditor: Ensure compliance with grant award.
• Program Planner: Specify program model and goals.
• Coach: Assist in understanding how to monitor progress and use results.
• Technical Assistance Provider: Develop management information system (MIS); improve program or organizational processes.
• Facilitator: Surface hidden agendas, support reflection.
• Advocate for Social Justice: Give voice to different perspectives and value diverse experiences.

EVALUATORS: WHO ARE THEY?

The evaluation field is very young, and as yet there is no certification process or degree in evaluation. This presents a challenge to program staff who may be seeking to assess the qualifications of evaluation consultants. Evaluators are trained in many fields and are based in many different types of institutions. Among the ranks are:

• Academic researchers
• Consultants from independent practices, for-profit firms, management support organizations, or nonprofit centers
• Volunteers: graduate students, former researchers or consultants
• Sociologists, economists, psychologists, educators

FACTORS INFLUENCING EVALUATION

Foundation program staff continually wrestle with the question of when it is appropriate to invest time and resources in evaluation. Several factors need to be taken into consideration:

• Stage of development of program
• Complexity of program
• Grantee experience with evaluation
• Heterogeneity of sites/grantees/participants
• Impetus for the evaluation
• Timeframe for evaluation
• Role of evaluator

COSTS

A common range for evaluation expenditures by foundations is 5 to 10 percent of the total program costs. The cost of an evaluation increases when you cannot rely on existing and publicly available data and must pay for primary data collection. The cost also increases in more complex programs, and with the necessity of collecting data from multiple sources and in multiple formats. And finally, the cost increases with a higher level of scientific rigor attached to the data collection effort. The following are some of the factors that influence the cost of an evaluation:


COSTS VARY BY:

• Complexity of program
• Number of sites
• Customization required
• Labor required for data collection, analysis, and reporting
• Scientific rigor
• Need for grantee capacity building
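To make the 5-to-10-percent rule of thumb above concrete, here is a minimal sketch in Python of how the range translates into a dollar budget. The function name and the $300,000 example figure are hypothetical, not drawn from the text, and the factors listed above would push a real estimate up or down within (or beyond) the range.

```python
# Minimal sketch of the 5-to-10-percent rule of thumb described above.
# The function name and the example program cost are hypothetical.

def evaluation_budget_range(total_program_cost):
    """Return (low, high) evaluation budget under the 5-10 percent rule."""
    return 0.05 * total_program_cost, 0.10 * total_program_cost

low, high = evaluation_budget_range(300_000)
print(f"Evaluation budget: ${low:,.0f} to ${high:,.0f}")
# -> Evaluation budget: $15,000 to $30,000
```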

GETTING TO RESULTS: WHY DOES IT TAKE SO LONG?

There are several stages to any evaluation that must be worked through before a foundation can get to "program results." The stages are listed below, along with the average timeframe required for each stage.

1. Clarification of program goals and identification of measures of success (3 to 9 months)
2. Evaluation design and creation of data collection instruments (3 to 6 months)
3. Grantee training in evaluation and/or data collection (1 to 2 years)
4. Data collection, analysis, and reporting of program implementation (1 to 3 years)
5. Data collection, analysis, and reporting of program outcomes (begin in year two of program)
6. Dissemination of program results and lessons (begin when some outcomes data are available, usually by year three of program)
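As a rough illustration of the ramp-up involved, the short Python sketch below sums the durations of the first two stages listed above. Treating the stages as strictly sequential is a simplifying assumption; in practice they often overlap.

```python
# Rough sketch: how long "ramp up" takes before any data collection begins,
# using the stage durations listed above (in months). Assumes the stages
# run strictly one after another, which is a simplification.

ramp_up_stages = {
    "Clarify goals and identify measures of success": (3, 9),
    "Evaluation design and data collection instruments": (3, 6),
}

low = sum(lo for lo, _ in ramp_up_stages.values())
high = sum(hi for _, hi in ramp_up_stages.values())
print(f"Ramp-up before data collection: {low} to {high} months")
# -> Ramp-up before data collection: 6 to 15 months
```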

A FEW CRITERIA FOR WHEN TO EVALUATE

Although many foundations are still developing and testing evaluation approaches, lessons have emerged, and the following are the most salient:

1. Programs and evaluation work best when anchored in clear goal statements and program theory. The ability of a foundation to learn from any evaluation depends in large part on the degree to which goals and expected outcomes have been articulated and are clear, measurable, and feasible.

2. The programs that foundations typically fund operate in complex social environments. Often, grants are part of a larger set of factors and circumstances that contribute to individual or community changes. It is therefore very difficult for a foundation to use evaluation to "prove" that the results achieved by any program can be directly attributed to the foundation. A more realistic use of evaluation in most circumstances is to measure program results and provide understanding of how the desired results were achieved or why they were not achieved.

3. Programs must achieve a certain threshold of maturity and stability before outcomes can be useful in drawing conclusions. Below that threshold, data is often not a reliable indicator of success or failure. Identifying intermediate steps or milestones that must be reached is a useful process to permit periodic assessment of program progress to ensure that you are on the right track.

4. Evaluation demands a significant amount of "ramp up" activity—identifying and hiring an external evaluator; specifying clear, measurable goals; and developing an evaluation plan and means of measurement. Only after this time-intensive, preliminary work can the evaluation begin collecting information, interpreting the results, and sharing and reflecting on those results.

[Figure: EVALUATION STAGES, showing the sequence Identifying Measures of Success → Evaluation Design → Grantee Training → Implementation Analysis → Outcomes Analysis → Dissemination]


5. Grantees are more likely to use and benefit from evaluation if they are ready to commit staff time and resources and if they have ownership of the process and findings, which means they must be actively involved in the evaluation and they should not be in the midst of an organizational crisis or major transition. Evaluators must therefore involve them by preparing substantial groundwork for the process and reinforcing the intended use of evaluation at every step along the way.

6. It only makes sense to conduct evaluations when foundation program staff members are receptive to outside scrutiny and critical feedback of their programs and willing to share and learn from their shortcomings or missed opportunities.

SELECTING AND ENGAGING AN EVALUATOR

GETTING STARTED

Once you have completed a thoughtful assessment of the evaluation project at hand, and you understand the role(s) the evaluator is expected to play, who the stakeholders and potential audiences are, and whether the situation calls for a scientifically rigorous evaluation or someone to work with or coach grantees, it is time to consider consultants. Remember that the evaluator role determines the expertise and style needed in a consultant; the grantee role influences evaluation implementation and use (e.g., grantees as "specimen," evaluation customer, or participant-evaluator); and the funder "sets the stage" for the evaluation, which influences credibility, trust, and the ultimate "value-added." Your organization's culture and the way it structures its relationships with grantees will influence how the evaluation is implemented.

REQUESTING PROPOSALS

To select qualified consultants, two options are to issue a formal request for proposals (RFP) or to "shop" the project around informally through networks and word of mouth. One of the best ways to locate consulting candidates is through personal networks and recommendations from colleagues. If that does not yield a short list of candidates, try searching organizational listservs, such as that of the American Evaluation Association (www.eval.org), or contacting grantmaking colleagues at Grantmakers for Effective Organizations (www.geofunders.org).

Whether proposals are requested informally or through an RFP, it is important to have a clear description of the project—its goals, expectations, available data and resources, budget, timeline—so the evaluator can prepare a proposal. It is always wise to solicit formal proposals and review them competitively, even if initial conversations lead you to just one potential consultant. The proposal provides an opportunity to review the potential consultant's writing style and will ultimately serve as the basis for the contract.

A word of advice: It helps to know your "standing" with regard to the size of the project. If the potential contract is small, such as a case study of one grantee, the consultant pool will be very different than if you have a $150,000 evaluation budget. It is often harder to find a consultant for the smaller jobs. Also, if you engage a larger consulting firm, you may find yourself at the bottom of the priority list. Be sure to inquire during the interview about the consultant's capacity to take on the project.

REVIEWING PROPOSALS

The more knowledge the project manager has about standard evaluation and research methodology practices, the easier it will be to evaluate proposals from equally competent consultants. It is a good idea to brush up on the strengths and weaknesses of the most common research methods. There are a fairly limited number of standard data gathering approaches (focus groups, surveys, participant observations) to understand. Someone who has absolutely no background in this area might want to look for training opportunities or basic texts that can explain data gathering approaches. It is important to understand the techniques and modes of inquiry you are "buying" as part of a consulting contract.

Page 7: WHEN AND HOW TO USE EXTERNAL EVALUATORS...External evaluators are increasingly being used to help foundation staff set program goals, identify lessons, make midcourse adjustments,

Association of Baltimore Area Grantmakers Page 7

INTERVIEWING POTENTIAL EVALUATORS

After sorting through the proposals and selecting the top candidates, it is important to conduct interviews. Use the proposal as the beginning of a conversation about the nature and scope of the evaluation project. If there are things that you don't understand, be sure to ask! If there are activities that don't seem necessary, say so. If the budget does not seem reasonable, ask for an explanation. If other consultants suggested strategies that appealed to you, ask whether the proposed plan could be modified to incorporate those additional elements. The discussion with the prospective consultant is important, as it will be the basis for the next stage of the process: creating a contract and refined scope of work.

A word of warning: If the proposal is indecipherable because it is written in "science-speak," then the consultant's work products may be as well. This is not a good sign. The proposal should be written in a language that you can understand and should present a coherent plan. Also, don't be scared off because you are not "a numbers person." Sophisticated data analyses are quite rare for most evaluations in a foundation setting—generally, you need to understand means and percents, not hierarchical linear modeling. If complicated analyses or methodologies are proposed, they—and the appropriateness and advantages of their use—should be explained in the proposal or during the interview.
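As the paragraph above suggests, the arithmetic behind most foundation evaluations is modest. The hypothetical Python sketch below computes a mean and a percent from made-up survey ratings; no statistics library is needed.

```python
# Minimal sketch: the "means and percents" most foundation evaluations rely on.
# The ratings below are made-up survey responses on a 1-5 satisfaction scale.

ratings = [4, 5, 3, 4, 5, 2, 4]

mean_rating = sum(ratings) / len(ratings)
percent_satisfied = 100 * sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"Mean rating: {mean_rating:.1f}")              # Mean rating: 3.9
print(f"Rated 4 or higher: {percent_satisfied:.0f}%")  # Rated 4 or higher: 71%
```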

INTERVIEWING EVALUATORS

Competencies

√ What is your view of how and when external evaluations are most useful?

√ Have you undertaken evaluations of initiatives (programs, projects) similar to this in the past? Please discuss.

√ From what you know so far, would you say that this initiative (program, project) is ready for an external evaluation? Why/Why not?

√ Are you familiar with any prior research, evaluations, studies, or publications that are relevant to the initiative (program, project) we want to evaluate?

√ Are there accepted/proven best practices in this field that we can refer to or build on in our evaluation?

√ What methodologies would you recommend for a formative (summative) evaluation of this initiative (program, project)?

Approach and Style

√ How would you describe your style as an evaluator? Do you see yourself as a proponent of empowerment, participatory, process, or impact evaluation? Whose writing on the subject of evaluation do you most admire and why?

√ How would you propose to involve various stakeholders in the design and implementation of the evaluation?

√ What would be your ideal relationship with your client? How would we work together?

√ Please describe a really satisfying evaluation that you led. What made it satisfying for you?

Capacity

√ Do you have the time to take on this work and give it the necessary personal attention?

√ To what extent would associates be involved in the work? May we meet the other team members?

A Final Question

√ You're the expert. What else should I be asking you to help us determine if we would work well together and if we have basic agreement about the contract terms and scope of work?

From Barbara Kibbe, The David and Lucile Packard Foundation, 2000. Used with permission.


But more important, the interview is a chance to get a feel for the consultant's approach, style, and personality. The "perfect" consultant on paper may turn out to be very difficult to get along with or someone that you might not be comfortable foisting on your grantees. Sometimes it is better to go with the consultant with whom you are confident you can work rather than the one with superior credentials. Even if you are very pleased with a prospective consultant, it is a good idea to ask for and check references. Also ask to see examples of final work products. (Are they professional-looking reports or pages stapled in the left-hand corner? Is the writing style sophisticated yet readable?)

Some things to consider:

• Formal academic preparation/specialized training
• Evaluation expertise
• Evaluation philosophy or approach
• Evaluator's client base—the size of the contract may influence the relationship or responsiveness
• Capacity: Do they have sufficient staff and time to do the work?
• Track record: Check references and ask to see samples of work products!
• Geography: Will a consultant 1,000 miles away be as responsive as a local one?
• Personal style: Can you work with this group? Can your grantees?

CONTRACT AND SCOPE OF WORK

A detailed contract is essential for both the foundation and the evaluator. The scope of work developed at the outset of the consulting project should be as precise as possible—with full descriptions of expected deliverables (including presentations to the staff and board), timelines, etc. This will ensure that everyone has the same expectations about the consulting engagement and expected outcomes, which will be critical if for some reason the project goes awry.

The contract should also include a payment schedule. Generally, there are three payment options:

1. A periodic payment schedule. Fixed-sum payments at regular intervals (generally monthly or quarterly) through the course of the project, presuming satisfactory progress.

2. A pay-as-you-go arrangement. Pay for each task or component of the project as it is completed.

3. A lump-sum payment. Usually due at the end of the project. Often a consultant will request a payment of a retainer up front—full payment at the end is quite risky for the consultant.¹

Regardless of which option is chosen, it is a wise idea to tie payments to major deliverables, such as mid-year summary memos or interim reports, year-end progress reports, and the final report. This will help ensure timeliness and provide leverage if things aren't going as planned. Contract language should ensure that program staff has an opportunity to review major deliverables and request modifications if they do not meet expected standards of quality. If the evaluation team has been hired primarily because of the qualifications or expertise of certain staff, then those staff should be named directly in the contract, and the funder should have the right to terminate the contract if the team changes markedly and is no longer satisfactory—this will protect the foundation from a "bait and switch."
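One way to see the advice above in practice: the sketch below lays out a hypothetical payment schedule tied to major deliverables. The deliverable names, shares, and contract total are illustrative assumptions, not terms from the text.

```python
# Hypothetical sketch: tying payments to major deliverables, as advised above.
# Deliverables, shares, and the contract total are illustrative only.

payment_schedule = {
    "Approved work plan": 0.20,
    "Mid-year interim report": 0.25,
    "Year-end progress report": 0.25,
    "Final report and board presentation": 0.30,
}

contract_total = 60_000
assert abs(sum(payment_schedule.values()) - 1.0) < 1e-9  # shares total 100%

for deliverable, share in payment_schedule.items():
    print(f"{deliverable}: ${share * contract_total:,.0f}")
```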

The terms of the agreement should be tight enough to ensure that the client's needs are met, but flexible enough to ensure that mid-course corrections or changes in the evaluation strategy are possible. Remember, evaluation is not a precise science, and modifications in the work plan may become necessary. Evaluations generally take more time than anticipated; delays and obstacles are common. If possible, maintain an evaluation "reserve fund" in the budget so that if externalities require significant modifications or additions to the contract, resources are available to support additional research or activities.

¹ Barbara Kibbe and Fred Setterberg, Succeeding with Consultants: Self-Assessment for the Changing Nonprofit. The Foundation Center, 1992.


CONFIDENTIALITY AND OWNERSHIP OF THE DATA

It is important to clarify up front how the evaluation team will handle issues of confidentiality. If they discover that a grantee is bankrupt, do they tell the foundation immediately, or at all? How will that affect the foundation's relationship with other grantees? It is also important to specify who has ownership of the data and to whom the information can be given. Release of information to outside parties should always be cleared with appropriate staff. Copyright ownership should be addressed in the contract to clarify whether the consultant has any right to publish the evaluation results.

OTHER CONSIDERATIONS—WHO IS THE CLIENT?

In negotiating the relationship with the consultant, it is very important to determine and clarify for whom the work is being done. For some evaluations, the funder is the sole client. For others the grantee may be the client, or it may be a three-way partnership through which the grantee is empowered to invite the grantmaker into the learning process to share what is and what isn't going well. If the grantee is the primary client, how the consultant is selected affects the quality of the relationship and the quality of the work—the funder should not mandate which evaluator should be hired. Rather, if the grantee is the client (perhaps the funder is merely paying for the evaluation), the grantee should take an active role in the selection process. The funder can advise or recommend during this stage, but ultimately it is best to yield to the grantee for the final decision. If you have reservations about the consultant the grantee has selected that are strong enough to jeopardize the nature and extent of the foundation's support for the project, discuss those concerns with the grantee immediately.

SOME GENERAL ADVICE

Grantees should understand why the foundation is conducting the evaluation, how the findings will be used, and what level of participation or cooperation your organization expects between the evaluation team and theirs. Also, give them the opportunity to comment, ask questions, and voice concerns. The evaluation experience should not be threatening or mysterious for your grantees! It is unwise to have the evaluator make the first contact with the grantees, as this might cause unnecessary anxiety and may strain the evaluator's ability to develop a positive relationship with the grantee organization, which is generally critical for success.

MANAGING THE EVALUATOR

Managing an external evaluator or consulting team takes much more time than most people expect. Regular contact is a must! Make sure that a foundation staff person has the time, willingness, and "authority" to adequately manage the consulting team. That person should be comfortable "pushing back" if the evaluation seems off track.

STANDARD ELEMENTS OF A CONTRACT

• Person from the foundation responsible for monitoring the contract and work
• Key reports, products, and due dates
• Duties and responsibilities, including details about the evaluation work plan
• Required meetings, including attendance at conferences or other convenings
• Payment amount, payment schedule, and requirements for payment (fees and expenses)
• Liability release
• Materials provided by the evaluator
• Any special terms or conditions, including those for terminating the contract
• Data ownership and rights
• Confidentiality of data
• The means of resolving disputes
• Any restrictions on publishing evaluation results (copyright ownership)

Adapted from the Annie E. Casey Foundation, Program Manager's Guide to Evaluation.


WHEN THE CONSULTING RELATIONSHIP FAILS… WHAT ARE THE DANGER SIGNS?²

Barbara Kibbe and Fred Setterberg provide useful advice on the danger signs of a failing consulting relationship in their book Succeeding with Consultants, some of which is excerpted below.

Deadlines are missed. Reliability is crucial. If your consultant lacks the time, skill, commitment, or interest to keep up his or her side of the bargain, you're in deep trouble. Confront this issue directly. Treat deadline inconsistency in the same way you would deal with it from a staff member. But first, of course, make certain that you've provided the consultant with all of the information and resources that you had promised—and on time.

Communication is difficult. You could be calling too often. Or the consultant is over-committed and can't keep up with his or her present tasks. Demand that you get the full attention you've contracted for—or end the relationship.

Constant changes in the work plan. As the consultant's road map, the work plan may need to be renegotiated during your collaboration as new information is unearthed or as initial strategies prove ineffective. But it should not be in constant flux. Usually the client, rather than the consultant, will be guilty of tinkering too much with the work plan—often because of (probably unconscious) uncertainty about the desirability of change or a failure to stipulate (or comprehend) the project's objectives. Another variation on this counterproductive theme is the constant shifting of the consultant's contact person inside your organization.

Some consulting engagements just need stronger management to get them back on track. Others, however, may be hopeless failures. If a poor relationship with a consultant seems intractable, or if the products are horribly deficient or delinquent, you may want to consider terminating the contract. That's when you'll be thankful for strong contract language that protects the foundation's interests.

² Barbara Kibbe and Fred Setterberg, Succeeding with Consultants: Self-Assessment for the Changing Nonprofit. The Foundation Center, 1992. Used with permission.

EVALUATION AS A PROCESS

Finally, keep in mind that evaluation is a process that should not end with the report! Build in steps—and allocate resources—to implement the findings to improve future grantmaking and to share the lessons learned!

TIPS FOR MANAGING THE EVALUATOR

• Establish a desired communication pattern early and in writing. One person should be the point of contact for the consulting team—that person should facilitate contact with others, provide advice and guidance to the team, and gather additional information and/or resources to support the evaluation when necessary.

• Maintain communication throughout the project—you'll need to know how the work is coming along, and the consultant needs to be informed about things that arise at the foundation.

• Develop strategies for monitoring the work of the consultants (not just deliverables). You may want to accompany the evaluation team on select site visits. Are they organized? Do they work well with grantees? Are they doing everything that they said they would and in a manner with which you are comfortable?

• Seek input from grantees or other sources. Were they comfortable with the evaluator's approach?

• Don't be a terrible client! Remember that consultants get paid for their time—every call, meeting, and presentation "uses up" valuable contract time. Do what you can to facilitate their work—don't make it harder on them. If you want or need much more than was anticipated, expect to PAY more for it!


EVALUATION TERMINOLOGY

Some types of evaluation:

Formative evaluation—Evaluation undertaken in the initial stages of program operation to inform program development and implementation. Formative evaluation addresses how the program is working and how it might be improved. Often, formative evaluation explores alternative program approaches to see which variant works best.

Summative evaluation—Evaluation of the overall worth of the program after it is in operation.

Participatory evaluation—Evaluation that seeks to actively involve all stakeholders in the evaluation process. Participatory evaluation seeks to create an egalitarian process in which all stakeholders' perspectives are considered and in which the evaluation results are useful and relevant to the stakeholders.

Empowerment evaluation—Evaluation in which evaluators (also known as coaches) work to enhance the participants' evaluation capacities and help participants apply their knowledge of the program to the evaluation. Empowerment evaluation uses evaluation tools to help participants reach decisions and identify areas needing change.

Process evaluation—Evaluation that describes implementation of a program—that is, the activities undertaken as part of a program. A process evaluation can provide information on a program's structure, procedure, and accomplishments; problems encountered in the program implementation; and how the problems were resolved or not.

Outcome evaluation—Evaluation focused on the benefits or changes occurring in program participants after program participation.

Impact evaluation—Evaluation focused on the effect of a program with the intention of being able to attribute any observed changes to the program. Attributing impact solely to the program requires certain evaluation designs that control for other factors that might affect the outcome of interest (see "Control/Comparison Group" and "Randomized Trial Design").

Cluster evaluation—Projects are designated as a cluster because they support similar strategies, serve the same target population, or collectively address a specific outcome, such as systemic institutional or policy changes. Cluster evaluations examine such a cluster of projects to determine how well they achieve the broad goals of a programming initiative. (Kellogg Foundation)

Activities/Strategies
What a program or organization does with its resources to fulfill its mission, such as sheltering and feeding homeless families.

Attribution
The process of identifying the single or multiple factors responsible for an observed outcome; for example, program participation reduced stress by 50 percent.

Benchmark/Performance Target
Desired level of an indicator, possibly within a given time frame—for example, reduce child abuse by 25 percent by the year 2000.

Conceptual Framework/Model
Pictorial representation of the most critical elements underlying a program's operation and expected outcomes. A conceptual framework identifies critical areas for evaluation.

Context/Contextual Factors
Factors outside the control of a program or project that might have an impact on its effectiveness—for example, unique characteristics of a particular community, the economy, new federal or state legislation.

Control/Comparison Group
In an evaluation involving two or more groups, the group that does not receive the full program or treatment that is the focus of the evaluation. A control group receives no program. A comparison group may receive some program, but not the program being evaluated.

Correlation
The co-occurrence, either simultaneously or sequentially, of two events or characteristics that may or may not be causally related to one another—for example, hiring a new director and an increase in employee satisfaction.

Evaluation
The systematic collection of information about the activities, characteristics, and outcomes of a program to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming. (Michael Quinn Patton)


EVALUATION TERMINOLOGY (CONTINUED)

Experimental Group
In an evaluation involving two or more groups, the group that receives the program or treatment being evaluated.

Generalizability
Extent to which evaluation findings can be applied to similar programs in other settings.

Goal
Broad, general statement of purpose. A goal can also be more closed-ended and specify the long-term intended result of the project or program—for example, increasing the participation of minorities in higher education.

Indicator
Measure of an outcome—for example, the proportion of the population with high blood pressure.

Inputs
Resources dedicated to or consumed by the program, such as money, staff, and facilities.

Logic Model
Pictorial representation of how a program is expected to work. Logic models show the theory underlying the program by linking assumptions/principles and/or resources with activities/processes and short- and long-term outcomes.

Milestone
A critical point that a project must reach to ensure that it is on course in achieving its stated outcomes.

Objective
Specific, measurable, time-bound operational statement of the desired accomplishments of the program that are necessary to achieve its goal. Example: Increase the number of minority students taking four years of mathematics in secondary school.

Outcome
Benefit or change resulting from an action. Often used to refer to benefits to individuals, families, communities, or larger populations during or after program participation (for example, reduction in the number of low-birthweight babies). Can also refer to change in systems, organizations, or programs—for example, improved communication or increase in number of clients served.
• Short-term outcomes: outcomes that can be reasonably expected within the first one to two years of project or program
• Intermediate outcomes: outcomes expected in three to five years
• Long-term outcomes: outcomes expected in more than five years

Outputs
Direct products of program activities. Usually measured in terms of volume of work accomplished, such as number of classes taught.

Qualitative Techniques
Measures that provide descriptive information about situations, events, and/or behavior of individuals. Qualitative techniques focus on what the program experience means to participants in their own words, through, for example, interviews, observations, document analysis, or focus groups.

Quantitative Techniques
Measures that strive for precision by focusing on things that can be counted—for example, data from questionnaires, tests, standardized observation instruments, program records.

Randomized Trial Design
Evaluation design in which potential program participants are randomly assigned to either the experimental or the control group. It is the strongest design for showing program impact; however, it is not applicable for many programs.

Results-Based Accountability (RBA)
Management and monitoring of projects and programs focused on the desired outcomes of the programs rather than on activities and inputs.

Validity
Extent to which a data collection technique measures what it is intended to measure—especially important in identifying indicators for outcomes.
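As a supplement to the Logic Model entry in the glossary above, here is a minimal Python sketch of the chain a logic model draws, reusing the hypothetical sheltering-and-feeding program from the Activities/Strategies entry. Real logic models are diagrams; this data structure only illustrates the linked elements, and every item in it is illustrative.

```python
# Minimal sketch of a logic model's linked elements (see the glossary entry
# above). The program and all of its elements are hypothetical.

logic_model = {
    "inputs": ["grant funds", "staff", "shelter facility"],
    "activities": ["shelter and feed homeless families"],
    "outputs": ["number of families sheltered", "number of meals served"],
    "short-term outcomes": ["families stabilized in emergency shelter"],
    "intermediate outcomes": ["families placed in transitional housing"],
    "long-term outcomes": ["families settled in permanent housing"],
}

for element, examples in logic_model.items():
    print(f"{element}: {', '.join(examples)}")
```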