The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach

OCTOBER 2005

Funded by and prepared for The California Endowment




Researched and written by: Kendall Guthrie, Justin Louie, Tom David and Catherine Crystal Foster
Blueprint Research & Design, Inc.



Foreword from The California Endowment

During the last few years, The California Endowment has placed increased focus on policy and advocacy work. We believe that for many societal challenges, creating policy change can have a more systemic and lasting effect on improving the health care of Californians than solely funding direct services. However, we have discovered that documenting the impact of the foundation’s work in this arena is complex, and the unpredictable nature of the policy environment poses unique challenges to the classic program evaluation strategies honed by assessing direct services.

Therefore, The Endowment asked Blueprint Research & Design, Inc. to gather information from evaluation experts across the country and the foundation’s own stakeholders to guide us in developing an understanding of the issues in policy change evaluation, and to recommend an approach to strengthening the foundation’s public policy change evaluation practice. Via in-depth interviews, stakeholders identified their needs, opinions and priorities around strengthening the foundation’s policy change evaluation practice. Interviews with evaluation experts provided advice on conducting effective policy change evaluation in a foundation setting, and identified tools and written reports that might inform the foundation in developing an evaluation strategy.

As we continue to refine our evaluation approach in order to better understand the effectiveness of our grant making, as well as to help grantees improve their own work, we want to share the early learnings from this “state of the field” report. We invite you to share any thoughts and reactions on this important issue.

Robert K. Ross, M.D.
President and CEO
The California Endowment


Funders and nonprofits involved in policy and advocacy work need more effective ways to assess whether their efforts are making a meaningful difference.



Table of Contents


Part I: Background & Overview 4
  Introduction 4
  Assumed Priorities for Policy Change Evaluation 6
  Challenges in Evaluating Policy and Advocacy Grants 7
  The Current State of Foundation Practice on Policy Change Evaluation 10
  Guiding Principles for Policy Change Evaluation 12

Part II: A Prospective Approach to Evaluation 14
  Introduction 14
  Benefits of a Prospective Approach 15
  Step 1: Develop a Conceptual Model for the Policy Process 16
    Understanding the Policy, Social Change and Advocacy Processes 17
    Conceptual Frameworks 18
      Social Change Model 19
      Policy Change Model 19
      Advocacy Model 21
  Step 2: Develop a Theory of Change 21
    Practical Considerations in Developing a Theory of Change 22
  Step 3: Define Benchmarks and Indicators 25
    Capacity-Building Benchmarks 28
    Frameworks for Benchmark Development 28
    Practical Considerations in Selecting Benchmarks 31
  Step 4: Collect Data on Benchmarks 34

Conclusion 37

Appendix A: Methodology 41

Appendix B: Bibliography 43

Appendix C: Examples of Retrospective Policy Change Evaluations 49

Appendix D: Comparison of Frameworks to Guide Benchmark Development in Policy Change Evaluation 50

Appendix E: Sources and Details for Benchmark Frameworks 52


Part I: Background and Overview

Introduction

Foundations and nonprofits today face a growing need to measure the outcomes of their efforts to improve society. For the majority of nonprofits that focus on direct services, and for their funders, measuring impact has moved beyond merely counting the number of people served toward understanding the relevant changes in service recipients’ attitudes, awareness and behaviors. Documenting the connections between increased services funded by a foundation and increased social change has been relatively straightforward.

Funders and nonprofits involved in policy and advocacy work, however, often struggle over ways to assess whether their hard work made a meaningful difference. It can take years of building constituencies, educating legislators and forging alliances to actually change public policy. Therefore, foundations, which typically fund in increments of one to three years, have difficulty assessing the progress of grantees pursuing policy work. Moreover, foundations often want to know what difference their money made. Yet because the murky world of public policy formation involves many players, teasing out the unique impact of any one organization is almost impossible.

During the last few years, a handful of foundations and evaluation experts have been crafting new evaluation methodologies to address the challenges of policy change work. Many more foundations and nonprofits are eager for help. At this moment, what methods, theories and tools look promising for evaluating policy change efforts?

Originally prepared for The California Endowment, this paper presents a recommended approach to policy change evaluation. The foundation, one of the largest health funders in the country, is increasingly funding policy change and advocacy work. This research began as an effort to inform the foundation’s evaluation planning around those issues. However, The Endowment’s goals and challenges are not unique. The analysis


and proposed strategies addressed here should prove useful to a variety of foundations, evaluators and policy analysts with an interest in advocacy and policy change.

The paper is designed to outline an approach to policy change evaluation grounded in the experience of experts and foundation colleagues. (See Appendix A for the research methodology.) This paper first posits three key priorities in evaluating policy change work, drawn from interviews with grantees and staff from The California Endowment on their needs concerning policy change evaluation. It also discusses the challenges inherent in monitoring and assessing these types of grants. It then provides a brief description of the “state of the practice” in evaluating policy change and advocacy work within the foundation community. Because the practice of evaluating a foundation’s impact on policy advocacy is fairly new, our review did not unearth a commonly practiced methodology or widely used tools. However, seven widely agreed-upon principles of effective policy change evaluation did emerge.

Based upon these seven principles and the advice of national evaluation experts, we recommend a prospective approach to policy change evaluation. Part II of the paper is devoted to outlining this recommended approach. A prospective approach would enable a funder to define


Clarification of Terms

Goals vs. Outcomes – For the purposes of this paper, goals are defined as broader, long-term visions of change, whereas outcomes are the incremental changes that indicate progress toward the larger goal.

Benchmarks vs. Indicators – Benchmarks are the outcomes defined in advance as those the evaluation is monitoring to see change or progress. Indicators are the specific ways in which the benchmarks will be measured. For example, if one of the benchmarks for a project is broader media exposure for the issue, one of the indicators may be the number of earned media pieces in major media outlets.

Theory of Change vs. Logic Model – Theories of change broadly describe a project, linking its activities or strategies with desired outcomes, and in particular describing the how and why of those linkages. Logic models tend to be more graphical in nature and attempt to map out all elements of a program, though they tend not to describe how and why the pieces are interconnected. Logic models are more useful for describing an existing program than for shaping new programs.


1 Note that these institutional priorities are distinct from the specific needs and desires of individual stakeholders, as identified in the needs assessment.

expectations from the start of a grant, monitor progress and provide feedback for grantees on their work while the program is operating. This paper outlines key components of this prospective approach, namely setting up an evaluation framework and benchmarks prospectively, at the beginning of a grant, and then assessing progress and impact along the way. This paper also highlights the articles, manuals and expert advice most relevant and useful in developing this prospective approach.

Key steps in a prospective evaluation approach, as discussed in this paper, include:

• Adopting a conceptual model for understanding the process of policy change;

• Developing a theory about how and why planned activities lead to desired outcomes (often called a “theory of change”);

• Selecting benchmarks to monitor progress; and

• Measuring progress toward benchmarks and collecting data.

Assumed Priorities for Policy Change Evaluation

To guide the research on strategies for policy change evaluation, Blueprint began by interviewing The Endowment’s grantees and staff to understand their needs, challenges and priorities regarding this issue. This stakeholder needs assessment identified three broad priorities for policy change evaluation.1 It is likely that these priorities are widely shared by other foundations, and our recommended approach to policy change evaluation assumes so.

1. Setting Grant Goals and Monitoring Progress:
When a grant starts, nonprofits and foundation staff need to agree upon goals for the grant. The grantee agrees to be accountable for working towards these goals. Particularly for one-year grants, actual policy change is very rarely a reasonable grant goal, even though it is the ultimate goal for both the grantee and funder. Therefore, grantees and program staff need to identify how grantees expect their work to contribute to policy change over the long run, but also identify a grant goal that will demonstrate progress towards the ultimate goal—without holding the grantee directly accountable for changing the policy. Both want milestones that indicate actual change in the policy environment (e.g., increased legislators’ awareness of an issue), not just an inventory of activities or process indicators (e.g., number of flyers mailed to legislators). This goal identification process is especially challenging for the grant-making staff and the direct service providers with less experience in policy change work. The challenge is heightened by some grantees’ concern that they and the funder will not see eye-to-eye on what is both achievable and impressive enough to be grant-worthy.



2. Assessing Impact at the Grantee, Program and Foundation Level:
Both funders and grantees want more effective ways to assess and document their impact. Grantees want to show their funders, donors, colleagues and constituents that they are making a difference. Funders want to document for their board members and the public that their grant dollars were spent wisely. Since actual public policy change takes a long time, both grantees and program officers want methods to assess their contributions as they move along, not just when (or if) a policy victory is achieved. This requires a more sophisticated way to think about impact in the policy arena. Both grantors and grantees need new ways to show how they are building up to, or making the environment more conducive to, their desired policy change, or laying the groundwork for successful implementation of the policy change—before the policy change actually occurs.

3. Improving Programs and Developing Knowledge about Effective Strategies:
Documenting impact is valuable, especially to the funder and to a grantee that wants to replicate successes and raise more money. In addition to assessing a program’s impact, information about what worked and did not work can be valuable to a wider audience. This information can help both foundations and their grantees improve the specific project and identify strategies that can help other advocates increase their effectiveness. Sharing these lessons with colleagues outside the foundation can also serve to strengthen the field of philanthropy by improving the knowledge base on which grant-making decisions are made. It can even help policymakers in the future, by documenting models of positive and effective relationships with advocacy organizations.

Challenges in Evaluating Policy and Advocacy Grants

For the last 20 years, the social scientific method has served as the dominant paradigm for evaluations in the foundation world. A social services program identifies its change goal and the three to four factors or inputs that will stimulate that change. The program, for the most part, assumes there is a direct, linear, causal relationship between the inputs and the desired effect. Evaluation is then about quantifying those factors and measuring the impact.

However, as foundations have directed an increasing proportion of their grant dollars to public policy and advocacy work, they are finding that their traditional evaluation approaches do not work well. The seven key challenges are:

• Complexity
• Role of External Forces
• Time Frame
• Shifting Strategies and Milestones
• Attribution
• Limitations on Lobbying
• Grantee Engagement

The challenges in evaluating policy and advocacy have led both to foundations’ reluctance to fund policy and advocacy



work as well as an absence of documented evaluations, even among those foundations that do fund advocacy. The emerging community of practice around policy change evaluation is trying to find strategies to address these challenges.

2 See Kingdon, J. Agendas, Alternatives and Public Policies. New York: HarperCollins, 1995.

Complexity

Policy and advocacy grantees are trying to advance their goals in the complex and ever-changing policy arena. Achieving policy change usually involves much more than just getting legislators to change a single law. In fact, effective policy work often doesn’t even involve changing a law. It involves building a constituency to promote the policy change—and to monitor its effective implementation, once achieved. It often requires developing research to assess the impact of different policy proposals. It requires public awareness campaigns and closed-door deal-making with elected officials. Sometimes, policy success even means the absence of change, when, for example, advocates are trying to stop policy changes they feel would be detrimental. The path to policy change is therefore complex and iterative. In determining what actions will create change or how to assess progress, linear cause-and-effect models are not particularly helpful for understanding the nonlinear dynamics of the system.

Role of External Forces

Numerous players and dynamics outside the grantee organization, such as an opposition organization or the political and economic environment, heavily shape policy and advocacy work. Unlike social services grantees, policy grantees often face players actively working to thwart their efforts. The influence of these external forces is hard to predict and often impossible for a grantee to control. It is more appropriate to hold a direct service grantee accountable for achieving certain outcomes because they have much more control over the key factors that influence their ability to achieve those outcomes—primarily the quality of their program. In contrast, policy grantees can do everything within their control “right” (e.g., generate good media coverage, speak with legislators, provide good research) and still not achieve their goal because of factors such as a change in the party in control of the legislature or an economic downturn. In advocacy work, a grantee’s efforts can build potential for a policy outcome, but alone cannot necessarily achieve a specific outcome. Change often happens when an opportunity window opens.2

Time Frame

The length of time necessary to achieve interim and ultimate goals differs for policy change evaluations. Policy goals usually are long-term, beyond the horizon of a typical one-to-two-year grant. In most cases, there will be little or no actual public policy change in one year. It is therefore inappropriate to measure the effectiveness of most foundations’ one-year policy and advocacy grants by the yardstick, “Did policy change?” A more useful question might be, “How did the grantee’s work improve the policy environment for this issue?” or “How successful was the grantee in taking the necessary steps toward the policy change?” The field needs new ways



to think about appropriate outcomes that can mark progress toward the ultimate policy goal but are achievable within the time frame of a grant.

Shifting Strategies and Milestones

Since dynamics in the policy arena can change quickly, advocates must constantly adjust their strategies to fit the current environment. In policy change evaluation, it is challenging to choose short-term outcomes and benchmarks at the onset of a grant and measure progress against them, because the grantees’ strategies may need to change radically over the course of the grant—even as their ultimate policy goal remains the same or similar. This is partly due to the influence of external forces and the complexity of policy advocacy, discussed above. It requires discipline, attention and a deep understanding of the issues and the policy environment to craft an approach and a set of goals that are flexible without being merely reactive or haphazard.

Attribution

Most policy work involves multiple players, often working in coalitions, and in fact requires multiple players “hitting” numerous leverage points. In this complex system, it is difficult to sort out the distinct effect of any individual player or any single activity. Isolating the distinct contributions of the individual funders who supported particular aspects of a campaign is even more difficult. In the policy advocacy world, there is no way to have a “control group.” Neither the grantee nor the funder can ever measure how the advocacy effort would have played out if the grantee wasn’t there, if a specific activity didn’t happen or if one element of funding were missing or allocated differently. Even when an evaluator has the luxury of interviewing all key stakeholders involved in a policy development, there’s rarely, if ever, clear agreement on why something turned out the way it did. That can be frustrating to funders who want to pinpoint their influence.

Limitations on Lobbying

Misperceptions regarding the federal guidelines on nonprofit lobbying, as well as a foundation’s desire to avoid any chance of regulatory trouble, create challenges when trying to assess a funder’s role in advocacy work. Federal regulations constrain foundations’ ability to lobby directly and to provide financial support for nonprofits to lobby on specific pieces of legislation (although the limitations are not as strict as many foundation staff and board members believe). Therefore, nonprofits ask foundations to support other aspects of their policy work, such as public education, research or general community organizing. Traditionally, foundations have wanted grantees to draw very clear lines between their activities related to lobbying for legislation and work supported by the foundation. However, because foundation-supported activities are usually part of an overall campaign for policy change, program officers find it hard to understand the value of the work they supported without understanding how it contributed to the larger policy change goals. In many cases, it makes more sense to evaluate the entire campaign or the work of an advocacy



organization overall rather than the distinct component funded by a foundation. However, many program officers expressed concern that if grantees submit reports that describe how their foundation-funded work fits together with lobbying efforts to achieve policy change, the foundation might get into legal trouble or alienate foundation leadership. Continuously educating the foundation and nonprofit communities about the relatively generous federal limits on lobbying would help mitigate this problem.

Grantee Engagement

Grantee participation, commitment, enthusiasm and capacity are essential to collecting quality evaluation data. Experienced advocates have many informal systems for gathering feedback about their tactics as they move along, usually based heavily on feedback from their peers, and sometimes based on quantitative measures they have developed internally. However, most policy and advocacy grantees have little experience or expertise in formal evaluation. They have limited staff or logistical capacity to collect data. The data an external evaluator might want them to collect may seem burdensome, inappropriate or difficult to obtain, even if they do collect some data in their day-to-day work. Moreover, for many nonprofits, past experiences with evaluation have often been less than satisfying, and these experiences have left a general concern about evaluation in the nonprofit field. Many feel that evaluations occur after a grant is over and come as an additional, often unfunded, burden that provides little direct benefit to the nonprofit. Often they never see a final report, and if they do, it doesn’t contain information that helps them improve their work, the information comes too late to put it to use, or they have no funding or capacity to act on it.

The Current State of Foundation Practice on Policy Change Evaluation

Foundations have a long history of evaluating the impact of policy changes, as exemplified by the many studies on the impact of welfare reform. However, numerous experts noted that even three years ago, few funders were trying to evaluate their efforts in creating, or supporting grantees to create, policy change. This lack of evaluation was due in part to the challenge of measuring policy advocacy, as discussed above, and, in many cases, to funders’ desire to keep their involvement in this more politically controversial arena low profile.

There is now an increased desire among a number of funders to engage in a more rigorous unpacking of all the goals and tactics that grantees have used and to examine what really was effective in achieving policy change. However, after conducting 20 interviews with evaluation experts and reviewing a broad sampling of relevant reports, we concur with the consensus from a recent Grantmakers in Health gathering of more than 50 funders of advocacy. They concluded that there is no particular methodology, set of metrics or tools to measure the efficacy of advocacy



grant making in widespread use. In fact, there is not yet a real “field” or “community of practice” in evaluation of policy advocacy.3 The field (such as it is) is composed of a number of individual practitioners and firms who are employing generic skills to design evaluations on a project-by-project basis. There appears to be little communication between practitioners, and there is no organized forum to share their ideas.

In trying to assess the state of policy change evaluation, we were particularly challenged by the lack of written documentation. Despite a substantial amount of practical literature on evaluation, very little of it directly addresses the specific challenge of measuring policy change and advocacy. Much of what is known about how best to approach this challenge exists only in the heads of a few practitioners and researchers. The written documentation lies largely in internal and often informal evaluation reports, which foundations do not commonly share.

It is noteworthy that three of the tools or manuals we identified as most valuable for foundations interested in evaluating policy advocacy were published in the last year. (These are the Making the Case tool from the Women’s Funding Network; Investing in Change: A Funder’s Guide to Supporting Advocacy from the Alliance for Justice; and the Practical Guide to Documenting Influence and Leverage in Making Connections Communities from the Annie E. Casey Foundation.)

While practitioners do not share a commonly practiced methodology, at least seven widely held principles for engaging in effective policy change evaluation did emerge from our review. The prospective approach outlined in this paper is built on these principles.


3 McKinsey and Company describes a community of practice as “a group of professionals, informally bound to one another through exposure to a common class of problems, common pursuit of solutions, and thereby themselves embodying a store of knowledge.” http://www.ichnet.org/glossary.htm.



Guiding Principles for Policy Change Evaluation

1. Expand the perception of policy work beyond state and federal legislative arenas.

Policy can be set through administrative and regulatory action by the executive branch and its agencies as well as by the judicial branch. Moreover, some of the most important policy is made at the local and regional levels. Significant policy opportunities also occur during the implementation stage and in the monitoring and enforcement of the law or regulation.

2. Build an evaluation framework around a theory about how a group’s activities are expected to lead to its long-term outcomes.

Developing such a theory, often called a theory of change, forces clarity of thinking between funders and grantees. It also provides a common language and consensus on outcomes and activities in a multi-organization initiative.

3. Focus monitoring and impact assessment for most grantees and initiatives on the steps that lay the groundwork and contribute to the policy change being sought.

Changing policy requires a range of activities, including constituency and coalition building, research, policymaker education, media advocacy and public information campaigns. Each activity contributes to the overall goal of advancing a particular policy. Outcomes should be developed that are related to each activity’s contribution and that indicate progress towards the policy goal.

4. Include outcomes that involve building grantee capacity to become more effective advocates.

These should be in addition to outcomes that indicate interim progress. These capacity improvements, such as relationship building, create lasting impacts that will improve the grantee’s effectiveness in future policy and advocacy projects, even when a grantee or initiative fails to change the target policy.

5. Focus on the foundation’s and grantee’s contribution, not attribution.

Given the multiple, interrelated factors that influence the policy process and the many players in the system, it is more productive to focus a foundation’s evaluation on developing an analysis of meaningful contribution to changes in the policy environment rather than trying to distinguish changes that can be directly attributed to a single foundation or organization.

6. Emphasize organizational learning as the overarching goal of evaluation for both the grantee and the foundation.

View monitoring and impact assessment as strategies to support learning rather than to judge a grantee. In an arena where achieving the ultimate goal may rarely happen within the grant time frame, and where public failures are more frequent, emphasizing learning should encourage greater grantee frankness. It should also promote evaluation strategies and benchmarks that generate information valuable to both the grantee and funder, increasing grantee buy-in and participation. Finally, the foundation will be able to document more frequent “wins” in learning than in achieving policy change.

7. Build grantee capacity to conduct self-evaluation.

Most advocacy organizations have minimal experience or skills in more formal evaluation methods. To date, most have relied primarily on informal feedback from their extensive network of peers to judge their effectiveness and refine their strategies. To increase their use of formal evaluation processes, grantees will need training or technical assistance, as well as additional staff time, to document what actually happened. This additional work should help the nonprofit become more reflective about its own work, as well as provide more useful information about change to funders.


Prospective evaluation can help a funder monitor progress of a grant and enable the grantee to use evaluation information to make improvements in the project.



Part II: A Prospective Approach to Evaluation

Introduction

Evaluations can be either backward-looking or forward-looking. A backward-looking, or retrospective, evaluation may collect data throughout the life of a project, but analysis and presentation of findings occur near the end or at the conclusion of a project, generally summarizing the actions, impact and lessons learned from the work. Most policy change evaluations conducted to date have taken a retrospective approach, whether of individual grants, initiatives or clusters of similar grants. (See Appendix C for examples of retrospective policy change evaluations.)

Retrospective evaluations can be very useful for understanding what has happened in policy change. Often, participants are better able to identify in hindsight which events made the most important contribution to change, and to recognize the influence of environmental factors that may not have been as obvious in the moment. However, retrospective evaluation has several drawbacks. First, findings come after a project is completed, so they have limited value in helping the grantee or program officer refine strategies along the way. Also, participants naturally want to put a positive spin on their work, which inhibits their inclination to remember changes in strategies or aspects that didn’t work out as planned.

In contrast, a prospective evaluation sets out goals for a project at the outset and measures how well the project is moving toward those goals throughout the project’s life. Unlike retrospective evaluation, prospective evaluation can help a funder monitor the progress of a grant and allow the grantee to use evaluation information to make improvements while the program is in progress. Like retrospective evaluation, prospective evaluation is useful for looking at efforts at the grantee, initiative or program level.


In brief, prospective evaluation involves four steps:

1. Agree upon a conceptual model for the policy process under consideration.

2. Articulate a theory about how and why the activities of a given grantee, initiative or foundation are expected to lead to the ultimate policy change goal (often called a “theory of change”).

3. Use the “theory of change” as a framework to define measurable benchmarks and indicators for assessing both progress towards desired policy change and building organizational capacity for advocacy in general.

4. Collect data on benchmarks to monitor progress and feed the data to grantees and foundation staff, who can use the information to refine their efforts.

Finally, at the end of the project, all of the progress should be reviewed to assess overall impact and lessons learned.

Benefits of a Prospective Approach

A prospective approach can be most effective in policy change evaluation for several reasons. It will help the foundation’s staff and its grantees:

• Define expectations from the start. Both grantees and program officers expressed a strong need for guidance in developing clearer expectations about outcomes for policy and advocacy projects from the beginning of a relationship. Prospective evaluation methods set up expected outcomes at the beginning of the project, allowing stakeholders to clarify expectations upfront.

• Monitor progress. Program officers stated they want more effective strategies for monitoring the progress of policy advocacy grantees. The prospective approach would help program officers tell whether a project is on or off track and enable them to describe to the foundation board what progress has been made, short of actual policy change, which may be years off. A prospective evaluation can also address the challenge of the long timeframe required for policy change: by showing how short-term outcomes are leading to long-term policy change, a funder can document some impact from its grantmaking long before actual policy change occurs.

• Provide feedback to refine projects. A key priority for grantees and for program officers overseeing multiple grants is gathering information to refine their projects and make mid-course corrections in their strategies. As one program officer states, “We wanted evaluation results as they happen, not just at the end. That’s what we need to refine our program.” A prospective evaluation is designed to provide insights as a project progresses, when they can be used to strengthen a program, rather than just at the end of a grant.

• Promote grantee engagement in the evaluation process. Evaluations that are able to provide timely findings are more useful to the grantee, and consequently grantees are more eager to help provide good data. Many grantees mentioned that retrospective evaluations, which often provide information after a program is completed, are rarely useful to a grantee and seem as though they are only for a funder’s benefit. Said one, “External evaluations tend to take a long time and for a growing organization, by the time the evaluation was done, we had moved on.”

• Support program planning and implementation. The conceptual models and benchmarks for prospective evaluation also help foundation staff and grantees clarify their thinking about how their activities work together to achieve impact. This can increase the program’s effectiveness before evaluation data even come in. Explained one grantee, “What I am interested in is evaluation for the sake of strategic development, to inform planning. This is much more valuable, but more difficult, than traditional evaluation that focuses on reporting what happened.” In fact, one foundation program officer said the foundation always viewed its efforts to develop evaluation frameworks as supporting both planning and evaluation.

• Increase transparency. By setting goals up front and making them transparent to all parties, prospective evaluation can be more thorough and is less likely to produce a self-serving evaluation. In retrospective evaluation, grantees and funders have a natural tendency to de-emphasize or overlook what went wrong and to put a good spin on past events. In fact, participants may not even remember the ways in which goals changed over time.

• Promote a learning culture. Prospective evaluation can demonstrate to grantees how systematic and well-defined feedback can help them improve their practice. Often, it is only through experience that grantees are able to see the usefulness of evaluation feedback. As grantees begin to see the impact of this knowledge, they may make shifts to instill evaluation practice into their organizational culture.

The next section outlines the four basic steps in conducting a prospective evaluation, and includes practical advice with each step, drawn from the interviews with evaluators, foundation staff and grantees.

Step 1: Develop a Conceptual Model for the Policy Process

Prospective evaluation begins with all parties (advocates, funders and outside evaluators, if used) developing a clear understanding of the environment in which advocates are going to work, the change that the grantees and the funder want to make in that environment, and how they intend to make that change happen. Explained one evaluation expert, “They need to describe the problem, they need to show data on the problem, and they need to show how they’re going to address the problem.... They need to identify the stakeholders they’re going to target and why, and the contextual features that may challenge them and how they’re going to address those.” This knowledge provides the structure for both planning and evaluation. Evaluation experts and several foundation staff members felt that too many policy projects and their evaluations are hampered by inadequate time devoted to understanding the environment in which people will work and nebulous thinking about the strategies required to make change happen. “When people undertake these initiatives, they start out with rather fuzzy ideas.... It may be the first issue is not the evaluation but clarifying upfront what people are proposing to do,” one expert notes. Said another, “Time and again, there’s a poor understanding of three things: the context you’re working in, how change happens and is sustained over time, and power dynamics. Absent that information, poor planning results.”

Stakeholders can begin to understand their policy environment through a process of research and information gathering. For a small project, this may involve something as simple as gauging community interest and reviewing previous advocacy efforts on the issue. For larger projects or initiatives, more detailed research is probably appropriate. Depending on the project and the players involved, the foundation may even want to commission research papers, literature reviews, focus groups or opinion polls. Furthermore, for those less experienced in the policy arena, research can help them comprehend the complexities involved in typical advocacy, policy change and social change processes. For more experienced advocates, who may already have a comprehensive understanding of their policy environment and the latest research, it can be very useful at this point to articulate and re-examine the assumptions they hold about the changes that are needed and the approaches to change that will work best in the current environment.

Understanding the Policy, Social Change and Advocacy Processes

For those foundation staff and grantees less experienced in the policy arena, understanding the environment involves first understanding the basic elements and evolution of advocacy, policy change and social change processes. Funders and grantees with less experience in policy and advocacy work often struggle with understanding the process and where they can play a part. Typically, their definition of policy and advocacy work is too narrow, focusing only on adoption or passage of new policies and overlooking the work building up to policy change and the implementation of the policy once passed or adopted. And as one foundation staff member stated, it is important to “get people not to focus just on legislative solutions, but to identify local and regional advocacy opportunities, identify potential partners and allies.” Or as another puts it, “We need to have a broad focus on the continuum of activities that go into the policy world.”

17PART I I : A PROSPECTIVE APPROACH TO EVALUATION



Bringing grantees that have traditionally engaged in service provision into policy and advocacy work poses additional challenges. One foundation staff member states, “There are opportunities for policy and advocacy in every grant. [Program officers working with service grantees] couldn’t always see that... and then they need help in getting their grantees to see that.” Learning the processes of advocacy, policy change and social change helps program officers and grantees understand the potential opportunities available to them to engage in policy and advocacy work. Furthermore, once grantees that have traditionally done more service-oriented work get into policy work, they often struggle with how to characterize their new work within a policy and advocacy framework. In the past, they have laid out their service work in a deliverables framework, specifically stating what will be accomplished by when. This system often does not work for policy activities.

Conceptual Frameworks

Based upon years of experience, researchers and practitioners have developed conceptual models for understanding the complex process involved in advocacy, policy change and social change. These models are simply ways of systematizing and organizing the typical processes and complex details of advocacy, policy change and social change, in the same way an organizational chart systematizes the internal relationships in a business or a city map lays out how the streets are aligned. These models are useful because they:

• Provide a common language among all parties and expand the concept of policy work beyond a narrow focus on legislative change.

• Assist program planners in identifying the numerous levers they can use to effect change. In particular, they help those with less understanding of the policy world (direct service grantees and some foundation program officers) grasp the complexity of the world they’re working in and understand how they might best contribute to policy change.

• Help planners understand the typical pathways and timeframes of change, in order to set realistic expectations.

• Determine the process and goals for evaluation, including helping brainstorm potential strategies, activities, outcomes and benchmarks.

• Facilitate comparison across grants and initiatives and sum up impact across grants at the cluster or program level. Frameworks can be useful tools for understanding and aligning different projects when evaluating at a cluster, program or foundation level.

The three conceptual models presented below each describe one particular aspect of the universe of social change and policy change. As seen in Figure 1, the outside square represents social change, which encompasses both policy change and advocacy but is also broader than either of those. The middle square represents policy change, of which the inner square, advocacy, is a component.



Because each of these models has a different focus, thinking through more than one of them can provide a broader understanding of the environment.

These three conceptual models are highlighted because they have the following characteristics:

• Each describes a different aspect of the social change/policy change process.

• They all provide a complex understanding of the world of policy change, getting beyond the simple “lobby for bill ‘A’” notion of policy work.

• They are simple enough to be understandable to a wide range of players.

Social Change Model

The broadest level is a social change schema, which depicts how individuals and groups create large-scale change in society.4 Policy change is viewed as just one piece of a larger strategy for change. Other components of social change include changes in the private sector; expansion of and investment in civil society and democracy (e.g., building infrastructure groups, involving and educating the public); and creating real material change in individual lives.

Policy Change Model

The policy change model focuses on the policy arena and presents the process through which ideas are taken up, weighed and decided upon in this arena.5 It outlines a policy environment that doesn’t operate in a linear fashion and that often involves much time preparing for a short window of opportunity for policy change. The basic process for policy change includes:

1. Setting the agenda for what issues are to be discussed;

2. Specifying alternatives from which a policy choice is to be made;

3. Making an authoritative choice among those specified alternatives, as in a legislative vote or executive decision; and

4. Implementing the decision.

But the linearity of these four steps belies the fact that at any point in time, there are three streams of processes occurring simultaneously and independently: recognition and definition of problems, creation of policy proposals and shifts in politics.

• Problems are just conditions in the world that attain prominence and are thus defined as problems. This recognition and definition occurs through the development and monitoring of indicators (e.g., rates of uninsured Californians); the occurrence of a focusing event (a disaster, crisis, personal experience or powerful symbol that draws attention to some conditions more than others); or feedback (e.g., complaints from constituents).

[Figure 1. Model of Framework Relationships: three nested squares, with Advocacy inside Policy Change, and Policy Change inside Social Change.]

4 Based on the IDR framework, outlined in Chapman, J. and Wameyo, B., Monitoring and Evaluating Advocacy: A Scoping Study, 2001.
5 Based on Kingdon, J., Agendas, Alternatives and Public Policies. New York: HarperCollins, 1995.
6 Kingdon, J., Agendas, Alternatives and Public Policies, p. 201.

• Creation of policy proposals happens both inside and outside government, and a proposal is likely to survive and endure in the policy stream if it seems logical, rational and feasible, and if it looks like it can attract political support.

• Shifts in politics occur because of things like political turnover, campaigns by interest groups, developments in the economic environment or changes in the mood of the population.

It is when these three streams come together, in what is termed a “policy window,” that policy change happens.

The separate streams of problems, policies and politics each have lives of their own. Problems are recognized and defined according to processes that are different from the ways policies are developed or political events unfold. Policy proposals are developed according to their own incentives and selection criteria, whether or not they are solutions to problems or responsive to political considerations. Political events flow along on their own schedule and according to their own rules, whether or not they are related to problems or proposals.

But there comes a time when the three streams are joined. A pressing problem demands attention, for instance, and a policy proposal is coupled to the problem as its solution. Or an event in the political stream, such as a change of administration, calls for different directions. At that point, proposals that fit with that political event, such as initiatives that fit with a new administration’s philosophy, come to the fore and are coupled with the ripe political climate. Similarly, problems that fit are highlighted, and others are neglected.6

Thus, the policy process is at its core an irrational and nonlinear process, where advocates’ work is often building their own capacity to move quickly when a policy window opens. This way of thinking about the policy process fits well with how several program officers described how their foundation and its grantees engage in policy work. Said one, “While we are trying to make things happen in the policy arena, right now most often we are trying to respond to those opportunities that arise. Our challenge: how do we get grantee organizations prepared so that they can engage when an opportunity arises? This is different than going straight to advocacy or legal organizations and giving them concrete objectives, like changing a certain policy or enforcing a regulation.”

One specific challenge highlighted by this model is that advocates must identify and enter not just any window, but the right window. They are challenged to stay true to their own goals and act on what the research identifies as the need, rather than being reactive and opportunistic in a way that pulls the work away from the organization’s mission.



Advocacy Model

The advocacy model differs from the other two models in that it describes a tactic for social or policy change, rather than the change itself. Furthermore, the model detailed below is one developed for public health advocacy.7 General models, or models covering other issue areas, may frame the advocacy process differently. The public health advocacy model outlines a process that starts with information gathering and dissemination, then moves to strategy development and finally into action. Each step in the process builds on the previous ones to ensure that the campaign develops well. “For example, motor vehicle injury prevention relies on public health surveillance of fatalities and research on injuries and vehicles (at the information stage). That information is used to identify new or ongoing obstacles to continued declines in motor vehicle injury rates and also means to overcome these obstacles (at the strategy stage). Legislative lobbyists, staffers, and others then attempt (at the action stage) to alter policy by, for example, lowering illegal blood alcohol concentration levels and changing brake, air bag, and car seat performance standards.”8

While the steps in the advocacy process are laid out linearly, in practice they often all occur simultaneously. Another important distinction in the framework is that it notes that different skills are required at different steps in the process. As it is rare for one organization to be adept at all these different skills, advocacy generally necessitates collaboration among stakeholders, each playing different key roles.

Step 2: Develop a Theory of Change

After researching and understanding the environment grantees are working in and potential ways to effect change, the next step is to articulate how a grantee or initiative leader expects change to occur and how they expect their activities to contribute to that change. Nearly half of the experts interviewed stressed that developing a theory of change is essential in evaluating policy work.

The “theory of change” concept was originally developed to guide the process of nonprofit program development. Later, theory of change was also found to be very valuable for program monitoring and evaluation. Originally a very specific process for nonprofits, the idea of a theory of change has mutated and expanded over time. Now its usage is so widespread and varied that the words themselves have a multitude of meanings. For some, theory of change is a rigid, step-by-step process that requires hours of time to develop. For others, it is simply a one-sentence description of a program’s activities and goals. No matter how it is defined, at its heart, a theory of change lays out what specific changes the group wants to see in the world, and how and why a group9 expects its actions to lead to those changes. The important elements are defining very clearly the end goals of the program and hypothesizing the strategies the group is using to move toward achieving those goals. Evaluation experts urge those interested in the theory of change idea not to get too caught up in the terminology and to focus more on the basic concepts underneath.

7 Based on Christoffel, K., “Public Health Advocacy: Process and Product.” American Journal of Public Health.
8 Christoffel, p. 724.
9 The group may be a single organization, a group of individuals, or a collaboration of organizations.
10 Snowden, A., “Evaluating Philanthropic Support of Public Policy Advocacy: A Resource for Funders,” p. 11.

A theory of change, no matter what it is officially called, is central to prospective policy change evaluation. In any prospective, forward-looking evaluation, a program’s theory guides the evaluation plan. Reciprocally, the evaluation provides feedback to a program as it evolves; it becomes a key resource in program refinement. One foundation staff member notes this relationship in saying, “If we don’t know where we want to go and how we are planning to get there, it is hard to evaluate our work.”

Developing a theory of change forces clarity of thought about the path from a program’s activities to its outcomes. As one evaluation expert states, “A theory of change makes the evaluation effort more transparent and efficient.... It breaks up the evaluation into both process implementation and outcomes. It also forces you to be specific about what kind of change you want in what period of time.” The focus that a theory of change puts on outcomes, or long-term change, is crucial. A program officer from another foundation notes that, “Without a theory of change, use of indicators will lead to activity-driven monitoring.”10 Finally, it helps all players develop a common understanding about where they are trying to go and how they plan to get there, whether for a single grantee and program officer or for members of a multi-organization coalition or strategic initiative.

Practical Considerations in Developing a Theory of Change

Evaluation experts suggest that in successfully developing a theory of change for policy change evaluation, the following issues should be considered:

1. Get Out of Evaluation-Speak

Often the language of evaluation is daunting and mystifying to those who are not formal evaluators. Furthermore, many nonprofits have a fear of or distaste for formal evaluation processes and are reluctant to engage, even if they would welcome receiving useful information about the efficacy of their work. They often say the process distracts them from their work and provides them little that is useful.

Therefore, making a theory of change process accessible and relevant to grantees is essential. The Women’s Funding Network has structured its grantee reports around questions that get at key concepts and walk grantees through outlining their theory of change in their grant reports without using technical terms. Key questions in the Women’s Funding Network’s Making the Case tool include:




• What is the situation you want to change?

• What is the change you want to produce?

• What strategies are you using to make that happen?

• What helps you accelerate your efforts?

• What gets in your way or inhibits your progress?

These questions are part of an interactive online tool that shows a graphic of a small town as a metaphor, which may be particularly appealing to less sophisticated grantees. The situation needing change is represented as a collection of arrows impinging on the town. The change the grantee is looking to make is symbolized as a shield protecting the town. Individual strategies are represented as bolstering the shield. This format avoids using evaluation language, and its graphical layout is visually appealing and understandable to those not familiar with evaluation. The lack of rigidity in the format also allows the model to account for the complexity and non-linearity of advocacy work.

2. Provide Participants with Good Research

As noted above, research is crucial at the start of any policy change evaluation. In its work on theory of change, the Aspen Institute Roundtable states, “The process will go more smoothly and produce better results if stakeholders and facilitators have access to information that allows them to draw on the existing body of research and current literature from a range of domains and disciplines, and to think more systematically about what it will take to promote and sustain the changes they want to bring about.” However, the amount and type of research that a strategic initiative should engage in is different from what an individual grantee might do to prepare for creating a theory of change. An initiative might spend many hours gathering data on its subject area, going over existing research, and possibly commissioning additional research. On the other hand, a grantee may just need to gather community input and review previous actions, being careful to assess those actions with a fresh eye and a clear understanding of both the latest research and the current environment.

3. Provide Enough Flexibility to Adapt the Model Over Time

The continual shift in environment and strategies is a hallmark of advocacy work. A rigorously detailed theory of change for policy projects will have to be modified frequently, requiring an unsustainable amount of time and energy. Evaluation experts recommend developing a more conceptual, and therefore more flexible, theory of change for policy projects. The theory of change created in the Kellogg Foundation’s Devolution Initiative was set up with this challenge in mind.11

Evaluators designed the basic theory of change to be broad and conceptual, allowing the specifics of the program to change over time while not impeding the evaluation in any significant way. For example, new organizations joined the initiative, and new objectives were constructed without major revision of the theory.

11 Coffman, J., “Simplifying Complex Initiative Evaluation.” The Evaluation Exchange, 1999.

4. Move Beyond the Basic Manuals for Policy Change Evaluation

Existing evaluation manuals provide grounding in the basics of developing a theory of change.12 However, foundations with experience helping policy, advocacy and social change groups develop a theory of change found that the traditional framework advocated by these manuals was too rigid and linear to capture the fluidity of policy and advocacy work.

The Liberty Hill Foundation convened a group of grantees to develop a framework for evaluating policy work. The group began its effort using the formal logic model process developed for service-oriented nonprofits by the United Way. They soon found that the model was not appropriate for nonprofits engaged in advocacy or other broad social change work and ended up adapting it over 10 sessions.

The Annie E. Casey Foundation has developed a guidebook for the evaluation of its grants that uses a theory of change. In developing a theory of change, one exercise the guidebook recommends is creating a simple “so that” chain. The goal of this exercise is to link program strategy with long-term outcomes. The chain begins with a program’s strategy. The first step is to answer, “We’re engaging in this strategy so that....” The chain then moves from short-term outcomes to longer-term outcomes. For example, a grantee might:

Increase media coverage on the lack of health insurance for children
  so that public awareness increases
  so that policymakers increase their knowledge and interest
  so that policies change
  so that more children have health insurance.

The exercise works well when participants have a good vision of what they desire their long-term outcomes to be and what their current strategies are. Participants can then work either forward from their strategies toward their long-term outcomes or backward from outcomes to strategies. Approaching from either direction will allow the participants to clarify the logic that links their work to their desired outcomes.

5. Choose a Process Appropriate for the LevelAs stated above, the amount and level ofresearch needed to develop a theory ofchange differs based on whether the theoryis for an individual grant, an initiative or an entire foundation program. Thisdifferentiation holds true throughout thetheory of change development process.The efforts that an initiative undertakesare different from what an individualgrantee should undertake. As seen in theDevolution Initiative example mentionedearlier, an initiative may need a moreconceptual model that can be adapted todifferent groups and over time. As well,

24 THE CHALLENGE OF ASSESSING POLICY AND ADVOCACY ACTIVITIES

12 There are many good resources and guidebooks on developing a theory of change. Both the Aspen Institute and the Kellogg Foundation have done extensive research on theory of change and provide useful tools on their Web sites (http://www.theoryofchange.org or http://www.wkkf.org). As a starting place to understand the basic concepts for developing a theory of change, these resources are very useful.


the logical steps in the theory may need to be backed up by more solid evidence. Theories developed for individual grantees can be both more specific to their work (less conceptual) and also less detailed in evidentiary support.

Creating a theory of change for an individual grant poses unique challenges, especially as it relates to a funder's overall goal. In creating a theory of change for an individual grant, the grantee needs to understand the funder's goals and strategies in order to see how its work fits into that plan and the larger environment: while an individual one-year project may be small, there is value in its contribution to a larger effort. As one evaluation expert states, "Foundations need to express their own theory of change and chain of outcomes and express to grantees where they fit into that chain. A foundation's work is a painting, and the grantees are the brush strokes. Each brush stroke needs to be applied at a particular time in a particular way. If you have 1,000 people applying brush strokes, each has to know what they're a part of." This also allows foundations to make clear that grantees need not position their projects as something different, or depart from their original mission, just to fit the foundation's goals.

6. Provide Grantees and Program Officers with Support

Many program officers and grantees need advice or technical assistance to develop a good theory of change and associated benchmarks, since most have not been trained in program evaluation. For example, with an individual project, a funder might review a proposed theory of change and set of benchmarks or join in on some conference calls. For a multi-organization initiative, the group might engage a facilitator to help them through the process. One evaluation expert who has conducted several policy change evaluations suggests, "The challenge in theory of change is not having a long, drawn-out process. We always conduct a facilitated process. If you want grantees to create a theory of change on their own, that seems very challenging. I don't know how that would work." The Liberty Hill Foundation has provided this support to a cohort of grantees through a peer learning project that included group training sessions and some one-on-one consultations with an evaluation expert.

Step 3: Define Benchmarks and Indicators

Generally, the policy change goals in a theory of change are long-term and can take many years. Therefore, developing relevant benchmarks to track progress along the way is vital to an effective and useful policy change evaluation. A benchmark is a standard used to measure the progress of a project. Defining benchmarks at the beginning of an advocacy effort helps both funders and grantees agree upon ways to assess

25 PART II: A PROSPECTIVE APPROACH TO EVALUATION


the level of progress towards achieving the ultimate policy goal. They serve like milestones on a road trip to tell travelers when they are getting closer to their destination, and when they have veered off course. Indicators operationalize benchmarks, in that they define the data used to measure the benchmark in the real world.

Process Versus Outcomes Indicators

Experts and the research literature distinguish among several types of indicators. One key distinction is between process and outcomes indicators. Process indicators measure an organization's activities or efforts to make change happen. Examples include the number of meetings held or educational materials developed and distributed. Outcomes indicators measure change that occurred, ideally due in part to an organization's efforts. Examples include quantitative or qualitative measures of strengthened partnerships that resulted from holding a meeting, and measures of changed attitudes after reading educational material. Generally, process indicators lie largely within an organization's control, whereas outcomes indicators are more difficult to attribute to a particular organization's work.

To date, many funders and grantees say they have emphasized process indicators over outcomes indicators to monitor policy and advocacy grant activities. Staff without significant policy experience, in particular, said relevant process indicators are easier to identify. Moreover, they are easier to quantify and collect data on because they relate to activities, which are under the grantee's control. Many of the grantees interviewed as part of the stakeholder assessment also said they found process indicators most useful because they provide concrete feedback relevant to improving their organization's operations. However, a few grantees with significant expertise and experience in advocacy work described process indicators as "widget counting."

While process indicators are a useful tool in grant monitoring, they do not demonstrate that an organization's work has made any impact on the policy environment or advanced the organization's cause. Evaluation experts, as well as funders and grantees, emphasized the need to develop outcomes indicators that would demonstrate the impact of an organization's work. As one program officer leading a policy-oriented initiative put it, "We need to teach program officers how to monitor for change...not just monitor activities." Table 1 highlights the difference between a process indicator and a related outcome indicator that might logically follow from the process. Foundations' grant practices can inadvertently steer the focus toward process indicators, because many foundations require evaluation over a time period too short to measure real policy change (outcomes). Recognizing and acknowledging this issue would make the shift to outcomes indicators easier for everyone.


However, it is often challenging to conceptualize relevant outcomes that could show change for policy and advocacy projects, especially to show interim progress short of the actual target policy change or ultimate goal of the advocacy. Grantees and staff need an expanded way to conceptualize the policy and advocacy process: to understand how to talk about the steps that build up to and contribute to policy change. Traditionally, evaluations for direct services have broken down outcomes in a linear fashion. Because policy and advocacy work is rarely linear, these categories aren't always useful in identifying outcomes and associated benchmarks.

In its book, Investing in Change: A Funder's Guide to Supporting Advocacy, the Alliance for Justice makes a useful distinction between "progress" and "outcomes" benchmarks.13

Outcomes benchmarks demonstrate success related to the ultimate goal and usually take many years to achieve. Progress benchmarks are concrete steps towards the advocacy goal. The Alliance framework also identifies two types of


13 Alliance for Justice. Investing in Change: A Funder’s Guide to Supporting Advocacy, p. 37.

Table 1. Process and Outcomes Indicators

Process Indicators (what we did) → Outcomes Indicators (what change occurred)

• Number of meetings organized → Increase in proportion of community members exposed to the particular policy issue

• Number of flyers mailed → Increased awareness of issue, as measured in public opinion polls

• Number of people on mailing list → Increase in the number of people using organization's Web site to send emails to elected officials

• Number of officials contacted → Increase in number of elected officials agreeing to co-sponsor a bill

• Number of press releases sent → Number of times organization is quoted in the newspaper, or organization's definition of problem incorporated into announcement of a hearing

• Prepare amicus brief for a court case → Material from amicus brief incorporated into judge's rulings

• Testify at a hearing → Organization's statistics used in formal meeting summary


progress benchmarks: activities accomplished and incremental results obtained. Activities accomplished (such as the submission of a letter to a legislator or to a newspaper) is analogous to process benchmarks. Incremental results obtained refers to interim accomplishments required to get a policy change, such as holding a public hearing, preparing draft legislation or getting people to introduce a bill.

Capacity-Building Benchmarks

Evaluation experts and funders with significant experience in policy and advocacy work all emphasized the importance of identifying both capacity-building and policy change benchmarks. Capacity benchmarks measure the extent to which an organization has strengthened its ability to engage in policy and advocacy work. Examples include developing relationships with elected officials and regulators, increasing the number of active participants in its action alerts network, cultivating partnerships with other advocacy groups, building databanks and increasing policy analysis skills. Capacity-building goals can have both process (activity) indicators and outcomes indicators.

These capacity outcomes are important markers of long-term progress for both the funder and its grantees. They indicate growth in an asset that can be applied to other issues and future campaigns. Said one program officer, "We need to take the focus off policy change and encourage more people to think about how they can be players." Said another with a long history in policy work, "Funders and advocacy organizations should work on building bases so that when the policy environment changes due to budget, political will or administration changes, there will be people in place who can act on those opportunities. Our challenge is to get grantee organizations prepared so that they can engage when an opportunity arises."

Moreover, it is more appropriate to hold grantees accountable for capacity-building goals because they have greater control over them. As described in the earlier section on the challenges in evaluating policy work, a grantee can implement a campaign flawlessly and still fail to achieve its goal due to environmental changes beyond its control (such as key contacts being voted out of office). However, a failed policy outcome can still represent a capacity-building achievement. For example, sometimes advocates are pleased to get a bill introduced, even when it fails, because the debate over the bill increases public awareness on the issue.

Frameworks for Benchmark Development

Benchmarks should grow out of, and be selected to represent, key milestones in an organization's or initiative's theory of change.


However, many grantees and staff also said it would be very helpful to have a list of potential indicators or benchmarks to consult for ideas so they needn't start with a blank slate. A number of groups have developed frameworks for developing benchmarks relevant to policy and advocacy work. These benchmark frameworks can provide examples of activities, strategies and types of outcomes associated with the policy process. In our literature review, we identified six useful frameworks for developing benchmarks, ranging from a simple model developed by the Liberty Hill Foundation to a framework and compendium of associated examples from the Alliance for Justice. Appendix D compares key aspects of these frameworks.

Using an existing framework to guide benchmark development has several key advantages. First, it allows funders and grantees to build on the experiences of others and, hopefully, reduce the time and effort required. Second, reviewing several frameworks highlights different aspects of policy work, which can both serve as a reminder of key strategies to consider and expand people's notion of what constitutes policy work. Finally, employing one of these frameworks will make it easier for foundation staff to compare progress and sum up impact and lessons learned across grants, because they will be describing their outcomes using common categories.

Each of the six frameworks listed below highlights somewhat different aspects of policy work, and may be used in different contexts from the others.14 In some cases, policy change is a primary focus, while in others, it is one of several social change categories. Some are most relevant to policy advocacy around specific issues, while others focus on community-level change.

• Collaborations that Count (primary focus on policy change, particularly community-level change)

• Alliance for Justice (primary focus on policy change, most relevant to specific issue campaigns)

• Annie E. Casey Foundation (applicable to a range of social change strategies, particularly community-level change)

• Women's Funding Network (applicable to a range of social change strategies, most relevant to specific issue campaigns)

• Liberty Hill Foundation (applicable to a range of social change strategies and a broad variety of projects)

• Action Aid (applicable to a range of social change strategies, particularly community-level change, and a broad variety of projects)

Each framework has distinct strengths and weaknesses. For example, the Alliance for Justice framework includes the most extensive set of sample benchmarks. However, it seems to be more a collection of typical advocacy activities than a coherent theory about how activities lead to change, so it does not suggest sequence or relationship among the different outcomes. In contrast, the Women's Funding Network (WFN) framework grows out of a theory about


14 Snowden (2004) has an appendix listing many useful examples of potential policy indicators. However, it is not organized around a well-conceived framework, so it was not included in the comparison of frameworks.


key strategies that make social change happen. It was developed through extensive research and experience working with social change organizations around the country. Moreover, it is the basis for an online reporting tool that walks grantees through developing a theory of change and then selecting benchmarks. Using the tool across a set of grantees would help program officers identify trends and sum up impact across grantees. However, the WFN tool itself has far fewer examples of ways to measure those outcomes than the Alliance for Justice or Annie E. Casey frameworks.

Four of the benchmark frameworks specifically call out advocacy capacity benchmarks (Alliance for Justice, Action Aid, Collaborations that Count and Liberty Hill). Capacity benchmarks could easily be added to benchmarks drawn from other frameworks.

Evaluation experts said that developing benchmarks involves a process both of adapting benchmarks from standardized frameworks and of creating some very project-specific indicators. The process begins by identifying one or more of the benchmark frameworks that seem the best fit with the project's conceptual model for change. For example, consider a program to get local schools to ban junk food vending machines. Using the WFN framework, one might identify the following benchmarks (see Table 2).

Developing benchmarks that fit into these standardized categories will make it easier to compare progress among groups. For example, if a foundation funded 10 community coalitions working to ban vending machines in schools, and all of them had benchmarks in these five categories from WFN, a program officer could more easily review the grantee reports and synthesize the collective progress of the group along these five strategies for social change.

After identifying outcomes benchmarks, grantors and grantees together could add capacity benchmarks. They might include:


An Alternate Approach to Creating Benchmarks

An alternative approach to using any of these frameworks is to grow a database of benchmarks by using an online reporting tool to review benchmarks selected by previous grantees. (See Impact Manager from B2P Software, http://www.b2p.com/.) Grantees using these databases establish benchmarks for their grant and monitor themselves against these benchmarks. In the software program Impact Manager, when new grantees begin the process of establishing benchmarks, they are offered a list of all the previous benchmarks established by other grantees in an initiative or program (or they can create their own). By aggregating this information across many grantees and many foundations, the benchmark options increase dramatically.
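The aggregation idea behind such a tool can be sketched in a few lines. This is purely an illustrative sketch of the concept, not B2P's actual implementation; the class and method names are invented.

```python
# Illustrative sketch of a shared benchmark database: each grantee's chosen
# benchmarks are recorded, and new grantees are offered the benchmarks most
# commonly selected before them. Invented example; not the Impact Manager API.

from collections import Counter

class BenchmarkDatabase:
    def __init__(self):
        self.counts = Counter()  # benchmark text -> times selected

    def record(self, benchmarks):
        """Store the benchmarks one grantee selected."""
        self.counts.update(benchmarks)

    def suggestions(self, n=5):
        """Offer a new grantee the n most commonly used prior benchmarks."""
        return [text for text, _ in self.counts.most_common(n)]

db = BenchmarkDatabase()
db.record(["Hold a public hearing", "Draft legislation introduced"])
db.record(["Hold a public hearing", "Increase media coverage of the issue"])
print(db.suggestions(2))  # "Hold a public hearing" ranks first (selected twice)
```

The more grantees and foundations record their selections, the richer the list offered to each new grantee, which is the effect the sidebar describes.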


• Develop a contact at each school PTA in the community that is concerned about the junk food issue;

• Develop a relationship with the school cafeteria workers' union; or

• Acquire and learn how to use listserv software to manage an online action alert network on this issue.

Because of the differences among the available benchmark frameworks and the wide range of activities that fall under policy and advocacy work, we hesitate to recommend whether a foundation should steer all grantees towards a single existing framework, use multiple frameworks or try to build one framework for all based on these models. The diversity among foundation grantees means that different models will speak to different grantees.

Practical Considerations in Selecting Benchmarks

When program officers, grantees and evaluation experts provided practical advice on selecting benchmarks, their comments revolved around five themes:

1. Balance Realistic and Aspirational Goals

Selecting appropriate benchmarks challenges grantors and grantees to identify benchmarks that are both achievable


Table 2. Example of Benchmark Development from Frameworks

Benchmark Category → Project-specific Interim Benchmarks of Progress

• Changing Definitions/Reframing → Change official purpose of vending machines to include providing nutritious food for students

• Community or Individual Behavior → Get 100 students to submit requests for healthy snack choices into the school suggestion box

• Shifts in Critical Mass → Get four of seven school board members to make a motion to hold a hearing on the issue of vending machines in schools

• Institutional Policy → School board passes a resolution banning sodas from being sold in school vending machines

• Holding the Line → Stopping vending machine lobby from introducing resolution to allow vending machines in junior high and high schools


during the grant period and also meaningful enough to show some level of real progress or change. As an example, one expert cited a two-year grant to a group trying to eliminate the death penalty in Texas. "Let's be realistic. The possibility of that happening is zero to negative 100. If you use a chain of outcomes, there are a lot of things that need to happen to get to eliminating the death penalty. So an evaluation needs to look at those intermediate outcomes."

Program officers and a number of grantees felt that grantees themselves often aren't very realistic in setting benchmarks for policy projects. "Almost never will a proposal come in with outcomes or objectives we can use," said one program officer. Grantees said they often felt they needed to promise ambitious goals, beyond what they knew was attainable, in order to win the grant. So long as funding for policy advocacy work remains competitive, nonprofits will inevitably feel it necessary to promise something ambitious and "sexy."

Program officers also felt that most grantees had difficulty articulating those intermediate outcomes for policy projects. This is one reason that sample benchmarks can be useful in generating ideas. Another grantee recommended that grantees and funders make clear distinctions between "setting goals that you can fulfill versus those that are aspirational." Aspirational goals motivate people to stretch and remind them of the big picture, but using them to measure progress can set grantees up for failure.

A few grantees and program officers mentioned concerns about a natural incentive to set benchmarks low, especially on process indicators, in order to ensure they can meet and even exceed them. The incentive particularly comes into play when grantees feel that funding renewal decisions hinge on progress towards benchmarks. Said one experienced grantee, "When I write grants, I am likely to go for things I know I can accomplish because I don't want to come back later and say that I have failed."

2. Consider Tolerance for Risk

Funding policy change and advocacy is inherently a high-risk grant-making strategy in two ways: a) the risk of failure, at least in the short term, because policy change is so hard to achieve; and b) the risk of being viewed as having a political agenda and drawing fire from politicians and other influential actors who disagree with those goals. As part of establishing benchmarks of progress or indicators of ultimate "success," an internal dialogue about the foundation's appetite for risk would be useful to achieve clarity on its expectations. Failing to achieve a policy change does not necessarily mean a wasted investment if knowledge about how and why the program failed is effectively captured, valued and, ideally, shared with others who can benefit.

Experts noted that if a foundation is inclined to take chances and places value on learning from mistakes, it has a different attitude toward crafting benchmarks of progress than does a foundation more hesitant to be publicly


associated with failed endeavors. More risk-averse foundations will tend to set more manageable and potentially more process-oriented benchmarks, so there is less possibility that grantees fail to achieve designated goals. Embracing risk will both increase learning and provide challenges that could lead to more significant policy change.

3. Have Grantees and Funders Develop Benchmarks Together

Evaluation experts, grantees and foundation staff stressed the value of working together to develop benchmarks, even though it can take longer. The process ensures that grantees and program officers are aligned on their expectations of progress. Grantees also are more invested in reporting on benchmarks they have helped develop and feel serve their needs. On a small project, this process might involve several focused telephone conversations or a half-day planning meeting between a grantee and program officer. For larger initiatives, it may require several meetings with all stakeholders. In such groups, program officers may benefit from assistance from evaluation experts to make the iterative process more useful.

4. Choose an Appropriate Number of Benchmarks

There is no consensus on the ideal number of benchmarks for a project. Many evaluation experts emphasized simplicity and selectivity, opting for four or five benchmarks. The appropriate number is somewhat dependent on the grant. For a single organization with a single policy campaign, one or two process indicators, two capacity benchmarks and two-to-three interim outcomes benchmarks may be sufficient. However, a strategic initiative with multiple grantees will probably require more benchmarks in order for the program officer to assess progress. For example, one expert conducting an evaluation of a multi-partner initiative employed 35 benchmarks. In such situations, this expert felt it was important for all stakeholders to see some benchmarks that reflected their interests. However, these served as benchmarks for the entire initiative. Most grantees were asked to track only the subset of these 35 most relevant to their work. This allowed individual grantees to demonstrate how their work fit into the initiative's theory of change and how it contributed to the progress of the initiative without holding them accountable for changes outside their realm of influence.

5. Be Flexible

Finally, all stakeholders stressed the importance of viewing benchmarks for policy projects with flexibility, on the part of both the funder and the grantee. For multiyear policy grants, prioritized activities and strategies will need to evolve as the policy environment changes. Even in a one-year grant, an unexpected change in the policy environment can quickly render what was once seen as an easily achievable interim outcome impossible. Or it may require a change in strategy that can make even process indicators irrelevant. One grantee provided the following example: "We found out that


the administration doesn’t pay anyattention at all to a particular guide. We had submitted an extremely specificobjective for our grant that said that wewould work with these agencies to getthem to revise the guide and disseminatethose revisions. Now we feel we have to follow through and complete it eventhough we know it won’t be useful.” For multiyear initiatives, it may benecessary to revisit benchmarks each year and revise as needed.

Step 4: Collect Data on Benchmarks

Once benchmarks are created, the next step is to develop methods to measure them on an ongoing basis. Data collection is often a challenging and tedious process for grantees. It requires time, energy, money and organization, and it often feels like a distraction from grantees' "real" work. As one grantee states, "The tracking is really tough and it takes a lot of paperwork. That is not the strength of grassroots organizations."

Grantees and evaluation experts recommended several strategies to assist foundations in helping grantees collect data:

1. Help Build Learning Organizations

The most crucial aspect of getting grantees to collect evaluation information is to make sure they value learning in their organizations, several experts affirm. If learning is valued, the organization will want to ask questions, gather information, and think critically about how to improve its work. Foundations can support grantees in becoming learning organizations by providing support for grantees to build their internal evaluation capacity. Yet, as one grantee states, "Most foundations are not willing to do what it takes to add evaluation capacity. If they really want to build evaluation capacity, they must do longer-term grants and invest in ongoing monitoring." Grantees need the time, skills and money to collect and analyze data, and then think about the implications for their work. They also need help up front investing in good tools. One grantee notes, "It is most useful to invest in good data collection tools, especially databases, but it's hard to get the funder to support it."

2. Keep It Simple

The complexity of the process for measuring benchmarks should be commensurate with the complexity of the project. If the framework or data collection strategy gets too complicated, it won't get used. The Liberty Hill Foundation spent nine months working with grantees to design a tool to help them evaluate their policy and social change work. The length of time grantees were involved in the process caused them to lose interest, and their participation dwindled. Consequently, the foundation shifted its resources, leaving the grantees to implement the process with limited support. In the end, only one of the grantees chose to implement the process in its entirety. A few others chose to implement particular elements that they found especially useful to their organizations.


3. Build on Grantees' Existing Data Collection

Pushing an organization to collect more data is challenging if the traditions and culture of the organization work against systematic data collection, as is true in most advocacy organizations. Evaluation experts recommend trying to identify the types of data organizations currently collect and building indicators from there. Another approach is to add data collection in grantee reports, a strategy employed in the Packard Foundation's ongoing advocacy effort to institute universal preschool in California. The evaluator worked with the foundation to modify its grant reports to ask two questions specifically geared to the initiative's evaluation. These questions detailed all the outcomes that the initiative had laid out in its theory of change and connected each with one or more measurable indicators. Grantees were then asked to report on any of the indicators they currently measured.

The Annie E. Casey Foundation also provides enough flexibility to let grantees employ existing data when possible. The foundation has recently implemented a process that designates key benchmarks for a group of initiative grantees, and for each benchmark it identifies several measurement methods. Each outcome benchmark includes two or three options for data collection, falling into categories such as surveys, observations, interviews, focus groups, logs, content analysis or document review. This approach allows the grantee to choose options that it may currently be engaging in or could implement easily, while also learning about other techniques it might choose to employ later on. The Casey Foundation does not yet have enough of a track record with this approach to assess how this data variation will affect the quality of the evaluation, especially efforts to sum up across projects.

4. Draw on Examples Used in Other Studies

In many cases, grantees need to develop measurement strategies and data collection tools from scratch. A useful resource for developing measurement strategies is the Annie E. Casey Foundation's "Practical Guide to Documenting Influence and Leverage in Making Connections Communities." The Casey guide includes some possible benchmark indicators, strategies on how to gather data to measure those indicators, and even sample data collection tools, although many may seem very specific to that grant program.

Overall, there will likely be differences in measurement needs for individual grantees versus initiatives. While individual grantee data collection might be simple and work within grantees’ existing systems, more rigor and data are appropriate at the initiative level. This may mean supplementing data collected by grantees with data collected independently by outside evaluators.

PART II: A PROSPECTIVE APPROACH TO EVALUATION




Conclusion

In order for foundations to better monitor and measure the impact of their grants and other activities directed towards policy change, this paper recommends an evaluation strategy based on our synthesis of the current thinking of grantees and funders, as well as experts in evaluating this type of work. Drawing from these insights, we outline a promising approach to prospective evaluation of policy change activities. This approach breaks down into four steps:

• Agree upon a conceptual model for the policy process under consideration.

• Articulate a theory about how and why the activities of a given grantee, initiative or foundation are expected to lead to the ultimate policy change goal (often called a “theory of change”).

• Use the “theory of change” as a framework to define measurable benchmarks for assessing both progress towards desired policy change and building organizational capacity for advocacy in general.

• Collect data on benchmarks to monitor progress and feed the data to grantees and foundation staff who can use the information to refine their efforts.

Also drawing from grantees’ and funders’ insights, seven principles for policy evaluation emerged:

1. Expand the perception of the outcomes of policy work beyond state and federal legislative arenas.

2. Build evaluation frameworks around a theory about how a group’s activities are expected to lead to its long-term outcomes.

3. Focus monitoring and impact assessment for most grantees and initiatives on the steps that lay the groundwork for and contribute to the policy change.

4. Include outcomes that involve building grantees’ capacity to become more effective advocates.



5. Focus on the foundation’s and grantee’s contribution, not attribution.

6. Emphasize organizational learning as the overarching goal of evaluation for both the grantee and the foundation.

7. Build grantee capacity to conduct self-evaluation.

It is important to remember that policy change evaluation is an emerging field of practice and there are still many lessons to be learned. Many elements of the approach recommended in this paper are only beginning to be implemented in the field. Surely there will be new insights to be gained as evaluators and nonprofits have more experience conducting evaluations of this nature.

Furthermore, this is the first report to attempt to think comprehensively about the steps needed in an approach to prospective policy change evaluation. Although it is built on a strong foundation—the stated needs of foundation stakeholders, advice from evaluation experts and foundation colleagues, and written documentation of tools, manuals and previous evaluations—the report will be strengthened and refined through further review and discussion by policy and advocacy organizations, foundations and expert evaluators.

Increased attention to evaluation may raise anxiety among some grantees. Nonprofits will be looking to funders to provide evidence that they are interested in evaluation for learning as well as for monitoring for accountability and making decisions about renewing funding. Foundations have to be open to what is being learned. That means welcoming bad news—stories of strategies that failed—as well as success stories. It’s often the negative stories that hold the most opportunity for learning. This approach also requires building grantee evaluation capacity—to articulate how their activities will lead to policy goals and to collect data to show their progress. Without additional skills and funds to support these activities, grantees will not be able to deliver the quality of data foundations desire. It’s important to note, however, that this increased grantee capacity will benefit both parties. Grantees will become more reflective in their work and improve their programs. Funders will receive more useful information from grantees that they can use to document the impact of their work to shape policy.

Policy change is a long-term endeavor, so emphasizing a long-term perspective to program staff, grantees and foundation board members is key. Despite the current enthusiasm for business approaches that feature short-term metrics and impatience if a program is not quickly “hitting its numbers,” there needs to be recognition at the board and staff levels of just how difficult policy and advocacy work is and how long it takes. Even achieving some of the advocacy capacity building goals identified in this paper will take time.

The prospective approach outlined in this paper can help make the long timeline for policy work more manageable.



Staying focused on the long term is hard when one can’t see any progress. This prospective approach to policy change evaluation will allow funders and their grantees to conceptualize, document and celebrate many successes in creating building blocks towards ultimate policy change goals. It also will highlight the many ways that a foundation’s work is increasing the field’s capacity to advocate for policies for years to come. This approach will provide information that can make a foundation’s policy change work more concrete and progress more observable. A foundation will be able to tell a compelling, evidence-based story for board members and the public on how its investments in policy change work are making a difference.



Appendix A: Methodology

To gather information for this report, Blueprint Research & Design, Inc. engaged in two simultaneous processes: an internal stakeholder needs assessment and an external scan of the field for existing resources. For the needs assessment, the Blueprint team interviewed 42 stakeholders—grantees, staff, trustees and initiative evaluators. These interviews were focused on understanding current policy change evaluation knowledge and practice as well as describing policy change evaluation needs. Organizational affiliations are accurate as of the date of research. The interviewees were:

Grantees:
Judith Bell, PolicyLink
Nora Benavides, National Farm Workers Service Center, Inc.
Alicia Bohlke, Mercy Medical Center Merced Foundation
Ellen Braff-Guajardo, California Rural Legal Assistance, Inc.
Paul Chao, Asian American Coalition
David Chatfield, Pesticide Action Network North America Regional Center
Tim Curley, Hospital Council of Northern and Central California
Donna Deutchman, ONEgeneration
Caroline Farrell, Center on Race, Poverty & the Environment
Harold Goldstein, California Center for Public Health Advocacy
Gilda Haas, Strategic Actions for a Just Economy
Kenneth Hecht, California Food Policy Advocates Inc.
Linton Joaquin, National Immigration Law Center
Stewart Kwoh, Asian Pacific American Legal Center of Southern California, Inc.
Larry Lavin, National Health Law Program, Inc.
Wendy Lazarus, The Children’s Partnership
Molly O’Shaughnessy, California Safe Schools Coalition
Jennifer Perry, Los Angeles County Foster Youth Health and Education Passport
Carolyn Reilly, Elder Law and Advocacy
Carole Shauffer, Youth Law Center
Susan Steinmetz, Center on Budget and Policy Priorities
Catherine Teare, Children Now
Anthony Wright, Health Access Foundation
Ellen Wu, California Pan-Ethnic Health Network



The California Endowment Staff:
Ignatius Bau, Director, Culturally Competent Health Systems
Larry Gonzales, Senior Program Officer, Fresno
Mario Gutierrez, Director, Agricultural Worker Health and Binational Programs
Tari Hanneman, Senior Program Associate, Local Opportunities Fund
Laura Hogan, Director, Access to Health
Irene Ibarra, Executive Vice President
Peter Long, Senior Program Officer, Children’s Coverage Program
Marion Standish, Director, Community Health and the Elimination of Health Disparities
Gwen Walden, Director, Community Conference and Resource Center
Barbara Webster-Hawkins, Senior Program Officer, Sacramento
Joyce Ybarra, Senior Program Associate, Local Opportunities Fund

The California Endowment Directors:
Arthur Chen, M.D.
Fernando M. Torres-Gil, Ph.D.

Initiative Evaluators:
Bill Beery, Partnership for the Public’s Health
Claire Brindis, Community Action to Fight Asthma; Clinic Consortia Policy and Advocacy Program
Michael Cousineau, Children’s Coverage Program, Express Lane Eligibility
Mary Kreger, Community Action to Fight Asthma
Karen Linkins, Frequent Users of Health Services Initiative, Mental Health Initiative

Blueprint also interviewed 21 leaders in policy change and advocacy evaluation to get their take on the state of the field and identify promising evaluation strategies and tools. Interviewees were identified by The Endowment and Tom David, one of the field’s leading thinkers on this issue. Blueprint then reviewed the published and unpublished reports recommended by these leaders, as well as a number of documents Blueprint obtained independently.

Experts:
Deepak Bhargava, Center for Community Change
Prudence Brown, Chapin Hall Center for Children, University of Chicago
Jennifer Chapman, Action Aid
Wendy Chun-Hoon, Annie E. Casey Foundation
Julia Coffman, Harvard Family Research Project
Patrick Corvington, Innonet
Tom Dewar, Johns Hopkins University
Steve Fawcett, University of Kansas
Andy Gordon, Evans School, University of Washington
Susan Hoechstetter, Alliance for Justice
Mike Laracy, Annie E. Casey Foundation
Lauren LeRoy, Grantmakers in Health
Laura Leviton, Robert Wood Johnson Foundation
Ricardo Millet, Woods Fund of Chicago
Andy Mott, Community Learning Project, NYU
Gary Nelson, Healthcare Georgia Foundation
Lina Paredes, Liberty Hill Foundation
Patti Patrizi, Evaluation Roundtable
Cathy Schoen, Commonwealth Fund
Karen Walker, Public Private Ventures
Heather Weiss, Harvard Family Research Project


Appendix B: Bibliography

Policy/Advocacy References with Some Evaluation Information

Christoffel, Katherine Kaufer. “Public Health Advocacy: Process and Product.” American Journal of Public Health. 2000; 90: 722-726.

Kingdon, John W. Agendas, Alternatives, and Public Policies. HarperCollins College Publishers, 1995.

“Making Change Happen: Advocacy and Citizen Participation.” Meeting proceedings. Co-sponsored by ActionAid USA, The Participation Group, Just Associates, the Asia Foundation, and the Participation Group at the Institute of Development Studies.

Miller, Valerie. “NGO and Grassroots Policy Influence: What is Success?” Just Associates. Institute for Development Research. Vol. 11, No. 5, 1994.

“Mobilizing for Social Change.” Independent Sector Annual Conference, Closing Plenary. Discussion with: Angela Glover Blackwell, Stephen Heintz, Wade Henderson. Chicago, November 9, 2004.

Policy/Advocacy Examples with Some Evaluation Information

“Building Bridges Between Policy and People: Devolution in Practice.” The W. K. Kellogg Foundation. http://www.wkkf.org/Pubs/Devolution/Pub3641.pdf

Hertz, Judy. “Organizing for Change: Stories of Success.” Sponsored by The Wieboldt Foundation, The John D. and Catherine T. MacArthur Foundation, The Chicago Community Trust, The Woods Fund of Chicago, The New Prospect Foundation, and The Mayer and Morris Kaplan Family Foundation. http://www.nfg.org/publications/Organizing_For_Change.pdf


Krehely, Jeff, Meaghan House, and Emily Kernan. “Axis of Ideology: Conservative Foundations and Public Policy.” National Committee for Responsive Philanthropy, 2004.

Salmon, Charles T., L.A. Post, and Robin E. Christensen. “Mobilizing Public Will for Social Change.” Communications Consortium Media Center, Michigan State University, June 2003.

Shetty, Salil. “ALPS: Accountability, Learning, and Planning System.” ActionAid, 2000.

Zald, Mayer N. “Making Change: Why Does the Social Sector Need Social Movements?” Stanford Social Innovation Review, Summer 2004. http://www.ssireview.com

Policy/Advocacy Toolkits with Some Evaluation Information

“Investing in Change: A Funder’s Guide to Supporting Advocacy.” Alliance for Justice, 2004.

Evaluation References

Blair, Jill, and Fay Twersky. “Evaluating Social Change: Meeting Summary.” Proceedings from a May 17, 2004 convening of multiple foundations’ staff. Prepared for the Marguerite Casey Foundation. http://www.informingchange.com/pub.htm

Brown, Prudence, Robert Chaskin, Ralph Hamilton, and Harold Richman. Toward Greater Effectiveness in Community Change: Challenges and Responses for Philanthropy. Chapin Hall Center for Children, University of Chicago.

Chapman, Jennifer, and Amboka Wameyo. “Monitoring and Evaluating Advocacy: A Scoping Study.” ActionAid, January 2001.

Dewar, Thomas. “Better Ways to Describe the Impact of Community Organizing.” A Report for the Marguerite Casey Foundation, July 2004.

---. “A Guide to Evaluating Asset-Based Community Development: Lessons, Challenges and Opportunities.”

Foster, Catherine Crystal. “Public Policy Continuum: Idea to Implementation.” Communication.

“Guidelines for Evaluating Non-Profit Communications Efforts.” Communications Consortium Media Center, 2004. http://www.mediaevaluationproject.org/Paper5.pdf

“Innovative Methodologies for Assessing the Impact of Advocacy.” Action Aid Newsletter Update.

Morris, Angela Dean. “A Survey of Theories of Change within Philanthropy.” A Thesis for Grand Valley State University School of Public Administration, 2004.


Mott, Andrew. “Strengthening Social Change through Assessment and Organizational Learning.” Report on Interviews after Gray Rocks. Community Learning Project. http://www.communitylearningproject.org/Default.aspx

Mott, Andrew. “Strengthening Social Change Through Assessment and Organizational Learning.” Summary of proceedings from the 2003 Gray Rocks Conference. Community Learning Project. http://www.communitylearningproject.org/docs/Gray%20Rocks%20Conference%20Report.pdf

“Philanthropy Measures Up: World Economic Forum Global Leaders Tomorrow Benchmarking Philanthropy Report.” World Economic Forum, Davos. January 28, 2003.

Puntenney, Deborah L. “Measuring Social Change Investments.” Women’s Funding Network, 2002. http://www.wfnet.org/documents/publications/dpuntenney-paper.pdf

“Returning Results: Planning and Evaluation at the Pew Charitable Trusts.” The PCT Planning and Evaluation Department, 2001.

Roche, Chris. “Impact Assessment for Development Agencies: Learning to Value Change.” Oxfam Development Guidelines, Oxfam and Novib, UK, 1999.

Roche, Chris, and A. Bush. “Assessing the impact of advocacy work.” Appropriate Technology, Vol. 24, No. 2, 1997.

Snowdon, Ashley. “Evaluating Philanthropic Support of Public Policy Advocacy: A Resource for Funders.” Prepared for Northern California Grantmakers, July 2004.

“What is Success? Measuring and Evaluating the Impact of Social Justice Advocacy and Citizen Organizing Strategies.” Author unknown, derived in part from Valerie Miller, “NGOs and Grassroots Policy Influence: What is Success?” Institute for Development Research, Vol. 11, No. 5, 1994.

“What’s Evaluation Got to Do With It?” Meeting Report. FACT Community Organizing Evaluation Project.

Evaluation Toolkits

Coffman, Julia. “Strategic Communications Audits.” Communications Consortium Media Center, 2004. http://www.mediaevaluationproject.org/WorkingPaper1.pdf

“Demonstration of ‘Making the Case: A Learning and Measurement Tool for Social Change.’” Women’s Funding Network, 24 March 2005.

“Making the Case: A Learning and Measurement Tool for Social Change: A Guide for Women’s Funding Network Members.” Women’s Funding Network, 2004.

Puntenney, Deborah L. “Measuring Social Change Investments Achievement Vector.” Women’s Funding Network, 2002. http://www.wfnet.org/documents/publications/dpuntenneyvector_model.pdf


Reisman, Jane, and Anne Gienapp. “Theory of Change: A Practical Tool for Action, Results and Learning.” Organizational Research Services. Prepared for the Annie E. Casey Foundation, 2004. http://www.aecf.org/initiatives/mc/readingroom/documents/theoryofchangemanual.pdf

Reisman, Jane, Kasey Langley, Sarah Stachowiak, and Anne Gienapp. “A Practical Guide to Documenting Influence and Leverage in Making Connections Communities.” Organizational Research Services. Prepared for the Annie E. Casey Foundation, 2004. http://www.aecf.org/initiatives/mc/readingroom/documents/influenceleveragemanual.pdf

Evaluation Examples

Coffman, Julia. “Lessons in Evaluating Communications Campaigns: Five Case Studies.” Harvard Family Research Project, March 2003.

---. “Public Communication Campaign Evaluation: An Environmental Scan of Challenges, Criticism, Practice, and Opportunities.” Harvard Family Research Project. Prepared for the Communications Consortium Media Center, May 2002.

---. “Simplifying Complex Initiative Evaluation.” The Evaluation Exchange. Harvard Family Research Project, 1999. v.5 n.2/3: 2-15.

David, Ross, and Antonella Mancini. “Transforming Development Practice: The Journey in the Quest to Develop Planning, Monitoring and Evaluation Systems that Facilitate (Rather than Hinder) Development.” ActionAid.

Dorfman, Lori, Joel Ervice, and Katie Woodruff. “Voices for Change: A Taxonomy of Public Communications Campaigns and Their Evaluation Challenges.” Berkeley Media Studies Group. Prepared for the Communications Consortium Media Center, Nov. 2002.

Duane, Timothy. CCLI Evaluation, 2003.

“A Framework for Planning & Evaluating Social Change.” The Liberty Hill Foundation, 18 November 2002.

Gold, Eva, and Elaine Simon. “Strong Neighborhood, Strong Schools: The Indicators Project on Education Organizing.” Research for Action, 2002.

Hoechstetter, Susan, and Benita Kline. “Making the Challenge: Evaluating Advocacy.” Alliance for Justice and LKM Philanthropic Services.

---. “Meeting the Challenge: Evaluating Advocacy.” An Evaluation of Mazon’s Initiative, “Strengthening Anti-Hunger Advocacy in California.” Alliance for Justice, 2003.

Lodestar Management/Research, Inc. “Liberty Hill Foundation: Developing an Evaluation Model for Social Change.”

McKaughan, Molly. “Capturing Results: Lessons Learned from Seven Years of Reporting on Grants.” The Robert Wood Johnson Foundation, 2003.

“Outreach Extensions: National ‘Legacy’ Outreach Campaign Evaluation.” Applied Research and Consulting, 2002.

“Power, Participation, and State-Based Politics: An Evaluation of the Ford Foundation’s Collaborations that Count Initiative.” Applied Research Center, April 2004. http://www.arc.org/Pages/pubs/collaborations.html

“Rainmakers or Troublemakers? The Impact of GIST on TANF Reauthorization.” Prepared by the Annie E. Casey Foundation.

Speer, Paul W., et al. “People Making Public Policy in California: The PICO California Project.” Vanderbilt University, May 2002.

Trenholm, Christopher. Expanding Coverage to Children: The Santa Clara County Children’s Health Initiative. Mathematica Policy Research, Inc., June 2004.

Wallack, Lawrence, Amy Lee, and Liana Winett. “A Decade of Effort, A World of Difference: The Policy and Public Education Program of the California Youth Violence Prevention Initiative.” School of Community Health, College of Urban and Public Affairs, Portland State University. Funded by a grant from the California Wellness Foundation, 2003.


Appendix C: Examples of Retrospective Policy Change Evaluations

The California Youth Violence Prevention Initiative, funded by The California Wellness Foundation, engaged in policy research, public education campaigns and operation of a statewide policy center to reduce violence in California. It is valuable as an example of an evaluation employing a framework for understanding policy change, including intermediate steps to actual policy change. Also, the evaluation uses multiple methods, including interviews, media content analysis and archival document review, to establish the initiative’s impact.

Source: Wallack, Lawrence, Amy Lee, and Liana Winett. “A Decade of Effort, A World of Difference: The Policy and Public Education Program of the California Youth Violence Prevention Initiative.” School of Community Health, College of Urban and Public Affairs, Portland State University.

State Fiscal Analysis Initiative: This multi-funder initiative focused on creating or supporting organizations in multiple states to analyze state fiscal policies, modeled after the work of the Center on Budget and Policy Priorities at the federal level. In 2003, the Ford Foundation funded the OMG Center for Collaborative Learning to conduct an evaluation, which is not yet complete. This project is the most prominent policy change evaluation currently being undertaken, and the evaluation methodology will likely be instructive for future policy change evaluation endeavors.

Source: Center on Budget and Policy Priorities. http://www.cbpp.org/sfai.htm or Urvashi Vaid, Ford Foundation, (212) 573-5000.

Collaborations that Count, funded by the Ford Foundation, promoted collaboration amongst policy and grassroots organizations in 11 states. The evaluation, conducted by the Applied Research Center, began in the fifth year of this six-year initiative. It is notable for its attempt to sum up the work of a variety of distinct projects, taking place in various locations. Also, it provides another framework for organizing outcomes.

Source: “Power, Participation, and State-Based Politics: An Evaluation of the Ford Foundation’s Collaborations that Count Initiative.” Applied Research Center, April 2004. http://www.arc.org/Pages/pubs/collaborations.html



Appendix D: Comparison of Frameworks to Guide Benchmark Development in Policy Change Evaluation

Liberty Hill
Focus: campaign or community
Categories:
• External—Social Change
• Internal—Organizing and Capacity Building
Samples: No
Strengths: Simple to explain. Applicable in wide range of settings. Emphasis on capacity-building goals. Can provide an overlay to framework that includes more detailed categories for external goals.
Drawbacks: No examples of benchmarks or strategies.

Annie E. Casey
Focus: campaign
Categories:
• Impact Outcomes
• Influence Outcomes
• Leverage Outcomes
Samples: Yes
Strengths: Provides many detailed examples. Best information on measurement strategies, including sample tools. Recognizes that organizations often cannot attain change at impact level, so creating influence and leverage are significant outcomes.
Drawbacks: Focused on community improvement, with policy change as a strategy. Therefore, outcomes not organized in ways most relevant to policy projects. Examples very specific to children and family issues.

Women’s Funding Network
Focus: campaign
Categories (Arena of Change):
• Definitions/Reframing
• Individual/Community Behavior
• Shifts in Critical Mass/Engagement
• Institutional Policy
• Maintaining Current Position/Holding the Line
Samples: Yes
Strengths: Built on theory about what makes change happen that grows out of interviews with grantees. Single tool connects developing theory of change with identifying benchmarks. Written in very accessible format. Included as part of online grant reporting tool that can help program officers look across grantees.
Drawbacks: No concept to capture capacity-building outcomes. Policy is viewed as a strategy for social change rather than focal point of process. Fewer examples than other tools.

Action Aid/Institute for Development Research
Focus: community
Categories:
• Policy Change
• Strengthening Civil Society and Building Social Capital
• Increasing Democracy
Samples: No
Strengths: Applicable to wide range of projects. Emphasizes capacity-building.
Drawbacks: While this framework can provide guidance on benchmarks, it includes no examples.

Collaborations that Count
Focus: community
Categories:
• Infrastructure Outcomes
• Developmental Outcomes
• Policy Outcomes
Samples: Yes
Strengths: Emphasizes capacity-building as well as policy change. Provides examples especially relevant to collaborative efforts.
Drawbacks: Difference between definition of infrastructure and development outcomes is fuzzy.

Alliance for Justice
Focus: campaign
Categories:
• Changes: Public Will; Visibility; Partnerships; Funding and Resources; Policy and Regulation; Service Practices
• Outcomes; Progress Towards Goals; Capacity Building Efforts
• Cross-Cutting Activities: Policy Change; Constituency Involvement; Network Building; Coalition Building; Mobilization; Media Advocacy
Samples: Yes
Strengths: Most detailed set of policy-relevant interim and long-term outcomes.
Drawbacks: Not built on theory of how change happens so doesn’t draw connections or suggest any order between the outcomes.

Appendix E: Sources and Details for Benchmark Frameworks

Collaborations that Count
Source: “Power, Participation, and State-Based Politics: An Evaluation of the Ford Foundation’s Collaborations that Count Initiative.” Applied Research Center, April 2004.

Benchmark Details:
Infrastructure Outcomes - new structures or processes created, new groups incubated or established.
Examples:
• Provided technological support for collaboration member groups.
• Changed structure from regional to statewide, issue-based coalitions.
• Established organization development funds for groups.

Developmental Outcomes - key leadership training, expanding a group’s membership, developing research and analytical frames, and constituent popular education.
Examples:
• Strengthened partnerships with Latino community.
• Held over 100 workshops on tax fairness.
• Held regional power-building meetings.

Policy Outcomes - successfully winning new policies or modifying or blocking policies that did not advance the interests of the collaborative’s constituents.
Examples:
• Budget cuts to social programs were limited.
• Defended rights of 1.4 million workers against unlawful workplace termination.
• Prevented increase in sales tax on food.

Annie E. Casey
Source: “Practical Guide to Documenting Influence and Leverage in Making Connections Communities,” by Jane Reisman, Kasey Langley, Sarah Stachowiak, and Anne Gienapp (Organizational Research Services). Prepared for the Annie E. Casey Foundation, 2004.

Benchmark Details:
Impact Outcomes - change in a condition of well-being for individuals and families directly due to/served by the program.


Examples:! A greater percentage of parents and

young adults employed.! Families access more services and supports.! A greater percentage of families own

their own homes.

Influence Outcomes - changes incommunity environments, relationships,institutions, organizations or servicesystems that affect individuals andfamilies, such as changes in issuevisibility, community norms, businesspractices and public will.Examples:! Health care providers offer more

culturally and linguistically appropriate services.

! The school environment is morecomfortable for parents.

! The sense of “neighborhood identity”increases within the community.

Leverage Outcomes - changes ininvestments (monetary, in-kind) thatsupport “impact” and “influence” changes.Examples:! Commercial redevelopment attracts

private investments.! The state government increases

spending on subsidized childcare.! A private foundation provides funds

for an early childhood pilot project.

Women’s FundingNetworkSource: Making the Case: A Learningand Measurement Tool for SocialChange: A Guide for Women’s FundingNetwork Members, Women’s FundingNetwork, 2004

Benchmark Details:Shifts in Definitions/ Reframing - is theissue viewed differently in your communityor the larger society as a result of your work?

Shifts in Individual/CommunityBehavior - are the behaviors you aretrying to impact in our community or the larger society different as a result of your work?

Shifts in Critical Mass/Engagement -is critical mass developing; are people in your community or the larger societymore engaged as a result of your work?

Shifts in Institutional Policy - Has aspecific organizational, local, regional,state or federal policy or practice changed as a result of your work?

Maintaining Current Position/Holdingthe line - Has earlier progress on the issuebeen maintained in the face of opposition,as a result of your work?

Action Aid/Institute for Development Research
Source: Monitoring and Evaluating Advocacy: A Scoping Study, Jennifer Chapman and Amboka Wameyo for Action Aid, January 2001.

Benchmark Details:
Policy Change - specific changes in the policies, practices, programs, or behavior of major institutions that affect the public, such as government, international financial bodies and corporations.

APPENDIX E: SOURCES AND DETAILS FOR BENCHMARK FRAMEWORKS

Strengthening Civil Society and Building Social Capital - increasing the capacity of civil society organizations, enabling them to continue to work or to undertake new advocacy, and increasing overall social capital, building the relations of trust and reciprocity that underpin the cooperation necessary for advocacy.

Increasing Democracy - enlarging democratic space, making public institutions transparent and accountable, and expanding individuals’ attitudes, beliefs and awareness of themselves as citizens with rights and responsibilities.

Alliance for Justice
Source: Investing in Change: A Funder’s Guide to Supporting Advocacy, Alliance for Justice, December 2004.

Benchmark Details:
Outcomes - demonstrate success in obtaining results related to one or more of the organization’s goals or objectives.
Examples:
- Final adoption of strong regulations that enforce new anti-pollution laws.
- Successful leveraging of public dollars.

Progress Toward Goals - tracks the steps taken toward achievement of the grantee’s advocacy goals. There are two types:
- Key activities accomplished
  - Example: holding a series of meetings with staff from the governor’s office.
- Incremental results obtained
  - Example: a request by state agency officials for more discussion of the organization’s suggestions.

Capacity-Building Efforts - demonstrate the strengthening of a grantee’s capacity to achieve advocacy success.
Examples:
- Developing relationships with key regulators.
- Motivating members of the organization’s network to contact administrative officials.

Audience: Each of these three benchmark categories can be grouped by audience: Executive Branch, Executive Officials (Administrative Agencies, Special Purpose Boards), Judicial Branch, Legislative Branch and Election-Related Activity.

Cross-Cutting Activities: Each of these three benchmark categories could involve the following activities: Policy Change, Constituency Involvement, Network Building, Coalition Building, Mobilization and Media Advocacy.

Blueprint Research & Design, Inc.
720 Market Street, Suite 900
San Francisco, CA 94102
415.677.9700
www.blueprintrd.com

The California Endowment
1000 North Alameda Street
Los Angeles, CA 90012
800.449.4149
www.calendow.org

CPA/The Challenge of Assessing Policy, TCE 0201-2006