RBM Training Kit: Module 7


Page 1: RBM Training Kit: Module 7

TRAINING KIT

MODULE 7

RESULTS-BASED MANAGEMENT: MONITORING & EVALUATION AND INFORMATION SYSTEM

Page 2: RBM Training Kit: Module 7

Summary

● Monitoring and Evaluation concepts in RBM

● Differences and similarities between monitoring and evaluation

● Monitoring

● General evaluation concepts

● Information system in support of M&E

Page 3: RBM Training Kit: Module 7

The Logical Framework can be used as the foundation of a programme or project's Monitoring & Evaluation (M&E). The 2nd and 3rd columns of the Logical Framework Matrix (LFM) constitute the basic elements of an M&E system: they define the performance indicators, set the targeted objectives to be achieved, and describe the system's information sources.

Based on the World Bank's Logical Framework Handbook

Link between Logical Framework and M&E (1/2)
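
To make this link concrete, here is a minimal sketch in Python (the field names are illustrative, not taken from the handbook) of one LFM row whose 2nd- and 3rd-column elements drive M&E directly:

```python
from dataclasses import dataclass

@dataclass
class LogframeRow:
    """One row of a Logical Framework Matrix (LFM).

    The indicator, target and verification source (columns 2 and 3)
    are exactly the elements an M&E system needs.
    """
    objective: str            # column 1: narrative summary
    indicator: str            # column 2: performance indicator
    target: float             # column 2: targeted value to achieve
    verification_source: str  # column 3: where the data comes from

# Example row for a hypothetical outcome
row = LogframeRow(
    objective="Increased smallholder maize yields",
    indicator="Average yield (t/ha) in programme districts",
    target=2.5,
    verification_source="Annual agricultural household survey",
)
print(f"Monitor '{row.indicator}' against target {row.target} "
      f"using: {row.verification_source}")
```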

Page 4: RBM Training Kit: Module 7

Link between Logical Framework and M&E (2/2)

[Diagram: the project strategy (plan and operational modalities) is the base for both the detailed operational plan and the monitoring and evaluation system. M&E system design leads to data collection and information management, reflection for improvement, and communication on results and reports. Implementation generates field data and information on project outputs, outcomes and impacts, which feed back through M&E into continuous adjustment, adaptation and development.]

Sources: IFAD, 2001 and 2002

Page 5: RBM Training Kit: Module 7

Monitoring Performance in RBM

Page 6: RBM Training Kit: Module 7

Monitoring = a systematic process of verifying the effectiveness [effects] and efficiency [outcomes/outputs] of a development intervention's implementation (programme, project) in order to:

– Assess progress towards results and identify insufficiencies (or gaps); and,

– Recommend corrective measures for optimising desired results.

In the life cycle of a programme, monitoring does not begin before the implementation phase. It is based on indicators defined during the design phase of the programme.

What is Monitoring?

Page 7: RBM Training Kit: Module 7

Results-Based Management (RBM) = a management strategy for a project/programme focused on performance, the attainment of outputs and the accomplishment of direct effects.

Under an RBM approach, "good monitoring":

is continuous and systematic;

involves key stakeholders in the development intervention; and,

pays particular attention to the achievement of anticipated results.

In some programmes, key stakeholders include beneficiaries, the executing agency, programme staff, public administration ministries, etc.

Distinction of Monitoring in RBM

Page 8: RBM Training Kit: Module 7

Impacts: General improvements in the medium and long terms that a development intervention (policy, programme, project) can bring to society.

Effects: Initial and intermediary effects resulting from a development intervention due to beneficiaries' use of the outcomes/outputs generated by that intervention.

Outcomes/Outputs: Goods and services produced and delivered by a development intervention.

Interest in Monitoring in RBM

Level | Description
Goal / Impact | High objective to which the programme contributes
Outcome | The programme's raison d'être: changes, in line with beneficiaries' expectations, to be achieved by the programme
Output | Goods and services to be delivered during programme implementation
Activity | Tasks and actions necessary to transform inputs into outputs
Input/resource | Human, financial and material resources necessary to undertake activities

[Diagram annotation: traditional monitoring concentrates on the lower levels of the chain (inputs, activities, outputs); RBM monitoring focuses on results (outputs, outcomes, impact); Managing for Development Results (MfDR) spans the whole chain.]

Page 9: RBM Training Kit: Module 7

Monitoring = a continuous process of systematically collecting information on chosen indicators for an ongoing development intervention.

Evaluation = a systematic assessment of the design, execution, efficiency, effectiveness, process, and results of an ongoing or completed programme/project.

Monitoring is continuous; evaluation is occasional or periodic (undertaken at a specific time).

Evaluation can take place at different stages of the programme cycle and often draws on external specialists who are not involved in the execution of the programme being evaluated.

Difference between Monitoring and Evaluation

Page 10: RBM Training Kit: Module 7

Similarity between Monitoring and Evaluation (1/2)

Managing for development results:

Monitoring: a continuous process to systematically collect data on selected indicators during programme implementation.

Evaluation: a systematic and objective assessment of an implemented programme from beginning to end.

[Timeline: Idea → Approval → Implementation (Years 1 to 5) → Closing, with continuous monitoring for results throughout implementation and evaluations of different types at specific points.]

Page 11: RBM Training Kit: Module 7

Criteria | Monitoring | Evaluation
Frequency | Regular, continuous | Occasional, periodic
Coverage | All programmes | Selected programmes and aspects
Objective | Links programme activities and resources to objectives/results | Identifies the causal contributions of activities to objectives/results
Positioning | Internal activity | Internal, external or participatory
Data | Generally from beneficiaries | Based on a sample
Information depth | Compares results to targets. Focus: WHAT | Examines unachieved results. Focus: WHY
Cost | Spread over the entire duration | Can be high at specific points
Use | Constant management and improvement of the programme and its performance | Taking major decisions on a programme
Dissemination | Progress reports and alerts on problems | Provides lessons and recommendations and highlights significant achievements
Responsibility | Programme Director | Evaluator, with the manager and staff

Similarity between Monitoring and Evaluation (2/2)
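
As an illustration of the "Information depth" row above, here is a minimal sketch (indicator names and numbers are made up) of the monitoring side: comparing results to targets (the WHAT) and flagging the gaps that an evaluation would then explain (the WHY):

```python
# Monitoring compares actuals to targets; large gaps become
# candidates for evaluation. All values below are illustrative.
targets = {"wells_built": 40, "households_served": 1200, "yield_t_ha": 2.5}
actuals = {"wells_built": 22, "households_served": 1150, "yield_t_ha": 1.4}

for indicator, target in targets.items():
    achieved = actuals[indicator] / target
    status = "on track" if achieved >= 0.9 else "GAP - investigate why"
    print(f"{indicator}: {achieved:.0%} of target ({status})")
```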

Page 12: RBM Training Kit: Module 7

To improve programme performance and the quality of achievements.

To learn from experiences on the ground.

To develop clear corrective measures and take good decisions.

Finally, to ensure the achievement of anticipated results and to plan while executing.

Why Monitor?

Page 13: RBM Training Kit: Module 7

At all sites where the programme takes place.

Involving communities and beneficiaries.

With data collection tools.

According to the indicators set during programme design.

By assessing quantitatively, qualitatively and in real time all programme achievements …

Where and how does monitoring intervene?

Page 14: RBM Training Kit: Module 7

Be careful of resistance!!!

Page 15: RBM Training Kit: Module 7

Think about and organise for M&E right from the idea stage (the 1st stage of the project cycle) and throughout implementation.

Involve key stakeholders in developing the M&E plan for the programme (promote agreement on anticipated results and the required performance, strengthen engagement and trust, etc.).

Exhibit firmness and rigour in executing the programme's M&E plan.

How to do successful monitoring?

Page 16: RBM Training Kit: Module 7

It is strongly recommended to clarify who the stakeholders in monitoring are and to specify their roles and responsibilities ("Who does what and when?").

It is strongly recommended to include the identified stakeholders right from the start when putting together the M&E plan for the programme.

It is necessary to train these stakeholders in M&E concepts, according to their assigned roles and responsibilities.

Who participates in Monitoring?

Page 17: RBM Training Kit: Module 7

Programme Evaluation

Page 18: RBM Training Kit: Module 7

Evaluation helps to answer questions such as:

What are the programme's effects and impacts?

Is the programme evolving as anticipated?

Were the accomplished activities executed as planned (quantity, quality, duration)?

What contributed to the changes identified through monitoring?

Are the differences identified between the various programme sites due to the way the programme was operated?

Who really benefits from the programme and its ripple effects?

Definition of Evaluation

A systematic and objective assessment of the conception, execution and results of an ongoing or completed project, programme or policy, in order to determine its relevance and the attainment of its objectives, as well as its development efficiency, effectiveness, impact and sustainability.

Definition and Questions about Evaluation

Page 19: RBM Training Kit: Module 7

1. Descriptive questions: show what is happening (describe the process, prevailing conditions, organisational relationships and the points of view of various stakeholders in the programme).

2. Normative questions: compare what is happening with what was planned (activities, achievements, fulfilled or unfulfilled objectives). They can also relate to resources/inputs, activities, and outcomes/outputs.

3. Cause-and-effect questions: focus on results and try to determine to what extent the programme is fuelling change.

Evaluation: 3 fundamental questions

Page 20: RBM Training Kit: Module 7

Project | A single development intervention executed on one or many sites.
Programme | An intervention that includes various projects contributing to a common objective.
Policy | Norms, instructions or rules established by an organisation to regulate, organise or implement development decisions.
Organisation | The multitude of intervention programmes implemented by an organisation.
Sector | Interventions in the same sector, such as education, health, forestry or agriculture.
Theme | Interdisciplinary themes like equality, the gender approach, or global public goods.
Country assistance | Progress on the national development plan, the effects of development aid, and lessons learned.

Source: Morra Imas & Rist (2009).

What can be evaluated?

Page 21: RBM Training Kit: Module 7

1. The need for evidence on what works (bad performance and budgetary restrictions can be damaging!!!).

2. The need to improve programme execution and the performance of public organisations (for example, to improve the design of social programmes and the methods for targeting beneficiaries).

3. The need for reliable information on the sustainability of the results obtained by a programme (does the programme lead to sustainable solutions by addressing the causes of problems?).

Why must an intervention be evaluated?

Page 22: RBM Training Kit: Module 7

Formative evaluation: seeks to improve performance; usually undertaken during programme implementation. Sometimes called a process evaluation when it studies the internal dynamics of organisations. Example: mid-term evaluation.

Summative evaluation: conducted at the end of the programme (or at the end of a programme phase); seeks to determine the level of achievement of anticipated results. Sometimes called an ex-post evaluation. Example: impact evaluation.

Prospective evaluation: assesses the potential results and objectives of a programme before its launch and their probability of being achieved. Conducted before the launch; also known as an ex-ante evaluation. Example: cost-benefit analysis.

Sources: OECD (2002); Morra Imas & Rist (2009).

Evaluation types and the programme cycle (1/2)

Page 23: RBM Training Kit: Module 7

Evaluation types and the programme cycle (2/2)

[Timeline: Idea → Approval → Implementation (Years 1 to 5) → Closing. The ex-ante (prospective) evaluation takes place before approval; the mid-term (formative) evaluation during implementation; the ex-post (summative) evaluation at or after closing. Monitoring runs continuously throughout implementation, while evaluations of different types occur at these specific points.]

Page 24: RBM Training Kit: Module 7

Programme evaluation: evaluation of a set of structured development interventions for achieving specific development objectives at the sector, country, regional or global level.

Project evaluation: evaluation of a single development intervention designed to achieve specific objectives with set resources and work plan, often in the context of a larger programme.

The evaluation can be:

• Internal: by evaluators attached to the donor or implementing organisation.

• External: by evaluators from outside the donor or implementing organisation.

• Independent: by evaluators who are not linked to those in charge of design or execution.

Project or programme evaluation

Page 25: RBM Training Kit: Module 7

Relevance | The extent to which programme objectives correspond to beneficiary expectations, country needs, global priorities, and partner and donor policies.
Effectiveness | The measure of the results a programme has achieved, or is in the process of achieving, bearing in mind their relative importance.
Efficiency | The measure of how well programme resources have been transformed into outcomes/outputs at the best cost. Sometimes requires an economic analysis of different alternatives.
Impact | Assessment of the long-term effects, positive and negative, primary and secondary, resulting from a programme, directly or otherwise, intentionally or otherwise.
Sustainability | Assessment of the durability of the benefits resulting from a development intervention after the programme's completion; the probability of continued long-term benefits.

Source: OECD (2002)

What to evaluate: the 5 main criteria

Page 26: RBM Training Kit: Module 7

Source: Adapted from Rodriquez-Garcia & Kusek (2007). Translated by MM

Evaluation from a RBM perspective

Page 27: RBM Training Kit: Module 7

Evaluation from a RBM perspective

Programming strategy and results chain, with illustrative examples:

Goal / Impact | Long-term results, e.g. a reduced number of people living in poverty as a consequence of an agricultural programme.
Outcome | Mid-term outcomes (what beneficiaries achieve due to new access to services, etc.), e.g. greater agricultural yields.
Output | Short-term results or outputs (what managers or those responsible for the project deliver), e.g. access to services, an awareness campaign.
Activity | What project managers do to achieve the planned outputs, e.g. preparatory meetings, training events.
Input/resource | What project managers and development partners provide as resources for the project, e.g. agricultural inputs.

Source: Adapted from Rodriquez-Garcia & Kusek (2007). Translated by MM

Page 28: RBM Training Kit: Module 7

As a general rule, an evaluation becomes necessary when periodic data from monitoring show that ongoing performance is clearly and significantly different from what was planned.

Source: Adapted from Kusek & Rist (2004) by MM

[Chart: planned versus actual performance over time, with a mid-term evaluation and an impact evaluation marked at the points where the two curves diverge.]

When is evaluation necessary?

Page 29: RBM Training Kit: Module 7

A theory of change describes a plan for social change, from the formulation of hypotheses before design through to the definition of long-term objectives. This theory is often presented in the form of a diagram (logical model) which analyses the links between resources and results. It is also often presented in the form of a table outlining the stages, data or resources through to the achievement of the objective envisioned by the programme (logical framework).

Source: Grantcraft (2006)

Building a theory of change allows an evaluator to:

Understand the philosophy upon which a programme is based.

Examine existing evidence through a research synthesis.

View a complex programme as a chain of interventions aimed at behavioural changes.

Theory of Change and Evaluation

Page 30: RBM Training Kit: Module 7

Quantitative methods: numerically assess certain aspects of the object of evaluation. More suitable for formulating statistical and generalisable conclusions. Example: survey. Shortcoming: sampling (the question of external validity).

Qualitative methods: often used to probe the qualitative aspects of the object of evaluation in depth. Valued for their flexibility and ease of use. Example: focus group. Shortcoming: the evaluator plays the role of a facilitator.

Mixed methods: a complementary combination of quantitative and qualitative methods in order to collect quantifiable data and qualitative assessments. Example: direct observation during a survey interview. Shortcoming: requires a sound methodological combination and careful triangulation.

Evaluation and collection methods (1/2)

Page 31: RBM Training Kit: Module 7

Data collection methods arranged along a continuum from informal/less structured to formal/more structured: interviews with stakeholders, community forums, site visits, reviews of official registers (GIS and administrative data), participatory observation, interviews with well-informed people, focus groups, surveys, questionnaires, direct observation, panel surveys, field experiments, inventories.

Sources: Kusek & Rist (2004); Morra Imas & Rist (2009). English translation by MM

Evaluation and collection methods (2/2)

Page 32: RBM Training Kit: Module 7

Participatory evaluation: a collective evaluation method in which stakeholders and beneficiaries collaborate to design and conduct an evaluation and draw its conclusions. Its basic principles are:

Involving beneficiaries in setting objectives and priorities, selecting questions and taking decisions.

Ownership of the evaluation by the participants.

Ensuring that the evaluation concentrates on the methods and results that matter to the participants.

Participants working together, facilitating and promoting the unity of the group.

All aspects of the evaluation being understandable and relevant to all participants.

Evaluators acting as facilitators; participants acting as decision-makers and evaluators.

Sources: OECD (2002); Morra Imas & Rist (2009)

Participatory evaluation and social programmes

Page 33: RBM Training Kit: Module 7

Impact evaluation is one of the methods that contribute to training development specialists and to improving the design of policies and programmes.

It focuses especially on cause-and-effect relationships in programmes (structured around a key question: what is the impact, or the causal effect, of a programme on a given result?). This causal aspect is vital.

It aims to examine which changes can be directly and exclusively attributed to the programme.

Impact Evaluation: cause and attribution

Page 34: RBM Training Kit: Module 7

Programme impact is the difference between the effects observed as a result of the programme and those that would be observed with no programme intervention.

Challenge: it is difficult to observe the situation of beneficiaries simultaneously with the programme… and without the programme!

What is impact in RBM?

Page 35: RBM Training Kit: Module 7

To estimate the impact (or causal effect) of a programme, a counterfactual is needed, i.e. the result the beneficiaries would have obtained if the programme were not there.

To estimate the counterfactual, a control group (or comparison group) is needed which meets these criteria:

The control group must have the same characteristics as the programme beneficiaries;

The only difference between the two groups is that members of the control group are not programme beneficiaries.

Impact Evaluation: The need for a counterfactual
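
A minimal sketch of the idea, with made-up numbers: once a valid control group stands in for the counterfactual, the impact estimate is simply the difference in mean outcomes between the two groups.

```python
# With a valid control group, the average treatment effect is the
# difference in mean outcomes between beneficiaries and the control
# group standing in for the counterfactual. Numbers are illustrative.
beneficiaries = [2.9, 3.1, 2.7, 3.4, 3.0]   # e.g. yields with the programme
control_group = [2.1, 2.4, 2.0, 2.3, 2.2]   # comparable people, no programme

ate = (sum(beneficiaries) / len(beneficiaries)
       - sum(control_group) / len(control_group))
print(f"Estimated programme impact: {ate:.2f}")  # 0.82 in this example
```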

Page 36: RBM Training Kit: Module 7

Before-after (or pre-post): simply compares the results of the beneficiary group before and after programme implementation.

Warning: many other factors that vary over time could also influence the observed results.

With-without: simply compares members who have been accepted into the programme with members who are not participating in the programme.

Warning: those who are not participating in the programme could be systematically different from those who are.

Impact Evaluation: 2 common errors
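
One standard quasi-experimental answer to both errors (not named on this slide) is the difference-in-differences estimator, which combines the two comparisons so that each corrects the other's bias. A sketch with illustrative numbers:

```python
# Difference-in-differences: combining the before-after and
# with-without comparisons nets out both the background time trend
# and pre-existing group differences. Numbers are illustrative.
treated_before, treated_after = 2.0, 3.0   # beneficiary group means
control_before, control_after = 1.8, 2.3   # comparison group means

change_treated = treated_after - treated_before   # before-after alone: 1.0
change_control = control_after - control_before   # background trend: 0.5
did_estimate = change_treated - change_control    # impact estimate: 0.5
print(f"Difference-in-differences impact estimate: {did_estimate:.2f}")
```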

Page 37: RBM Training Kit: Module 7

Experimental design: randomised; generally considered the most robust evaluation design. A randomly assigned control group provides a counterfactual comparison free from various biases and distortions, and a before-after comparison then helps assess the programme's real contribution.

Quasi-experimental design: used when the random composition of comparison groups is not possible. Less robust than an experimental design; used depending on the context and the resources available.

Impact Evaluation: Study designs

Page 38: RBM Training Kit: Module 7

Implementing an evaluation

Evaluation preparation
• Decide what to evaluate / define objectives / establish hypotheses / theory of change / results chain / choose indicators

Evaluation implementation
• Choose the evaluation methodology / ensure ethics / define the evaluation team / determine the schedule / develop the budget

Sample identification
• Decide the sample size / choose the sampling methodology (see the sketch after this list)

Data collection
• Decide what data to collect / identify the data collection company / develop the list of questions and test it / field work / data validation

Produce and disseminate results
• Data analysis / evaluation report writing / present and discuss results with decision-makers / disseminate results
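
For the "decide the sample size" step, here is a minimal sketch using the textbook formula for comparing two group means, n = 2 * ((z_(1-alpha/2) + z_power) * sigma / delta)^2; the inputs below are illustrative assumptions, not values from the training kit.

```python
import math
from statistics import NormalDist

def sample_size_per_group(sigma, min_effect, alpha=0.05, power=0.8):
    """Textbook two-group sample size for detecting a difference in
    means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / min_effect) ** 2
    return math.ceil(n)

# e.g. outcome std. dev. 1.0, smallest effect worth detecting 0.4
print(sample_size_per_group(sigma=1.0, min_effect=0.4))  # -> 99 per group
```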

Page 39: RBM Training Kit: Module 7

Evaluation conclusions

Provide clear, precise responses to the evaluation questions posed in the TORs (show causal relationships).

Very often contain value judgements (potential conflicts).

Ethically, a conclusion must be linked to the data and the analysis.

All the questions must be answered in the conclusions. Otherwise…

Methodological limits and context: highlight the robustness of the link between data and conclusions and whether the analysis can be generalised.

Evaluation recommendations

Represent suggestions for improving, reforming or renewing the programme.

Draw on one or two conclusions vis-à-vis the problems.

Are prioritised and ranked, with specific recipients.

Source: Euréval (2010)

Evaluation conclusions and recommendations

Page 40: RBM Training Kit: Module 7

Disseminating evaluation results

A stage of the evaluation process in its own right, coming after the production and validation of the evaluation report but planned from the start.

An indispensable stage if potential users are to make use of the evaluation (transparency is essential).

Different communication channels are employed for different users.

Using evaluation results

For taking decisions, helping to form judgements, and knowing the programme's effects.

Results can be used differently by different users.

Use must be anticipated from the beginning and guide the evaluation launch.

Source: Euréval (2010)

Sharing/use of evaluation results

Page 41: RBM Training Kit: Module 7

Information system in support of M&E

Page 42: RBM Training Kit: Module 7

Source: IFAD, 2002

M&E and reporting: Importance of an information system

Design of a management information system in support of M&E (spanning project development, project implementation, and the end of evaluation implementation):

1. Develop the logical framework matrix (indicators, verification sources).

2. Develop the M&E matrix (indicators, baseline, characteristics).

3. Collect and aggregate data: data collection forms feed aggregation in a database.

4. Analyse the data to identify early findings.

5. Communicate the results.

Page 43: RBM Training Kit: Module 7

After finalising the logical framework of the programme, develop a monitoring and evaluation plan for the programme, including:

Definition of the data collection methodology for monitoring (sources, frequency, transmission mode, etc.);

Definition of the assessment and analysis methodology for the collected data;

Designation of support mechanisms for disseminating monitoring information;

Definition of the methodology for the different programme evaluations and the creation of their terms of reference;

Definition of the methodology for undertaking different audits (institutional and technical), if necessary, and the development of the terms of reference pertaining to them…

After the Logical Framework, the M&E Plan…
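
As a hypothetical illustration (the field names are ours, not a standard), one entry of such an M&E plan can be captured as structured data covering the elements listed above:

```python
# One illustrative M&E plan entry: collection methodology, source,
# frequency, transmission, analysis and dissemination in one record.
me_plan_entry = {
    "indicator": "Share of trained farmers adopting new practices",
    "data_source": "Quarterly field monitoring visits",
    "frequency": "quarterly",
    "transmission_mode": "district officer -> central M&E database",
    "analysis_method": "compare against annual target, by district",
    "dissemination": ["quarterly progress report",
                      "steering committee brief"],
    "responsible": "Programme M&E officer",
}
for key, value in me_plan_entry.items():
    print(f"{key}: {value}")
```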

Page 44: RBM Training Kit: Module 7

After developing a monitoring and evaluation plan for the programme, it's time to design and implement a Management Information System in support of the programme's monitoring and evaluation:

Create an inventory of the existing information system (infrastructure, protocols, contacts, etc.);

Analyse the existing information system, conceptually and functionally, and identify gaps based on the monitoring and evaluation needs of the programme in question;

Create an M&E database;

Draw up an implementation schedule before putting the M&E database into operation;

Finalise the establishment of the information system;

Train the users of the information system.

After the M&E Plan, the Information System…

Page 45: RBM Training Kit: Module 7

Definition of a database

In general, a database is a set of organised documents, generally structured in columns and in table form.

For electronic databases, computer scientists speak of a dataset structured and organised in such a way that a computer application can quickly select the desired elements from it.

The most common type of database in the world is the relational database. In such a database, the data are not held in a single table but in different tables with links between them.

Information System: importance of the database
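
A minimal sketch of such a relational database, using Python's built-in sqlite3 module (table and column names are illustrative): indicators and their measurements sit in separate tables linked by a key, and a query joins them.

```python
import sqlite3

# Two linked tables, as the slide describes: the link between them
# lets one query combine indicator definitions with collected data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE indicators (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        target  REAL
    );
    CREATE TABLE measurements (
        indicator_id INTEGER REFERENCES indicators(id),
        period       TEXT,
        value        REAL
    );
""")
con.execute("INSERT INTO indicators VALUES (1, 'Wells built', 40)")
con.executemany("INSERT INTO measurements VALUES (?, ?, ?)",
                [(1, "2023-Q1", 8), (1, "2023-Q2", 14)])

# Join the tables to compare progress to date against the target
for row in con.execute("""
        SELECT i.name, SUM(m.value), i.target
        FROM indicators i JOIN measurements m ON m.indicator_id = i.id
        GROUP BY i.id"""):
    print(row)   # ('Wells built', 22.0, 40.0)
```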

Page 46: RBM Training Kit: Module 7

Makes data immediately available when the need arises;

Always ensures the availability of data in a format which allows for different analyses without manual calculations;

Greater effectiveness and precision in the management and use of data;

Allows different data elements to be compared;

Quick and precise handling of large data sets;

Shortens data analysis processes and the time spent managing data;

Transforms disparate data into consolidated information;

Improves the quality, speed and understanding of information;

Supports spatial analysis with the help of a geographical information system (GIS) and the presentation of data on maps for easy comprehension by decision-makers…

Advantages of a M&E Database
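
A minimal sketch (the records are illustrative) of the "disparate data into consolidated information" advantage: site-level records are aggregated into one consolidated view without manual calculation.

```python
from collections import defaultdict

# Site-level monitoring records, as they might arrive from the field
records = [
    {"site": "North", "indicator": "households_served", "value": 310},
    {"site": "South", "indicator": "households_served", "value": 280},
    {"site": "North", "indicator": "wells_built", "value": 9},
    {"site": "South", "indicator": "wells_built", "value": 13},
]

# Consolidate: total each indicator across all sites
totals = defaultdict(float)
for r in records:
    totals[r["indicator"]] += r["value"]

for indicator, total in sorted(totals.items()):
    print(f"{indicator}: {total:g}")
```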

Page 47: RBM Training Kit: Module 7

Never expect technology to "have all the answers" when it comes to M&E.

Take into consideration government policy on Information and Communication Technology when handling the databases of public agencies.

Keep daily track of the functionality and security of the database in order to ensure the integrity, availability and quality of the data.

Identify the data which must be included in the database.

Determine what software or application will be used for analysis.

Bear in mind that the availability of spatial analysis software is helpful.

Take all necessary measures for merging a new database with existing ones (data transfer).

Identify capacity-building needs in design and management from the start, in order to improve database usage and information access.

Using databases: To remember!!!

Page 48: RBM Training Kit: Module 7

Thank you for your attention.

AfCoP Web Site: http://copmfdrafrica.ning.com