Designing an M&E System for Impact Evaluation: Tips for Program Designers and Evaluators!
AUTHOR:
Athena Infonomics

Not another blog! Isn't measurement old hat?

Excitingly, sixteen new Green Climate Fund (GCF) projects have just been invited to join the Independent Evaluation Unit (IEU)'s Learning-Oriented Real-Time Impact Assessment (LORTA) program this year. Based on ownership, commitment, financial and logistical feasibility, geographic distribution, and measurement capabilities, the IEU will select six of these projects to receive enhanced and timely impact measurement support.

Author Dr. Jyotsna Puri (Jo) in Madagascar, where the IEU has a LORTA project.

Two of the questions we are asked most often are:

  • How do you set up a measurement system for causal impact?
  • What is the difference between a system that monitors programs and one that measures impact?

First, the difference between a monitoring system and an impact measurement system:

A regular monitoring and evaluation (M&E) system can help determine whether the intended outputs of the programme have been realized, whereas an impact evaluation can measure whether the outcomes and impacts have been realized and were caused by the programme. If a programme is implemented only partially or not as planned, a system that only examines impact (by setting up baseline and endline data and counterfactuals) cannot distinguish between a badly designed programme and a badly implemented one.

A strong programme monitoring system can provide important information about the quality of programme implementation, targeting, uptake, attrition, and so on. By itself, it cannot tell you much about impact. A good programme monitoring system can thus shed light on implementation fidelity and on other sources of bias for impact evaluations, such as spill-overs and confounding factors. It can also help indicate whether the hypothesized causal pathway is working as anticipated and provide evidence on whether the theory of change holds. A good impact measurement system (counterfactual-based evaluation) requires a good monitoring system (factual assessment of programme implementation).

Source: Howard White, various impact evaluation presentations.

Second, what are the important components of a regular programme monitoring system?

Regular programme monitoring systems (should) track data at every stage of programme implementation, including the following:

  • who is being served by the programme;
  • how beneficiaries are targeted;
  • the service provided and the frequency, duration, and quality of programme delivery; and,
  • uptake, drop-out and programme outputs.

A good M&E system must have a clear set of indicators, as well as pre-specified rules or protocols for measuring them: who collects the data, when, how each indicator is computed, and what its definition and unit of measurement are. The protocols should also specify the technology used to collect the data and the staff who compile and interpret it. All of these are essential for understanding and measuring field efficacy and implementation fidelity.
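To make this concrete, here is a minimal sketch of how one such indicator protocol might be recorded in a structured, machine-readable form. The schema, field names, and example values below are hypothetical illustrations, not a GCF or IEU standard:

```python
from dataclasses import dataclass

@dataclass
class IndicatorProtocol:
    """A pre-specified rule for measuring one M&E indicator (hypothetical schema)."""
    name: str               # what is being measured
    definition: str         # precise definition of the indicator
    unit: str               # unit of measurement
    computation: str        # how the value is computed from raw data
    data_source: str        # where the underlying data come from
    collection_method: str  # technology / field method used to collect the data
    frequency: str          # how often the indicator is measured
    responsible_staff: str  # who compiles and interprets the data

# Illustrative entry for a hypothetical climate-adaptation project
household_uptake = IndicatorProtocol(
    name="Household uptake of drought-resistant seed",
    definition="Share of targeted households that planted the distributed seed",
    unit="percent of targeted households",
    computation="100 * households_planting / households_targeted",
    data_source="Quarterly field-visit records",
    collection_method="Mobile survey forms completed by field officers",
    frequency="Quarterly",
    responsible_staff="Programme M&E officer",
)
```

Writing protocols down in this explicit form, whatever the tool, makes it easier to check that every indicator has an owner, a definition, and a measurement routine before data collection begins.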

Stakeholder engagement at the IEU's LORTA project in Zambia.

Understanding implementation quality is key to differentiating between implementation failure and design failure. Knowing which of the two explains a lack of impact, in turn, helps identify the appropriate remedy.

Third, what can regular and good programme monitoring and evaluation offer to impact evaluation?

  • Programme monitoring has the potential to provide factual, quantitative, and qualitative information about the quality and nature of programme implementation. Most real-world development programmes are not implemented as originally planned due to logistical and political challenges. Understanding the original plan and how the programme has deviated from or adhered to it (i.e. understanding implementation fidelity) is key to attributing observed changes in outcomes to the programme. Monitoring data can provide this information about programme implementation, such as the nature and quality of the service provided and the type and attributes of beneficiaries.
  • Regular programme evaluation provides routine but salient data about whether output targets have been met. As part of this, process evaluations can help assess implementation fidelity and success in targeting planned beneficiaries, validate key assumptions hypothesized in the theory of change, and provide important information about the field efficacy of planned implementation structures and processes. Programme evaluation can also help assess whether a programme has progressed enough for an impact evaluation to be worthwhile.

Fourth, how should one build an impact-evaluation-ready M&E system? Here are some key steps in the form of a checklist:

  • Articulate the theory of change with inputs from all the stakeholders including the evaluation team.
  • Validate salient links in the hypothesized theory of change in the field.
  • Identify key theories that are being applied in different parts of the theory of change.
  • Be explicit about assumptions.
  • Identify information/data needs.
  • Put in place monitoring and information systems to track inputs, activities, processes, and immediate outputs. Develop protocols for measuring these indicators: their definitions, frequency, sources, the data required, and the field methods and personnel needed to report, collect, and analyze the data.
  • Develop a management information system, and train relevant staff.
  • Think about measuring attributable change. Do an evaluability assessment and discuss why an impact evaluation is important (Is it for learning? Is it to scale up a programme? Is it to inform a new and innovative approach? Is it for accountability and reporting? Is it for developing a body of evidence?).
  • Identify intermediate outcome and potential impact indicators across program areas. These are informed by the theory of change.
  • Develop key indicators and protocols for measuring this data.
  • Use robust and regular qualitative data and approaches to inform, validate or understand exceptions to the overall theory of change.
  • Understand and analyze sources of bias (specifically, selection bias and programme placement bias).
  • Set up explicit or implicit counterfactuals (a stylized example follows this checklist).
  • Pay close attention to challenges to external validity: use disaggregated data and attend to important areas like unintended consequences for different target sub-groups, gender impacts, equity and heterogeneity in general.
  • Train program staff and M&E professionals in protocol development, data collection, methods, analyses and interpretation.
  • Analyze and understand design efficacy, implementation fidelity and causal impact.
  • Review, reflect and update the system based on impact evaluation findings.
  • Undertake cost and cost-effectiveness studies.
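To make the counterfactual and attribution steps concrete, here is a stylized sketch of a difference-in-differences calculation, assuming a simple two-group, two-period design with baseline and endline outcome data for participants and a comparison group. The numbers are entirely hypothetical, and this estimator is only one of several counterfactual approaches a LORTA-style evaluation might use:

```python
# Stylized difference-in-differences (DiD) sketch: compares the change in mean
# outcomes for programme participants with the change for a comparison
# (counterfactual) group between baseline and endline. All numbers are hypothetical.

def mean(values):
    return sum(values) / len(values)

# Hypothetical outcome data (e.g. monthly household income in USD)
treatment_baseline  = [110, 95, 120, 105, 100]
treatment_endline   = [150, 140, 165, 155, 145]
comparison_baseline = [108, 98, 115, 102, 104]
comparison_endline  = [125, 118, 130, 122, 120]

change_treatment  = mean(treatment_endline) - mean(treatment_baseline)
change_comparison = mean(comparison_endline) - mean(comparison_baseline)

# The DiD estimate nets out change that would have happened anyway,
# as proxied by the comparison group.
impact_estimate = change_treatment - change_comparison

print(f"Change in treatment group:  {change_treatment:.1f}")
print(f"Change in comparison group: {change_comparison:.1f}")
print(f"Estimated impact (DiD):     {impact_estimate:.1f}")
```

Such an estimate is only credible if the identifying assumptions hold (for example, parallel trends and the absence of the selection and placement biases flagged above), which is exactly why the checklist pairs counterfactual design with bias analysis and strong monitoring data.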

Conclusion

  • Real-world interventions are seldom implemented as planned.
  • Impact evaluation findings can quantify the magnitude of change caused by a programme and allow donors and policymakers to credibly claim success.
  • Good regular programme monitoring and evaluation can help explain why a programme worked or didn't. Programme M&E can help managers and policymakers course correct and manage adaptively.
  • Good monitoring and evaluation can help distinguish design failures and successes from implementation failures and successes.
  • Impact evaluations can be extremely useful in advocacy, in informing policy and strategy and in raising and leveraging additional resources.

(This article was first published on the website of the Independent Evaluation Unit, Green Climate Fund, and has been republished with permission.)
