Methods Archive


A Curated List of Our Postings on Technical Topics – Your One-Stop Shop for Methodology | Impact Evaluations

The World Bank offers "A Curated List of Our Postings on Technical Topics – Your One-Stop Shop for Methodology" on its Impact Evaluations blog.


How to build a theory of change for an impact evaluation | Video by Howard White

Speaker: Professor Howard White, Former Executive Director, 3ie

A comprehensive theory of change is integral to designing a high-quality and policy-relevant impact evaluation. In this video, Professor Howard White uses the example of a school feeding programme to illustrate the steps involved in building a theory of change for an impact evaluation.

For more videos with Professor Howard White and for lectures on other topics related to impact evaluations and systematic reviews, click here.

3ie’s How-To videos

3ie’s How-To videos on impact evaluation use a simple step-by-step approach to explain theoretical concepts. The videos draw on examples from impact evaluations to show you how to apply technical concepts.

The videos in this series explore various topics related to designing, implementing and using impact evaluations. A short quiz at the end of each lecture will help you assess your understanding of the subject covered in the video.


Participation in evaluation: blog series and webinar | Better Evaluation

How do you decide who participates in evaluation? What are the power dynamics at play? And how can participation be used to benefit people? This month, Better Evaluation (BE) is shining a spotlight on participation in evaluation, publishing blog posts on these key questions throughout the month. See Blogs | Better Evaluation.


Integrating Human Rights and Gender Equality in Evaluations | DME for Peace

Integrating Human Rights and Gender Equality in Evaluations | DME for Peace.

This Guidance is aimed not only at increasing knowledge of how to apply these two approaches in evaluation processes, but also at raising awareness of their specific relevance and significance for UN work. It complements the UNEG Handbook ‘Integrating Human Rights and Gender Equality in Evaluation: Towards UNEG Guidance’, an abridged version that outlines practical steps on how to prepare, conduct and use HR & GE responsive evaluations. The present document deepens each of these aspects and provides additional theoretical and applied information, tools and suggestions.


EuropeAid’s evaluation and results-oriented monitoring systems do not provide adequate information

EuropeAid’s evaluation and results-oriented monitoring systems do not provide adequate information on EU development expenditure results, says the European Court of Auditors.

Two of the key elements of the accountability framework operated by the European Commission’s Directorate-General for Development and Cooperation (EuropeAid) are its evaluation and results-oriented monitoring (ROM) systems. In its special report published today, the European Court of Auditors (ECA) is critical of the reliability of these systems.

Karel Pinxten, the ECA Member responsible for this report, said: “The demand for accountability for EU expenditure in all fields has never been higher. It is not good enough to report achievements in vague global terms. The Commission needs to have the building blocks necessary for a comprehensive reporting system which provides meaningful information for its own management and for its external stakeholders. One of these components is a strong evaluation system which feeds into an overall reporting process. At the present time, EuropeAid’s system is inadequate.” “The evaluations of projects and programmes which are organised by Commission delegations and carried out in partner countries are unsatisfactorily managed: overall supervision is inadequate, the amount of resources used is unclear and access to the results of these evaluations is lacking,” according to Mr Pinxten.

Most programme evaluations are carried out before the impacts and sustainability of measures can be ascertained. There is, generally, no requirement for ex-post evaluations and, as a result, these are rarely carried out. Indeed, whereas ROM contractors previously carried out ex-post exercises in a certain percentage of cases, this practice has recently been discontinued. There is therefore a serious lack of third-party assessment of impacts and sustainability. The auditors found thematic and country evaluations (strategic evaluations) to be better managed and more results-focused than programme evaluations. However, the absence of well-defined objectives and indicators frequently hampers the work of the evaluators and limits the usefulness of their work. In addition, the planned strategic evaluation programme for the 2007-2013 period was not executed in full.

The systems in place do not ensure that maximum use is made of the findings of the evaluations. Weaknesses were found in the follow-up not only of programme evaluations but also of strategic evaluations and ROM findings.

The detailed recommendations in the report are intended to pave the way for the necessary improvements. Given the considerable sums involved, with annual development expenditure in the region of €8 billion, it is imperative that robust evaluation systems are implemented without delay.

The full report is available as Special Report No 18/2014: EuropeAid’s evaluation and results‑oriented monitoring systems.


Evidence Principles | Bond

The Bond Evidence Principles and checklist are for assessing and improving the quality of evidence in evaluation reports, research reports and case studies.

They have been designed specifically for NGOs and can be used when commissioning, designing and reviewing evidence-based work. The principles help ensure that decisions about projects and programmes are based on the highest-quality evidence.

Bond is the UK membership body for organisations working in international development or supporting those that do through funding, research, training and other services.



Advocating for Evaluation: A toolkit to develop advocacy strategies to strengthen an enabling environment for evaluation

By: EvalPartners in collaboration with other partners

This work is inspired by PARIS21’s advocacy toolkit and aims to fill a gap at the bottom of the “capacity pyramid” (the enabling environment for evaluation) by advocating for the “demand side” of evaluation. The toolkit was jointly developed by EvalPartners, IOCE and UN Women in partnership with UNEG, UNICEF, the Ministry for Foreign Affairs of Finland and USAID. The final publication, “Advocating for Evaluation: A toolkit to develop advocacy strategies to strengthen an enabling environment for evaluation”, is now available for free download. The toolkit helps civil society organisations, VOPEs, governments and other development partners to:

  • learn how strategic advocacy can be leveraged to increase the demand for evaluation;
  • acquire essential skills to become an effective advocate for building an enabling environment for evaluation;
  • devise a long-term advocacy strategy to develop and implement equity- and gender-sensitive national evaluation policies and systems; and
  • respond quickly to seize unplanned advocacy opportunities to build a culture of evaluation.

We hope the toolkit is useful to members and partners. An e-learning tool, led by Marco Segone, is currently being developed to support learning from the document and its broader use.



UNEG Resource Pack on Evaluation

This UNEG interactive resource kit contains both a guidance note and a practitioner’s toolkit. It covers joint evaluation management and governance structures, and outlines recommended steps for undertaking joint evaluation, including consideration of gender and human rights issues. See UNEG News and Updates.


Storytelling: Most Significant Change (MSC) Workshop, Berlin, Germany, 21–22 July 2014


Sick of reporting on just numbers and outputs? Do you value useful information on project impacts, including unexpected ones? Do you want to hear, in people’s own words, how you have made a difference to their lives and what is important to them?

A two-day course in the Most Significant Change (MSC) technique in Berlin (21–22 July) will provide you with the knowledge and a plan to meet these needs.

MSC is a story-based participatory monitoring and evaluation technique ideally suited to providing qualitative information on project/program impact. MSC is also an excellent tool for encouraging stakeholder participation, surfacing stakeholder values, and fostering organisational learning and program improvement. It is used widely in international development and social change projects.

This is a unique opportunity to receive MSC training in Germany from an experienced MSC professional who has delivered the training more than 30 times.

If you are interested in further information, contact Theo Nabben or see the workshop flyer (PDF, 734 KB).


IEG’s Evaluation Capacity Development Website

The Independent Evaluation Group (IEG) of the World Bank Group hosts an Evaluation Capacity Development webpage. It features resources to help build evaluation knowledge and skills, “how-to” guides for designing and managing the evaluation process, and help in using evaluations as a means to improve project performance.

Evaluation Capacity Development (ECD) efforts strengthen and sustain the capacity of both individuals and organizations to:

  • Access, build, and implement evaluative knowledge and skills;
  • Design, implement, and manage effective evaluation processes and findings; and
  • Use evaluations as a performance improvement tool.

The site also offers ECD Working Papers and ECD Guides.

IEG also provides Resources for Evaluators and Networks.
