Evaluation reports frequently blame poor monitoring data for preventing a full assessment of programme outcomes. Meanwhile, programme staff often complain that evaluations come too late, happen too infrequently, and yield little useful information. This illustrates a common problem: the disconnect between monitoring and evaluation reduces the effectiveness of both.
A new paper from the DCED addresses this challenge and explores the synergies between monitoring and evaluation (M&E), using the example of the DCED Standard, a widely used results measurement framework. Why should evaluators be interested in monitoring? How can monitoring systems support evaluations, and vice versa? Who is responsible for what, and what are the expectations of each?
Download: Why_Evaluations_Fail_Aug2014.pdf (PDF, 650 kB)
Source: The Donor Committee for Enterprise Development