A discussion of impact evaluation in development attracts 200 participants to the 3ie-LIDC symposium

The room was packed at the 3ie-LIDC Symposium, held on 23 May at the London School of Hygiene and Tropical Medicine on the theme of ‘Thinking out of the black box - randomised controlled trials, mixed methods and policy influence in international development’.
The event was opened by LIDC’s Director, Professor Jeff Waage, who stressed the importance of evaluating the impact of development interventions and expressed LIDC’s willingness to work with 3ie (the International Initiative for Impact Evaluation) on a series of events on the topic.
Professor Howard White, Executive Director of 3ie, spoke about the importance of understanding the context of a development intervention and of ‘unpacking the causal chain’, especially with respect to complex interventions.
 
He emphasised that measurement is not the same thing as evaluation: evaluation is about knowing ‘what worked’. He gave a range of striking examples of how simple measurement without evaluation can give a false picture of a programme’s effectiveness, as in the case of the Bangladesh Integrated Nutrition Programme (BINP).
 
The assumptions underlying a development programme should also be unpacked, as they often lead to mis-targeted interventions. For instance, a nutrition intervention may target mothers on the assumption that they will use the new knowledge to change their children’s diets, overlooking the fact that it is men who do the shopping and mothers-in-law who hold decision-making power in the household, and who should therefore be the subjects of the intervention.
 
Another frequent assumption is that acquired knowledge will be applied immediately, when in fact behaviour change is a long and gradual process. Howard White gave examples of numerous projects where no evaluation was conducted at all, such as school capital grant programmes that did not capture how the money was used, and behaviour change communication interventions that did not monitor whether behaviour had actually changed.
 
Too often monitoring is in place but does not track the right variables. A case in point is a loan project for self-help groups in India, where monitoring counted the number of groups formed but did not capture the fact that many groups disintegrated quickly because they became dominated by literate, higher-caste members, alienating those from lower castes. The speaker went on to discuss other examples of absent or insufficiently robust monitoring and impact evaluation of projects in various parts of Asia and Africa.
Nancy Cartwright, Professor of Philosophy at the London School of Economics (LSE), talked about why evidence does not always ‘travel’ to improve policy. When we talk about ‘what works’, we usually refer to one of three types of causal claim: it works somewhere; it works, in the sense of playing a causal role, somewhere; or it works here. Understanding which claim is being made is essential.
She gave the example of California’s decision to reduce class sizes in order to improve results: the programme failed, as class size was only one of the factors, or ‘ingredients’, necessary for improving educational outcomes. Nancy Cartwright stressed the importance of context: some things will work in one country or community but fail in others. The same cause does not trigger the same effect everywhere.
The presentations were followed by a lively discussion around issues such as how to evaluate policy impact, how to take the specificities of a local context into account, and how to combine qualitative and quantitative methods.
In conclusion, Howard White said that a good evaluation should offer a narrative of what happened: what the intervention was, what it did and how it did it.