36 Gordon Square
London WC1H 0PD

T +44 (0) 20 7958 8251
admin@lidc.bloomsbury.ac.uk

LIDC conference calls for more inter-sectoral and interdisciplinary collaboration to better understand the impact of higher education for development

On 19-20 March 2012, LIDC and the Association of Commonwealth Universities (ACU) held an international conference, ‘Measuring impact of higher education for development’. More than 100 experts in international development, higher education and impact evaluation took part in this two-day series of presentations and discussions at Birkbeck College, representing a range of disciplines and sectors: academia, donors and funders, policy-makers and civil society.
Higher education (HE) has been relatively neglected compared to basic education, both in terms of investment in interventions and the evidence base on what works. By establishing dialogue between the HE sector, the international development sector and the evaluation community, the LIDC-ACU conference aimed to facilitate their collaboration towards better evaluation of HE interventions.
Session 1 raised the question of what the development intention of HE interventions ought to be. Hilary Perraton (ppt) from the London International Development Centre presented the results of a baseline study produced by LIDC specifically for the conference: a review of the methods used to evaluate scholarship and capacity-building programmes. Tom Wingfield (ppt) from the UK Department for International Development (DFID) presented a donor’s view of the reasons for evaluating HE interventions, and Voldemar Tomusk from the Open Society Foundation gave the perspective of a private funder. Joseph Gafaranga (ppt) from the University of Edinburgh talked about the Rwanda-Scotland Higher Education programme (RSHEP) from the perspective of the Southern partner.
Session 2 focused on approaches to impact evaluation for HE. Ad Boeren (ppt) from the Netherlands Organisation for International Cooperation in Higher Education (NUFFIC) talked about the purposes of, and methods for, evaluating scholarship programmes. Sarah Vaughan, British Council, and Liam Roberts (ppt), ACU, presented a case study of evaluating the DelPHE Programme. Marta Tufet (ppt), Wellcome Trust, and Sonja Marjanovic (ppt), RAND Europe, talked about evaluating complex development interventions in real time, using the example of the African Institutions Initiative (AII).
In the second part of Session 2, Tim Unwin, Chair of the Commonwealth Scholarships Commission (CSC), talked about evaluating the Commonwealth Scholarships programme. Elliot Stern, Professor Emeritus at Lancaster University and Visiting Professor at the University of Bristol, raised thought-provoking issues around the theoretical aspects of impact evaluation. Imelda Bates, Liverpool School of Tropical Medicine, focused on evaluating the impact of capacity development.
On the second day of the conference, Philip Davies (ppt) from the International Initiative for Impact Evaluation (3ie) provided definitions of impact evaluation and highlighted the range of methods it employs.
In Session 3, participants worked in small groups to reflect on the challenges of designing evaluations of HE interventions for development and the research that might be needed to tackle them.
During the two days of discussion, there was a general consensus that HE investment is beneficial to the economy and society, but that its development value, e.g. in reducing poverty in low- and middle-income countries, is less clear. Historically, investment has served many objectives, but not necessarily development.
Where development is an objective, HE investment lacks a convincing “theory of change”, particularly for how investments at the individual, organisational or institutional level lead to development outcomes and positive societal impacts. The relationship between investment in individuals and effects at the institutional and societal level is particularly poorly understood.
Evaluation of HE programmes has typically focused strongly on outputs (e.g. individuals trained to a certain standard), far less on outcomes in terms of measurable development contributions from individuals or institutions, and even less on societal impacts. There has been more evaluation of training and scholarship programmes than of other kinds of intervention.
Evaluation has developed independently for different kinds of HE intervention for development and independently in different sectors (e.g. agriculture, health), with little sharing of experience or effort to compare the value of different approaches.
Evaluation of HE for development has been strongly Northern-driven and associated with donor “value for money”; Southern perspectives on “what works” are rarely sought.
The conference concluded that interdisciplinary and inter-sectoral working are essential for robust evaluation. This means bringing together not only the different disciplines but also the different actors (e.g. donors, practitioners, users), using a common language (and plain English) in which to talk about evaluation, and creating networks between academics and field-based workers. There is also a need to be aware of funders’ meta-purposes, as political agendas often lie behind HE schemes and affect their outcomes.

Additional resources:
Download the conference programme
To download speakers’ presentations, click on their names in the text above.
Download the Conference Summary and Recommendations two-pager