Does our aid have the impact that was anticipated? How much does it change the lives of the world’s poorest people and for how long? What works, what doesn’t and why? And, what could we do differently?
We conduct evaluations across our programmes to find answers to these questions, to help improve the quality of our investments and to shape future programme design and implementation. One example is the independent evaluation of a major community development programme in the DRC (the Tuungane Programme, Swahili for ‘let’s unite’), which produced some interesting results.
The Columbia University evaluation team found that the programme has helped communities prioritise and manage the development of a range of vital local infrastructure projects (health centres, schools, roads, water points) that community members and local officials have widely praised. DFID staff and ministers have consistently been impressed by the way the programme empowers communities to take charge of their own development, something that is at the heart of the Prime Minister’s Golden Thread narrative (which I will come back to in a future blog), and a central pillar of all new DFID programming in the DRC. The International Rescue Committee (IRC) and its staff, who work in challenging – and often dangerous – conditions, have seen the benefits first-hand.
However, when the Tuungane programme was created after the 2009 peace settlement, it was designed as a post-conflict programme that would help communities recover from years of conflict, strengthening social cohesion and ultimately governance. Yet disappointingly, under the terms of the evaluation, the evaluators found no evidence of a ‘Tuungane effect’ on social or behavioural change when compared with other communities.
It is still unclear why this key programme, so widely regarded, is not producing the kind of change that was anticipated. So what has happened? Were we over-ambitious from the start? Was the design flawed? Is there another explanation for the positive outcomes in control communities? The truth is that we don’t know yet and we are working hard to find out; it is probably a combination of all of the above.
Knowledge is power
What we do know is that the findings of this evaluation are important not just for us but for international development efforts generally. We know that we need to share knowledge and learn from it. IRC and the DFID team are working together to proactively communicate what we have learnt and extract lessons to contribute to the improvement of this and other development programmes.
We need to learn to share these kinds of lessons without undermining the case for international development. There is often more to learn from failure than from success, and not everything that looks like failure should reflect badly on DFID, or on development. Publicly showing that we are serious about continuous learning and about improving the impact of our investments will increase our credibility and help us hold our heads high in the confidence that we are doing the very best with taxpayers’ money.
We need to continue to encourage our partners and other donors to study and publicise not just the stories of success, but also to be prepared to talk about – and learn from – the bad news.
We are not alone. Tim Harford, the ‘undercover economist’, published an article in the Financial Times on the Tuungane evaluation last year in which he praised the willingness of those involved to commission a study of this type. His latest book, ‘Adapt: Why Success Always Starts with Failure’, supports precisely this idea and, to my mind, should become a key development primer.
It is tough for us to talk about failure. In my next blog I will come back to this by looking at what others in the development sector are doing to tackle the challenge.