https://dfid.blog.gov.uk/2013/01/22/are-we-having-any-impact/

Are we having any impact?

Does our aid have the impact that was anticipated? How much does it change the lives of the world’s poorest people and for how long? What works, what doesn’t and why? And, what could we do differently?

We conduct evaluations across our programmes to find answers to these questions, to help improve the quality of our investments and to shape future programme design and implementation. One example is the independent evaluation of a major community development programme in the DRC (the Tuungane Programme, Swahili for ‘let’s unite’), with some interesting results.

Children in a school constructed by the community in Monaria in eastern DRC as part of the Tuungane programme. Picture: Susan Schulman

The Columbia University evaluation team found that the programme has helped communities prioritise and manage the development of a range of vital local infrastructure projects (health centres, schools, roads, water points) that community members and local officials have widely praised. DFID staff and ministers have consistently been impressed by the way the programme empowers communities to take charge of their own development, something that is at the heart of the Prime Minister’s Golden Thread narrative (which I will come back to in a future blog), and a central pillar of all new DFID programming in the DRC. The International Rescue Committee (IRC) and its staff, who work in challenging – and often dangerous – conditions, have seen the benefits first-hand.

However, when the Tuungane programme was created after the 2009 peace settlement, it was designed as a post-conflict programme that would help communities recover from years of conflict, strengthening social cohesion and ultimately governance. Yet, disappointingly, the evaluators found no evidence of a ‘Tuungane effect’: under the terms of the evaluation, Tuungane communities showed no greater social or behavioural change than comparison communities.

It is still unclear why this key programme, so widely regarded, is not producing the kind of change that was anticipated. So what has happened? Were we over-ambitious from the start? Was the design flawed? Is there another explanation for the positive outcomes in control communities? The truth is that we don’t know yet, and we are working hard to find out why; it is probably a combination of all of the above.

Knowledge is power

What we do know is that the findings of this evaluation are important not just for us but for international development efforts generally. We know that we need to share knowledge and learn from it. IRC and the DFID team are working together to proactively communicate what we have learnt and extract lessons to contribute to the improvement of this and other development programmes.

We need to learn to share these kinds of lessons without undermining the case for international development. There is often more to learn from failure than from success, and not everything that looks like failure should reflect badly on DFID, or on development. Publicly showing that we are serious about continuous learning and improving the impact of our investments will increase our credibility and help us hold our heads high in the confidence that we are doing the very best with taxpayers’ money.

We need to continue to encourage our partners and other donors not just to study and publicise the stories of success, but also to be prepared to talk about – and learn from – the bad news.

We are not alone. Tim Harford, the ‘undercover economist’, published an article in the Financial Times on the Tuungane evaluation last year in which he praised the willingness of those involved to commission a study of this type. His latest book, ‘Adapt: Why Success Always Starts with Failure’, supports precisely this idea and, to my mind, should become a key development primer.

It is tough for us to talk about failure. In my next blog I will come back to this by taking a look at what others in the development sector are doing to tackle this challenge.

4 comments

  1. Hannah

    Thanks Pete for your bravery in being open about when programmes or policies don’t do as well as we expect… I fully agree with you that we should be just as open about these instances as we are about successes. I wrote a similar post last year (see http://blogs.dfid.gov.uk/2012/04/creating-a-climate-in-which-were-able-to-fail/) but didn’t have a powerful DFID example like you did! Being open and transparent is the future we need to embrace now.
    Hannah

  2. Shalini

    Great to read about ‘what does not work’ – a starting point for serious investment in success. I am particularly interested in how you changed or adapted the programme as a result of the evaluation. Working in a post-conflict/active conflict context myself, I am looking for examples of how to design programmes that contribute to peacebuilding.

  3. Nick

    Good article and message of addressing shortcomings. Do you have any preliminary thoughts on why Tuungane is failing to achieve community coherence and governance? And in what sense do you mean governance? You’d hope that if the communities are taking over responsibility for and control of local infrastructure planning and roll-out, then ‘governance’ would improve by default. If, on the other hand, they are acting in favour of their kith and kin, then there would be a wider governance problem, as well as one of sustainability etc. Thanks.

  4. Pete Vowles

    Sorry for the delay – a few replies:

    Hannah: thanks for your comments. One of the positive aspects of the Tuungane evaluation was that it has led to much more discussion, debate and scrutiny – as well as learning – than a more positive evaluation would have.

    Shalini: we made a number of changes as a result of the evaluation, including an increase in grant sizes, which was one of the key recommendations of the evaluators. It is important to point out however that by the time the evaluation results were published, Tuungane had already undergone significant changes and was in the early stages of implementing phase 2, which placed much more emphasis on governance and accountability issues, through the introduction of, for example, community scorecard methodologies.

    Nick: I would be specific and say that the evaluation failed to identify any difference between Tuungane and non-Tuungane communities according to key social and behavioural measures (these are specified on page 12 of the evaluation report). One example is ‘Individuals in Tuungane communities will exhibit higher levels of acceptance of others into their communities’. As Elisabeth King commented in her recent presentation based on a review of Tuungane and other CDR programmes (report coming soon), social science is not rocket science – it is in fact far more complicated in some respects because it is difficult to predict what will catalyse behaviour change at the level of the individual, the community and society. So there are no easy answers to the question that you pose on why the evaluation showed limited impact, but we will continue to seek them through monitoring and evaluation using a range of different methodologies.

