The word “evidence” conjures many emotions. In fact, I can almost hear the audible sigh as you read the title of this blog. But if you think about it, we use some form of evidence every day of our lives.
Almost every decision I have ever made had some element of evidence (albeit sometimes questionable) contributing to it. For example, when I went out for dinner last week, I checked on TripAdvisor to see what other people thought of the restaurant. Whilst doing this I also assessed the quality of the evidence and considered whether I could trust it or not. One person’s view was not sufficient to convince me to go to the restaurant. However, since one hundred or so people had reviewed the restaurant positively, I took comfort that the overall rating probably reflected the restaurant’s performance quite well.
DFID published the Multilateral Aid Review in March 2011 assessing the effectiveness of funding we give to international organisations like UN agencies. It is now the benchmark against which we measure the effectiveness of multilaterals. The MAR assessed the value for money for UK taxpayers of 43 multilateral organisations, with each organisation being assessed against a set of criteria ranging from transparency and accountability to good partnership behaviour.
DFID is doing a Multilateral Aid Review Update this year to assess the progress being made in those areas which were highlighted as reform priorities in the MAR 2011. For the Update we are using evidence in a similar way to my Saturday night dinner choice, albeit the sources and the quality of that evidence are much more intricate and rigorous.
The many sources of evidence we are using in the MAR Update include independent evaluations, evidence from DFID country offices, evidence from NGOs and evidence collected on country visits. The reason we collect so much is that it means we can triangulate the various sources, allowing a concrete judgement of performance to be made based on the consistency of the information.
As we are using so many different kinds of evidence, we also rate their quality. In this way we can easily assess which ones are more robust. For example, evidence from one source may be of poor quality, meaning we cannot make a meaningful judgement from it about the performance of an organisation. The way to counteract this is to counterbalance it with stronger sources of evidence.
Weak evidence is still useful because, although we cannot make judgements solely from it, we can use it as one source when triangulating evidence across multiple sources. So one country office alone may not be sufficient to judge the performance of a particular multilateral organisation, but if it is saying the same things as five other country offices the assertion becomes much stronger. If these assertions are also reflected in an annual report or an independent evaluation, then the assertion is further validated. The more sources we use, the more robust the evidence is and the more we can trust that our assessments will stand up to possible challenge.
To use the restaurant example again: if one friend tells me a restaurant is bad but doesn’t give me good reasons for this, her evidence is of low quality. However, if three other friends and a magazine review also say that the restaurant is rubbish, then my friend’s weaker evidence is still valuable in the triangulation process.
A lot of attention is paid to gathering as much strong evidence as possible. For the reasons given above, the more evidence we are able to gather, the better our assessment of multilateral organisation performance can be. And these assessments are important because they feed into decisions on whether an organisation is offering value for money for the UK taxpayer.
By making sure the evidence we use is as robust as possible, we are ensuring the money UK taxpayers contribute to UK development assistance is spent effectively, helping as many people as it can. Or, alternatively viewed – a meal in the nicest restaurant you can afford on a budget.