How can development donors be sure they’re getting their money’s worth?

OPM author/contact: Alex Jones
Type: Opinion
Date: February 2016

How do donors in international development make sure they are not wasting money? This is harder than it may initially seem. Money is spent thousands of miles away from headquarters, in settings where information is poor, politics are complex and staff turnover is rapid.

When mistakes are made, the international media seizes on the poor use of public funds, putting pressure on governments to cut their aid budgets. More importantly, the missed opportunities to spend aid well represent huge losses in potential improvements to people’s lives.

But what does good spending look like? It’s not always obvious.

Any donor expenditure involves a trade-off. The finite money and resources available mean decisions have to be made over which areas to invest in, and which not to. When donors spend money in Sierra Leone, for example, they have less money to spend in Ethiopia. When donors spend money on health, they have less money to spend on education.

These are the opportunity costs of donor spending – the forgone benefits of not choosing alternative courses of action. If money is spent well, those benefits are smaller than the benefits actually achieved. If money is wasted, however, the reverse becomes true. In the international development sphere, the picture is complicated further by two significant conceptual challenges that donors face:

1. There is no established social welfare function that captures benefits across numerous sectors – and donors generally work in numerous sectors. What metric should be used to compare the benefits of a programme to develop private sector competition in Malawi to the benefits of a programme to improve maternal health in Sierra Leone?

2. In order to understand opportunity costs, donors need to examine the counterfactual – what would have happened instead? In contexts with abundant data, this is a matter of statistical technique (and occasionally trickery). In low and middle income countries, it can be mere speculation.
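To make these two ideas concrete, here is a minimal sketch in Python of how a donor might score a chosen programme against the forgone net benefit of the best alternative. Everything in it – the programmes, the benefit figures and the counterfactual baselines – is an invented assumption for illustration, not real donor data.

```python
# Illustrative only: benefits are in arbitrary "welfare units" and the
# counterfactual baselines are assumed rather than estimated from data.
programmes = {
    "health_sierra_leone": {"cost": 1_000_000, "benefit": 1_500_000, "counterfactual": 200_000},
    "education_ethiopia":  {"cost": 1_000_000, "benefit": 1_300_000, "counterfactual": 100_000},
}

def net_benefit(p):
    """Benefit attributable to the programme, over and above what would
    have happened anyway (the counterfactual), minus its cost."""
    return p["benefit"] - p["counterfactual"] - p["cost"]

chosen = "health_sierra_leone"
opportunity_cost = max(
    net_benefit(v) for k, v in programmes.items() if k != chosen
)  # the forgone net benefit of the best alternative use of the money

print(f"Net benefit of chosen programme: {net_benefit(programmes[chosen]):,}")
print(f"Opportunity cost (best forgone alternative): {opportunity_cost:,}")
# Money is spent well when the first number exceeds the second.
```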

In practice, many donors make use of Value for Money (VfM) analysis to assess whether they are wasting their money and justify spend to taxpayers back home. Donor VfM analysis is often based on a reasoned trade-off between the three Es – economy, efficiency and effectiveness.

Economy reflects the extent to which inputs are bought at competitive prices – for example, could textbooks be bought more cheaply elsewhere? Efficiency reflects the extent to which these inputs are converted into outputs – for example, how many textbooks are being bought, and how much is being spent per child now using a textbook? Effectiveness reflects the extent to which the outputs generate outcomes – for example, are children learning more as a result of these textbooks? The idea is that programmes with poor VfM are identified, and then addressed either by resolving the problems or by cutting the programme.
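As a rough illustration of how the three Es translate into numbers, here is a minimal Python sketch built around the textbook example above. All of the figures – prices, coverage and the learning effect – are invented assumptions, not DFID benchmarks.

```python
# The three Es for the textbook example. All figures are invented.
spend = 50_000              # total spent on textbooks (GBP)
textbooks_bought = 20_000
benchmark_price = 2.00      # assumed competitive market price per book

# Economy: are inputs bought at competitive prices?
unit_price = spend / textbooks_bought
print(f"Economy: paid {unit_price:.2f}/book against a benchmark of {benchmark_price:.2f}/book")

# Efficiency: how well are inputs converted into outputs?
children_using = 16_000
print(f"Efficiency: {spend / children_using:.2f} spent per child now using a textbook")

# Effectiveness: are the outputs generating outcomes?
learning_gain = 0.15        # assumed gain in test scores (standard deviations)
print(f"Effectiveness: {learning_gain / (spend / 1_000):.4f} SD gained per 1,000 spent")
```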

In my view, however, donors could go even further, and generate more useful information to inform their decisions. Along with the International Decision Support Initiative (iDSI), we compared the recently published Reference Case (RC) for Health Economic Evaluations in low and middle income countries to the UK Department for International Development’s (DFID) VfM analysis methodology. We asked if the VfM methodology could learn lessons from the RC for economic evaluation, and these were our main conclusions:

Be more transparent

What is actually being asked?

DFID has made significant progress in some aspects of transparency. In many cases it is now possible to know how much has been spent on what programme and what the objectives were. It is less easy to find out what the results were, and we identified some important questions that need addressing:

Is DFID concerned with the VfM of its own expenditure only? Of all aid? Of all social expenditure? Of total expenditure? The answer determines the scope of relevant evidence, yet at present the perspective from which DFID conducts its analysis is unclear. Nor is it clear at what point a programme becomes good or bad VfM, other than through a reasoned assessment of the three Es – leaving everything to the judgement of the evaluator, with no standardised checks for consistency.

More broadly, DFID’s VfM analysis appears to focus on technical efficiency rather than allocative efficiency. It will identify whether a programme could produce the same outputs at a lower price, or more outputs at a given price. For example, is a programme buying branded drugs when it could be buying generics? It will not, however, identify whether a programme is targeting the socially optimal outcomes. For example, is a programme focusing on HIV/AIDS when the main problem is malaria? Allocative efficiency analysis is needed for this, and DFID should clarify the distinction between its approaches to technical and allocative efficiency.
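The distinction can be shown with a small sketch. The first half asks the technical-efficiency question from the text (branded versus generic drugs); the second asks the allocative one (HIV/AIDS versus malaria). The prices, effect sizes and disease burdens are all assumptions made up for illustration.

```python
# Technical efficiency: could the same output be produced more cheaply?
branded_price, generic_price = 4.00, 0.80   # assumed price per treatment course
courses_needed = 10_000
saving = (branded_price - generic_price) * courses_needed
print(f"Switching to generics saves {saving:,.0f} for an identical output")

# Allocative efficiency: is the programme targeting the right problem?
# Hypothetical health gains (DALYs averted) per 1,000 spent, by disease area.
gain_per_1000 = {"hiv_aids": 3.0, "malaria": 9.0}
best = max(gain_per_1000, key=gain_per_1000.get)
print(f"Per 1,000 spent, {best} averts {gain_per_1000[best]} DALYs; "
      f"a technically efficient HIV/AIDS programme may still be the wrong choice")
```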

Assess all the relevant evidence

Currently, DFID’s VfM analysis is evidence-informed, but which evidence is most relevant is not explicitly prescribed. This raises a set of questions around, among other things, the types of benefits and costs that should be included in the analysis, how they should be compared, and the timeline over which they should be considered.

As a rule of thumb, to ensure a comprehensive assessment of opportunity costs, all costs and benefits that differ between alternative uses of an investment should be evaluated. This includes future costs and benefits as well as indirect and non-financial ones. A key non-financial resource that is often overlooked is the stock of human resources needed to implement donor-funded programmes.
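One standard way to put future costs and benefits on a comparable footing with today’s is discounting. The sketch below shows the mechanics; the 3% discount rate and the cash flows are assumptions chosen purely for illustration.

```python
def present_value(flows, rate=0.03):
    """Discount a list of annual net flows (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Year 0: upfront cost; years 1-5: annual net benefits (arbitrary units).
net_flows = [-100_000, 25_000, 25_000, 25_000, 25_000, 25_000]
print(f"Net present value: {present_value(net_flows):,.0f}")
# A positive NPV means discounted benefits exceed discounted costs.
```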

All assessments should start by identifying the information needed to inform the overall decision. If this information turns out to be unavailable (a lack of household survey data measuring school attendance, for example), the analysis should highlight this, along with the assumptions made in its place. In short, the infeasibility of data collection should not mean the irrelevance of that data.

Target group compositions matter

Under DFID’s current methodology, the composition of target populations may be ignored altogether. For example, it may be clear that a reproductive and child health programme is targeting women and children, but what do we know about the characteristics of these women and children? Are there income divides? Religious divides? Racial divides? Geographic divides? Donor programmes target diverse groups of people, and it may be that a programme offers poor VfM overall but good VfM when targeted at specific sub-groups. Information on the composition of target populations is also useful when considering equity implications, scale-up of programmes, or replication elsewhere.

Capture equity implications

Currently, there is no widely accepted methodology for incorporating equity considerations into resource allocation decision making. If equity is to play a significant part in the distribution of DFID funds, there needs to be explicit guidance on how it should be monitored.

One potential method could be to combine the evaluation of target population compositions with equity implications. Outcomes could then be presented disaggregated by target population sub-group, helping to identify which groups within a programme’s target population are benefiting most, and which are being overlooked or left behind.
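A minimal sketch of what this disaggregation might look like, with invented groups and figures:

```python
# Disaggregating programme outcomes by target population sub-group.
# Groups, costs and outcome counts are invented for illustration.
records = [
    {"group": "urban", "cost": 200_000, "outcomes": 1_200},
    {"group": "rural", "cost": 300_000, "outcomes": 600},
]
for r in records:
    print(f"{r['group']:>5}: {r['cost'] / r['outcomes']:,.0f} spent per outcome achieved")
# Aggregate VfM can hide the fact that one group costs far more to reach;
# whether that cost is worth paying is an equity judgement, not a calculation.
```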

Prepare for uncertainty

The low quality and quantity of data used in analyses of DFID-funded programmes mean that any conclusions carry significant uncertainty. Failing to engage with this may mean that too much weight is given to recommendations we are not sure about, or that evidence is ignored altogether because it is considered too poor in quality to adjust for. Characterising uncertainty is a step towards making the most efficient use of the limited data available.

One way to do this could be to conduct deterministic sensitivity analysis on key variables and assumptions. This means holding all bar one variable constant, and recording the change in conclusions as that one variable is adjusted. While far from being a perfect solution (with genuine methodological concerns), it may highlight where unexpected events are likely to have the most damaging results and, in turn, where uncertainty is most important.
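Sketched in Python, a one-at-a-time deterministic sensitivity analysis might look like the following. The model, baseline values and ranges are all invented; the point is the mechanic of varying one variable while holding the rest constant.

```python
# One-at-a-time (deterministic) sensitivity analysis on an invented model.
baseline = {"unit_cost": 2.5, "coverage": 0.8, "effect": 0.15}
ranges = {"unit_cost": (2.0, 3.5), "coverage": (0.6, 0.95), "effect": (0.05, 0.25)}

def cost_per_outcome(p, children=10_000):
    """Illustrative model: spend per unit of learning gain achieved."""
    spend = p["unit_cost"] * children
    gain = p["effect"] * p["coverage"] * children
    return spend / gain

base = cost_per_outcome(baseline)
for var, (lo, hi) in ranges.items():
    # Vary one variable across its range, holding all others at baseline.
    results = [cost_per_outcome(dict(baseline, **{var: val})) for val in (lo, hi)]
    print(f"{var:>9}: cost per outcome ranges {min(results):.1f}-{max(results):.1f} "
          f"(baseline {base:.1f})")
```

In this invented example the conclusion is far more sensitive to the assumed effect size than to unit costs or coverage, which is exactly the kind of signal that tells an analyst where uncertainty matters most.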

Planning for the future

DFID’s VfM analysis tends to be retrospective. This is common for analysis of technical efficiency, which focuses on how well money is being spent.

One benefit of retrospective analysis is that it attempts to analyse what has actually happened, rather than what may happen given a sample. Given the high levels of uncertainty involved, this probably yields more accurate information on both actual costs and benefits. The downside is that by the time poor VfM is noticed, it has already been bought. To limit this damage, there need to be smooth links between planning for the future and VfM analysis.

A step in the right direction

DFID’s VfM analysis is a step in the right direction. It is possible to find out how much has been spent on a programme and what its objectives were. The next step is to better understand the results, and to compare the costs and benefits appropriately. Practical next steps are to clarify what evidence is necessary when conducting VfM analysis, and to highlight where information is missing, where assumptions have been made, and what the implications of the resulting uncertainty are. Putting these considerations at the forefront of VfM analyses will make the conclusions more informative, which may in turn improve decision making by development donors – helping them to make the most of the money they have.