Uncertainty: making decisions without all the evidence

In a guest post, RTI International's Matthew Jukes looks at how we can best make decisions without a conclusive evidence base

There is growing demand in international development for rigorous scientific evidence to show whether development policy is working. This includes an increase in randomised controlled trials (RCTs) and a call for clearer articulation of what constitutes rigorous evidence. But if a ‘conclusive’ evidence base is hard to secure, when is it preferable to make a decision without all the evidence?

Imperfect evidence

Those inverted commas around ‘conclusive’ highlight one of the main difficulties in evidence-based decision making. For most of the questions we want to answer, the standard of evidence required is difficult to achieve – a fundamental obstacle to justifying interventions. To take literacy programmes as an example, it is possible to get robust evidence on whether a complex programme improves students’ reading ability, but it is difficult to know for certain which parts of the programme – teacher training and incentives, textbooks, hundreds of classroom activities – were critical to its success. There is an understandable tendency for research to focus on pieces of the complex puzzle that can be isolated and evaluated separately. This means we often lack evidence on how the puzzle pieces fit together, and it can result in us implementing only the interventions that are measurable.

Any programme involving people (and therefore essentially every international development intervention) inherently involves variability and complexity. A pilot programme can be successful, but may not work in the same way at scale, in different locations, or at different times. Rigorous evidence of programme effectiveness in one place doesn’t always provide the basis for action in another.

Decision makers

Certainty is clearly elusive. Since most evidence bases fall below the high bar envisaged by the international development community, we have to think again about how to use evidence that falls short of it. This doesn’t mean lowering the bar, but thinking about how and when to act when the evidence is imperfect and decisions need to be made.

Evidence gatherers may increasingly be trained (albeit implicitly) to think like scientists – with rigorous, high-quality, watertight evidence as the end goal – but stakeholders still often need to make decisions based on imperfect evidence. At one extreme is the researcher who waits for more and more evidence, unwilling to make a recommendation while any uncertainty remains; at the other is the policymaker who, frustrated by incomplete evidence, decides to trust their instinct instead. Neither is ideal for effective, evidence-based decision making, and the framework we discuss below can help stakeholders think strategically about what constitutes a good decision when there is insufficient certainty.

When to act: consequences and uncertainties

The framework considers two elements in making a decision: consequences and levels of uncertainty. By being explicit about both when a decision has to be made, it is possible to weigh them against each other and judge when it is time to act.

  • Consequences

As we have seen, there is always a level of uncertainty when a policy decision has to be made. There is also much variation in the consequences of that decision. While we may not be certain that a particular outcome will happen, it is important to think about its significance if it were to happen. The word ‘significance’ might be substituted by ‘value’ or ‘severity’, depending on whether the outcome is positive or negative. That is to say, is the potential outcome extremely beneficial, extremely deleterious, or somewhere in between?

Such consequences are already routinely taken into account in some sectors. In drug trials, for instance, potentially severe negative consequences would rule some drugs out of a trial, even if the chance of those consequences occurring was relatively slim. That is to say, one considers the severity of the outcome as well as the quality of evidence on (and thus the certainty about) whether the outcome will occur.

How can this principle be used more broadly in international development interventions? Some development programmes are – on the face of it – relatively benign. No one was ever injured by being taught phonics. In such cases, we may be willing to act on evidence that is persuasive if not conclusive. The flip side is that we need to work harder to rule out potentially negative outcomes. Even apparently benign interventions may have serious side effects: education programmes can increase school-related gender-based violence or provoke a political backlash from teacher unions. Those who design evaluations often focus only on the intended benefits of the programme. Brainstorming and holding focus groups with recipients before the programme begins could help identify and assess unintended negative consequences. If we can rule them out, we have greater confidence to act on the uncertain evidence for the intended positive outcomes.

Other factors could make us more likely to act under uncertainty. Urgent decisions – for example, in humanitarian emergencies – often require action without solid evidence. To put it another way, in such situations there may be as much uncertainty about the outcomes of not acting as there is about taking action. We may also be persuaded to act on uncertain evidence if the evidence base is difficult to improve – possibly for ethical reasons or because the problem is so complex that it would require hundreds of studies to identify all the elements of a solution.

By incorporating an (evidence-based) understanding of broader consequences into the decision-making process, it is possible to act on uncertain evidence with greater confidence. This approach gives decision makers a way of assessing policy options that may not make it onto the recommended list of (usually simple, discrete) interventions in ‘what works’ reviews, but may offer better, testable solutions to the problem at hand.

  • Uncertainty

Uncertainty – the other element of this framework – is more commonly thought of as statistical uncertainty, represented by confidence intervals in impact evaluations. We need to move from thinking about uncertainty only in terms of impact estimates to considering any factor that could contribute to uncertainty about the outcomes of a decision.
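To make the idea of statistical uncertainty concrete, here is a minimal sketch in Python of the kind of figure an impact evaluation typically reports: a treatment effect estimate with a 95% confidence interval. The data, sample sizes and score scale are entirely illustrative assumptions, not taken from any real evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reading scores from a two-arm evaluation (illustrative data only)
treatment = rng.normal(loc=52.0, scale=10.0, size=400)  # pupils in the programme
control = rng.normal(loc=50.0, scale=10.0, size=400)    # comparison pupils

# Estimated impact: difference in mean scores
effect = treatment.mean() - control.mean()

# Standard error of the difference in means
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))

# 95% confidence interval (normal approximation)
low, high = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated impact: {effect:.2f} points (95% CI {low:.2f} to {high:.2f})")
```

The width of that interval is one, narrow, kind of uncertainty; the point of this framework is that it is far from the only kind.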

For example, there is uncertainty about generalisability: is a successful intervention likely to succeed elsewhere, in a different context? Typically, little time and money is spent assessing this type of uncertainty. Understanding the mechanisms by which a programme works, and the contexts that trigger those mechanisms, can help us think about this kind of uncertainty.

Uncertainty may also result from the methods used in evaluations. Much evidence falls ‘below the bar’ because it is not possible to evaluate a programme with an RCT or other causal design (some authors estimate this is the case for approximately 95% of international development interventions). In such cases, assumptions need to be made when attributing outcomes to the effects of a programme. We could do a better job of including such evidence in decision making if we made these assumptions explicit and collected data to estimate the likelihood of the assumptions holding.

This framework brings two elements into the process of decision making: uncertainties and their consequences. In the ideal situation, a decision maker considering a policy option would be provided with a list of the possible outcomes, the level of certainty we have about each outcome, and the value or importance associated with each outcome (sketched informally after the list below). Some proposed or ongoing areas of work can help us reach this goal:

  • Evaluators can conduct work prior to an evaluation to identify potential unintended consequences of a programme. Analysts can be explicit about all the sources of uncertainty in decisions and the likely outcomes of these decisions, quantifying them (if only approximately) where possible.
  • Targeted monitoring and evaluation can reduce uncertainty in the elements of the programme that have the biggest consequences.
  • Data collection can test assumptions in research methods for evidence that falls below the bar.
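As one way of picturing that ideal situation, the Python sketch below lists possible outcomes of a hypothetical policy option, attaches a rough probability (our level of certainty) and a value (its consequence) to each, and summarises them into an expected value alongside a flag for severe downside risk. The outcomes, probabilities and values are purely illustrative assumptions; this is a sketch of the framework’s logic, not a prescribed method.

```python
# Purely illustrative: possible outcomes of a hypothetical literacy programme,
# each with a rough probability (how certain we are it will occur) and a
# value score (positive = beneficial, negative = harmful).
outcomes = [
    ("Reading scores improve",                  0.60, +10),
    ("No measurable change",                    0.30,   0),
    ("Teacher-union backlash disrupts schools", 0.10, -30),
]

# Weigh each outcome by how likely it is and how much it matters
expected_value = sum(prob * value for _, prob, value in outcomes)

# Separately flag any severe negative consequence, however unlikely,
# mirroring how drug trials treat rare but serious harms
severe_risk = any(value <= -25 for _, _, value in outcomes)

print(f"Expected value of acting: {expected_value:+.1f}")
if severe_risk:
    print("Severe downside identified: investigate before acting")
else:
    print("No severe downside identified")
```

Even such a rough tabulation makes explicit where reducing uncertainty would matter most: in this made-up example, the rare but severe backlash outcome, not the headline impact estimate.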

This plan of action can reduce uncertainty in decision making, and help decision makers know what to do in the face of the uncertainty that remains.

This work is a collaboration between Matthew Jukes, fellow and senior education evaluation specialist at RTI International, and Anne Buffardi, senior research fellow at ODI. A presentation and podcast of this paper are also available.

This blog is a product of the joint CEDIL-CfE lecture series. CEDIL (the Centre of Excellence for Development Impact and Learning) is a DFID-funded centre which supports innovation in the field of international development impact evaluation. CEDIL is managed by Oxford Policy Management.
