Why do we need diagnostics?

Diagnostics represent a basis for policy that is specific to context, providing a systematic way of analysing causality and change. Umar Salam explains the need for complex diagnostics in policymaking.

Diagnostics represent a basis for policy that is specific to context, distinguishing between different sets of initial conditions and arriving at differential diagnoses; they provide a systematic way of analysing causality and change; and the process of conducting a diagnostic can itself engage key stakeholders, enabling empowerment and accountability.

Diagnostic projects are an important part of our work. Flagship projects, such as Economic Development and Institutions (EDI), Research on Improving Systems of Education (RISE), or Enhancing Diagnostics, are all focused on the construction of new diagnostics – as is our own ‘Thicker Diagnostic’ described in December’s In Focus. Work of this kind is conceptually ambitious: RISE explicitly sets out to create a ‘paradigm shift’, yet it also aspires to be practical in policy terms. This is a difficult but crucial balance to strike.

Appreciating the importance of context in diagnostics requires some historical framing, to recall the modes of development policy that diagnostics are often - though not always - a reaction against. Although it may seem glaringly obvious that context matters, much of development policy has been premised on the opposite assumption – that ‘best-practice’ solutions can be applied in a top-down and technocratic manner to different situations, with little or no sensitivity to local conditions. Such one-size-fits-all approaches typically emphasise targets, for example macroeconomic stability or school enrolments, which may well be desirable in general but whose shortfalls derive from very different underlying causes. Like Tolstoy’s unhappy families, development challenges are each challenging in their own way. And the more universally framed the policy, the less room there is for local participation and agency, and thus legitimacy.

Differential diagnosis

Diagnostics, therefore, represent a conscious alternative to such approaches. The early ‘growth diagnostics’ of Ricardo Hausmann, Dani Rodrik and Andres Velasco (HRV) were an explicit repudiation of the Washington Consensus-era Structural Adjustment Programmes of the 1980s and 1990s, which did indeed stipulate a more-or-less standardised menu of macroeconomic and fiscal objectives. The HRV diagnostic (2004) is based on the idea that, whilst there might be many reasons why a country’s economy fails to grow, each reason would generate a different set of symptoms. Thus, by carefully sifting through the evidence, the analyst might be able to perform a differential diagnosis of which of the many factors constraining the economy is the ‘binding constraint’ – the one which, if lifted, would lead to the most significant improvement. Is it under-investment in human capital? Or the poor state of infrastructure? Or a lack of available credit? Or low tax revenue? The diagnostic is a method for answering these questions, but in a way that depends entirely on local information. Whereas growth theory is about identifying general principles of economic growth of which countries are examples, growth diagnostics are about the countries themselves.

Causality and change

Looking a little deeper at what this method entails takes us to the second fundamental feature of diagnostics – the analysis of causality between mutually interacting factors. Formally, the HRV diagnostic starts by specifying a growth model in which the constraints are represented as distortions. A mathematical result (the ‘theory of second best’) says that in the presence of multiple distortions, eliminating any one will not necessarily be welfare-improving. Because of these ‘second-best’ effects – due to the interactions between the constraints – we cannot simply remove all the distortions simultaneously. Instead, we need a method to identify the binding one, which is given by a ‘decision tree’ – essentially a set of nested questions, the first of which is: why is the economy not growing? The secondary questions break this primary question down. In the original HRV model, one asks whether growth is constrained because the expected private return to asset accumulation is low, or because the cost of funds is high. These questions decompose in turn into further questions: is the low-return problem one of low social returns or one of low expected appropriability? And what lies behind the high cost of finance? Iterating several more times eventually produces the whole ‘decision tree’, at the base of which are relatively precise questions about individual constraints, with the different levels related causally to one another in a specified way. The final step is a comparison of constraints (based on estimating their shadow prices – the changes in the objective function due to changes in the constrained input), from which one obtains the binding constraint. So the guiding theme is one of change: the diagnostic can provide evidence as to which policy intervention will produce the greatest impact.
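To make the tree-and-comparison logic concrete, here is a minimal sketch in Python. The questions, structure and shadow-price values are entirely hypothetical illustrations of the idea, not HRV’s actual model or estimates.

```python
# Illustrative sketch of an HRV-style decision tree. All questions and
# shadow-price numbers below are hypothetical, for exposition only.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A diagnostic question; leaves carry an estimated shadow price."""
    question: str
    children: list["Node"] = field(default_factory=list)
    shadow_price: float | None = None  # change in the objective per unit
                                       # relaxation of this constraint

def binding_constraint(root: Node) -> Node:
    """Walk the tree and return the leaf whose constraint, if relaxed,
    yields the largest improvement (the highest shadow price)."""
    if not root.children:
        return root
    leaves = [binding_constraint(c) for c in root.children]
    return max(leaves, key=lambda n: n.shadow_price or 0.0)

tree = Node("Why is the economy not growing?", [
    Node("Is the expected private return to investment low?", [
        Node("Low social returns (human capital, infrastructure)?", shadow_price=0.8),
        Node("Low appropriability (taxes, expropriation risk)?", shadow_price=0.3),
    ]),
    Node("Is the cost of finance high?", [
        Node("Poor access to international savings?", shadow_price=0.5),
        Node("Weak domestic financial intermediation?", shadow_price=1.2),
    ]),
])

print(binding_constraint(tree).question)
# -> Weak domestic financial intermediation?
```

The design point is that the comparison is only meaningful at the leaves: the higher-level questions organise the search, while the shadow prices – which must be estimated from local evidence – decide which constraint binds.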

These themes are explored further in both the EDI ‘institutional diagnostic’ and OPM’s ‘Thicker Diagnostic’, albeit in different ways. As with growth diagnostics, both challenge a former orthodoxy – in this case, that ‘getting the institutions right’ is a good basis for development policy. The problem with ‘getting institutions right’ is twofold: first, the causality between ‘good institutions’ and development or growth is not straightforward, as there are plenty of countries with ‘bad institutions’ that successfully ignited growth and developed their institutions later; and second, because institutional change is poorly understood, we do not necessarily know how to ‘get institutions right’. This is evidenced by persistent failures to implement institutional reform or eliminate corruption, for example. These failures suggest that if you want to improve economic performance, it is first necessary to understand how economic decision-making is embedded in institutional and political economy structures – hence a context-specific diagnostic approach. Understanding how the ready-made garment sector has driven Bangladesh’s economic growth for the past few decades, for instance, requires piecing together the complex network of relationships between politics and business that sustains the sector (‘competitive clientelism’) and the ways in which this institutional configuration affects other institutional areas, such as the judiciary, tax system, banking and land.

Heuristic approach

What the institutional and thicker diagnostics do is provide (different) methods for collecting and then analysing evidence on these institutional and political economy factors. Where they differ from growth diagnostics is that they are (more) heuristic; there is no direct analogue of the growth model or the decision tree. There is no standard way to determine which institutional areas are most important, nor how to gather information on them – both diagnostics rely on a range of different sources and inter-disciplinary methods, including those from outside economics. Inevitably there is a trade-off between the ‘predictive’ quality of growth diagnostics and the richer or ‘thicker’ (the term is a reference to the work of anthropologist Clifford Geertz) analysis of both context and causality. As with growth diagnostics, the key challenge is the inter-connectedness of different factors, but here the interactions are even more complicated, and causality typically runs in both directions. For example, we know that political power can be used to shape the form and functioning of institutions, but also, reciprocally, that the functioning of institutions confers political power – an important example of a ‘feedback loop’. We know that formal institutions (laws, electoral rules and contracts) and informal institutions (social norms, voting strategies, forms of market exchange which do not rely on standardised rules and regulations) do not substitute for one another but co-exist. And we know that the de jure operation of institutions in principle is very different from their de facto operation in practice. Hence economic change is unlikely to be achieved by policy reform alone; it is constrained by political economy and social factors which settle into their own equilibrium. A crucial prerequisite for changing any system is an understanding of why that system arose in the first place and why it is stable now. Where a diagnostic may help is in seeing how feasible, achievable improvements in one institutional area may lead to much greater improvements elsewhere, and subsequently to positive reform in the system as a whole.
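As a purely stylised illustration of such a feedback loop, the toy model below lets political power and institutional quality each adjust towards a level set by the other. The linear forms and weights are arbitrary assumptions, chosen only to show mutual causation settling into an equilibrium.

```python
# A stylised feedback loop (purely illustrative, not an empirical model):
# political power shapes institutional quality, and institutional quality
# in turn confers power. Both variables lie in [0, 1].

def step(power: float, quality: float) -> tuple[float, float]:
    # Each variable adjusts halfway towards a level set by the other;
    # the 0.5 weights and linear forms are arbitrary assumptions.
    new_quality = 0.5 * quality + 0.5 * (1.0 - power)    # concentrated power erodes institutions
    new_power = 0.5 * power + 0.5 * (1.0 - new_quality)  # weak institutions concentrate power
    return new_power, new_quality

power, quality = 0.9, 0.2
for _ in range(50):
    power, quality = step(power, quality)

# The system converges to a point on the line power + quality = 1.
print(round(power, 3), round(quality, 3))  # -> 0.867 0.133
```

Notably, where the toy system ends up depends on where it starts – a small echo of the point that changing a system first requires understanding why it arose and why it is stable now.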

The RISE diagnostic

The RISE diagnostic is radical in nature and likewise challenges an orthodoxy. The programme starts with the observation that, despite years of steadily rising primary school enrolments (following the adoption of the Millennium Development Goals), learning outcomes for many children remained poor, even though those children were enrolled in school. Moreover, easily observable features of the inputs of the schooling system, such as expenditure per student, education of teachers, or average class size – while at times statistically significant – explained very little of the observed variation in learning outcomes (Glewwe and Muralidharan, RISE Working Paper 15/001). This suggests that policy focused on identifying an optimal set of inputs may be misguided – that there is no ‘one-size-fits-all’ solution. Instead, the RISE diagnostic takes a country-specific, ‘systemic’ approach, which first conceptualises the education system of a country – as a network of students, teachers, schools, regulators and ministries embedded in the political economy context – and then hypothesises why this system may or may not work well. Specifically, it focuses on ‘relationships of accountability’ between agents in the network, which may be mediated in a number of ways, such as supplying finance or information, or in ‘delegating’ or ‘motivating’. The key idea of the diagnostic is to ask whether these relationships of accountability are ‘coherent’ – consistently aligned with one another and in themselves – and the underlying hypothesis is that the system works well when there is a coherent flow of accountability overall. An incoherence might manifest itself within a particular relationship of accountability, such as when a ministry delegates an ambitious objective to schools but then provides inadequate finance to achieve it. Or it might arise between two different relationships of accountability, such as when the information on school performance provided across one relationship (e.g. budgetary information or exam results provided by schools to ministries) paints a different picture from that provided across another (e.g. the day-to-day experience of pupils communicated to their parents). Understanding when and where relationships are incoherent can be used to identify entry points for change. But the real insight of the diagnostic is to show how complex, structural aspects of causality can be addressed in a systematic way that is specific to context. In the case of RISE, the individual country studies are not only sited in a diverse range of countries, but concern very different aspects of the education system in those countries. The theory needs to be fitted to the context, not the other way around.
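As a toy illustration of the coherence idea – the agents, design elements and consistency checks below are invented for exposition and are not the RISE instrument – one can represent accountability relationships as simple records and scan for the two kinds of incoherence described above:

```python
# Toy representation of 'coherence' between accountability relationships.
# Agents, labels and the consistency rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Relationship:
    principal: str
    agent: str
    delegation: str   # what the principal asks the agent to do
    finance: str      # resources supplied for it
    information: str  # what the principal learns about performance

relationships = [
    Relationship("ministry", "schools",
                 delegation="universal literacy",
                 finance="inadequate",
                 information="exam pass rates look good"),
    Relationship("parents", "schools",
                 delegation="children learn to read",
                 finance="fees paid",
                 information="children cannot read"),
]

def incoherences(rels: list[Relationship]) -> list[str]:
    """Flag the two kinds of incoherence described in the text: within a
    relationship (ambitious delegation, inadequate finance) and between
    relationships (information channels that disagree about the same agent)."""
    problems = []
    for r in rels:
        if r.finance == "inadequate":
            problems.append(f"within {r.principal}->{r.agent}: "
                            f"'{r.delegation}' delegated without adequate finance")
    for a, b in zip(rels, rels[1:]):
        if a.agent == b.agent and a.information != b.information:
            problems.append(f"between {a.principal} and {b.principal}: "
                            f"conflicting pictures of {a.agent}' performance")
    return problems

for p in incoherences(relationships):
    print(p)
```

Running this flags both the within-relationship mismatch (an ambitious objective with inadequate finance) and the between-relationship mismatch (conflicting information about the same schools).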

Enabling diagnostics

Finally, diagnostics do not belong purely to either research or policy, but to the nexus in between. A good diagnostic is not simply an academic analysis nor is it only a policy tool - it should have characteristics of both, each of which should enhance the other. The research in a diagnostic should be used to interrogate policy, to ensure it's evidence-based and pragmatic rather than populist; but policy needs should also guide that research, keeping it relevant, problem-driven and not overly academic.

Done well, diagnostics should be conceptually innovative and yet also practical. Most of all they should deliver impact. But to deliver impact requires engagement with key stakeholders, an understanding of demand and a sense of political feasibility. Part of the design of a diagnostic should centre on engagement and a consideration of how the diagnostic, through its method and the dissemination of findings, enables agents to bring about change, and furthermore to understand what ‘change’ means in a local context and what the barriers might be to achieving it. Achieving change in the sense of stronger institutions or better learning outcomes is a different type of challenge to increasing the growth rate or primary school enrolments.

This distinction is what Lant Pritchett terms ‘an enabling diagnostic’ – one which positively recognises the autonomy of agents and the ‘thick’ nature of the problem to be solved, as contrasted with a ‘prescriptive’ or ‘logistics’ diagnostic, in which the problem is defined in ‘thin’ and measurable ways and the agents have a well-specified set of instructions for achieving their targets. The danger in applying a diagnostic approach is that one follows a route that is no less technocratic than earlier models: even if the diagnostic itself is designed to incorporate local knowledge, the structure into which that knowledge is incorporated is still determined and imposed from the outside. There are various practical steps that can be taken to avert this danger – using local partners whenever possible, pursuing iterative strategies and adaptive methods to take account of these iterations, and being flexible in how one applies the diagnostic in the first place – but there is no unique way to take these steps. That is why we at OPM place such a premium on learning from our experiences, so that we can continue to improve upon the design and implementation of diagnostic approaches to development challenges in the future.
