How to successfully adapt impact evaluations during covid-19

Four key learnings from our evaluation of the UNICEF-managed Mwangaza Mashinani pilot project


30-second summary

  • Impact evaluations cannot simply adapt to covid-19 by adjusting the way in which data is collected. A comprehensive redesign is necessary.

  • Valuable evidence on programme impact can and should (where appropriate) still be gained by rethinking the evaluation scope, survey mode and timeframe.

  • Additional insights into the effects of covid-19 can be gathered to inform swift evidence-based policy making.

The covid-19 pandemic has drastically altered the way programmes are implemented, as well as the evidence needs of the governments and donors backing these programmes. Such a large change to the evaluation context means that impact evaluations cannot simply adapt to covid-19 by adjusting the way data is collected. There is no quick or easy fix. Rather, impact evaluations need to be comprehensively redesigned by reconsidering their research questions and analytical objectives, as well as their data collection methods and results dissemination. Here are four key learnings from our evaluation of the UNICEF-managed Mwangaza Mashinani pilot project, which aims to enhance energy access for vulnerable segments of the population in Kilifi and Garissa counties, Kenya:

1) Decide whether the evaluation should go ahead.

The overarching principle guiding our evaluation decisions has always been to guarantee the health and safety of programme beneficiaries and our partner organisations' staff. The IEG discusses this from an ethical perspective in the time of covid-19. If the risk to individuals is low, there remains a strong argument for continuing with an adapted evaluation.

Results from 3ie’s survey of impact evaluation researchers in Africa stress that the evidence gathered through impact evaluations remains crucial in the short term for understanding the effects of the pandemic on individuals and households, as well as which interventions help mitigate the negative impacts of the crisis. In the medium term, it is important to gather evidence on programmes to understand their intended or unintended impacts on beneficiaries and whether they should continue, be interrupted or, in the case of pilots, be scaled up when the situation returns to a new normal.

2) Rethink and expand the research scope, taking into account the changing needs of beneficiaries, local government and donors.

This requires revisiting all aspects of the evaluation approach, as new public health rules and the social and economic restrictions imposed by governments may change the problems and priorities of beneficiaries. A new short-term objective is to gather real-time, policy-relevant evidence on the effects of the pandemic in the geographic areas covered by the evaluation. This includes assessing covid-19 awareness and knowledge, as well as the economic conditions and coping strategies of individuals and households. The medium-term objective is to ensure that key learning for ongoing programme implementation and scale-up can take place. Finally, longer-term learning on both covid-19 and programme impact can be added to the evaluation as additional stages as trends emerge.

Considering these revised objectives and timeline, the research scope set during the original evaluation design should be revisited to determine whether the focus of the research remains relevant to policy makers and whether additional research questions have emerged. In the case of the Mwangaza Mashinani evaluation, the value of gathering robust evidence during the pandemic was clear and made the case for adjusting the evaluation’s research scope and objectives:

  • The evaluation research redesign process took place in the early stages of the pandemic, when questions about the efficacy of awareness raising and people’s knowledge of covid-19 were important to answer.
  • Stakeholders involved in the implementation of the programme were still interested in understanding whether recipient households were benefitting from the intervention, more than a year after the start of the pilot project.
  • Finally, there was also strong interest in understanding whether households used the solar devices they received as part of the programme to cope with the shock.

3) Exploit the potential of remote data collection.

The shift in survey mode, from in-person to remote, is a crucial part of this pivoting exercise. Remote surveys are not a perfect substitute for in-person data collection, as discussed in this blog by the World Bank, but they do provide a robust means of collecting valuable data and help guarantee the health and safety of survey respondents and staff. Although there is an increasingly large pool of resources available on remote data collection (for example, by IPA here), we believe it is worth highlighting a few lessons emerging from our own recent experience:

  • Drastically reduce the number and type of questions asked. It is important to focus on questions that lend themselves to being asked over the phone and to prioritise data that needs to be collected now to address the revised research objectives. This can include programme data that will facilitate covid-19 adaptations or short-term expansions, and covid-19-related indicators that could immediately inform the policy response, such as awareness of covid-19 mitigation measures or indicators of children’s home learning activities.
  • Invest in appropriate survey management to improve data quality and response rates. Telephone-based surveys result in far more call-backs than the revisits undertaken as part of in-person surveys, which means that even modest sample sizes can become difficult to manage. Using specialised computer-assisted telephone interviewing (CATI) software allowed us to achieve a response rate of over 80% in the case of Mwangaza Mashinani, while ensuring that our remote survey followed best practice (a minimal sketch of this kind of attempt tracking appears after this list).
  • Link remote surveys to previous and/or subsequent in-person surveys. If the remote survey follows an in-person baseline survey carried out in the past, the samples covered by the two surveys should be comparable to allow over-time trends to be analysed (see the second sketch below). Similarly, to ensure that the evaluation continues to achieve its stated objectives, follow-up rounds of in-person data collection should be planned for and take place when it is safe to do so. This will also enhance the richness of the evidence gathered as part of the evaluation. In the Mwangaza Mashinani evaluation, the planned in-person endline survey was replaced by a remote mobile phone survey, which became a midline within a revised evaluation timeline.
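
To make the call-back point concrete, here is a purely illustrative Python sketch of how call attempt tracking and response-rate monitoring might work. This is not the CATI software used in the evaluation; the household IDs, phone numbers and outcome codes are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

MAX_ATTEMPTS = 5  # assumption: cap on call-backs per household


@dataclass
class Case:
    """One sampled household in the telephone survey."""
    household_id: str
    phone: str
    attempts: list = field(default_factory=list)
    completed: bool = False

    def record_attempt(self, outcome: str) -> None:
        # Outcome codes are illustrative: 'completed', 'no_answer', 'refused'
        self.attempts.append((datetime.now(), outcome))
        if outcome == "completed":
            self.completed = True


def next_callbacks(cases: list) -> list:
    """Households still eligible for a call-back, fewest attempts first."""
    pending = [c for c in cases
               if not c.completed and len(c.attempts) < MAX_ATTEMPTS]
    return sorted(pending, key=lambda c: len(c.attempts))


def response_rate(cases: list) -> float:
    """Share of the sample with a completed interview."""
    return sum(c.completed for c in cases) / len(cases)


# Example: two call attempts for one household, then the overall rate.
cases = [Case("HH001", "+254700000001"), Case("HH002", "+254700000002")]
cases[0].record_attempt("no_answer")
cases[0].record_attempt("completed")
print(next_callbacks(cases))                          # only HH002 remains queued
print(f"response rate: {response_rate(cases):.0%}")   # 50%
```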
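And a minimal sketch of linking a remote round back to an earlier in-person baseline, assuming (hypothetically) two CSV files that share a stable household identifier; the file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical files: an in-person baseline and the remote midline round.
# The key assumption is a stable household identifier shared across rounds.
baseline = pd.read_csv("baseline_in_person.csv")  # e.g. household_id, energy_spend
midline = pd.read_csv("midline_remote.csv")       # same identifier, remote round

# Inner join keeps the panel: households observed in both rounds,
# with round-specific suffixes so indicators can be compared over time.
panel = baseline.merge(midline, on="household_id",
                       suffixes=("_baseline", "_midline"))

# Attrition check: share of the baseline sample re-contacted by phone.
print(f"re-contacted {len(panel) / len(baseline):.0%} of baseline households")

# Over-time comparison on a shared indicator (hypothetical variable name).
change = panel["energy_spend_midline"] - panel["energy_spend_baseline"]
print(change.describe())
```
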
4) Tailor the evaluation outputs to the revised scope of the evaluation.

Adjusting the type of data that is collected should be accompanied by reconsidering the ways in which results are presented to different audiences. This will ensure that the different evaluation outputs speak to the different objectives of the expanded research. It is crucial that the evidence produced is useful for, and accessible to, policy makers.

For the Mwangaza Mashinani evaluation, this resulted in deriving two separate outputs from the remote survey: firstly, a short policy brief presenting the covid-19 evidence only; and secondly, a full-fledged evaluation report on the impact of the pilot project and the efficiency of its implementation approach. This way, each research objective is addressed by its own dedicated output, and reporting on covid-19 indicators is prioritised so that evidence is rapidly available to feed into the policy response.

Looking ahead

By rethinking the research scope, survey mode and evaluation timeframe in consultation with all the stakeholders involved, valuable evidence on programme impact can still be gained, while additional insights into the effects of covid-19 can be gathered and shared with policy makers to maximise their usefulness. The need to adapt impact evaluations (and any other research based on primary data collection) to these unique and rapidly changing circumstances will remain for the foreseeable future. As long as our understanding of the most effective and innovative ways to adapt our designs and methods continues to grow in parallel, crucial learning will not be lost and policy making can continue to be informed by evidence-based recommendations.

Dr Michele Binci is the Team Leader and Alex Doyle is the Project Manager for the Mwangaza Mashinani evaluation. Feel free to reach out directly to Michele (@email) and Alex (@email) if you want to know more about the project and/or the evaluation.
