About the Monitoring and Evaluation team

Monitoring and evaluation is essential to understanding what works and why in policies and programmes designed to reduce poverty and disadvantage.

Paul Jasper

Senior Consultant

OPM United Kingdom

Paul Jasper is the data innovation lead in the Data Analytics Portfolio at OPM and works on designing, implementing, and managing quantitative impact assessments and data analytics projects along the entire data cycle.

Nils Riemenschneider

Nils Riemenschneider is the lead of the M&E Systems Unit and has worked for 18 years on surveys, monitoring, and evaluation. He has extensive experience in the design and implementation of monitoring and evaluation systems.

Michele Binci

Principal Consultant

OPM United Kingdom

Michele Binci is a Principal Consultant who has many years of experience working on impact analysis across several thematic fields, including poverty and food security, schooling and child labour, health and nutrition, gender, and migration. He manages the Quantitative Measurement and Impact Estimation team, and chairs the Quantitative …

Fernanda Carneiro


OPM United Kingdom

Drawing on a multidisciplinary background in law, economics, and management, Fernanda Carneiro has strong experience of building monitoring and evaluation systems in challenging contexts. Having previously worked with NGOs and local partners, she has supported the development of monitoring and evaluation systems across a range of different types of projects in …

Emma Jones

Principal Consultant

OPM United Kingdom

Emma Jones is a principal consultant and lead of the voice, empowerment, and accountability (VEA) hub. Emma has extensive experience of managing, evaluating, and supporting VEA programmes. She also has expertise in gender and inclusion, community engagement, social norms and behavioural change, nutrition, and social protection.

The M&E portfolio is organised into four hubs around our core areas of work.

Independent evaluation

We have considerable experience in carrying out all types of independent evaluations across our areas of sectoral expertise, and for a variety of donor and government clients. To maximise the impact of our work, we aim for our evaluations to:

  • deliver credible and useful recommendations to inform policy;
  • involve beneficiaries and other stakeholders throughout the evaluation process;
  • incorporate high-quality designs based on rigorous quantitative and qualitative methods, selected in accordance with the objectives and context of the evaluation; and
  • communicate findings effectively to relevant audiences.

We pride ourselves on adopting current good practice in our evaluations. The ability to draw on a toolkit of designs and methods, as well as to innovate based on a sound understanding of established techniques, is important in undertaking high-quality, influential evaluations—particularly in traditionally under-evaluated policy areas or for complex programmes. We have wide-ranging experience in conducting process, impact, and economic evaluations at the appropriate stage of policy and programme development, and in addressing the internationally recognised OECD DAC criteria when evaluating the use of aid money.

Quantitative impact measurement

We are particularly recognised for our in-house expertise in quantitative impact measurement techniques. We have implemented over 30 rigorous quantitative and mixed-methods impact evaluations across multiple sectors worldwide. We have a strong record in designing and implementing experimental and quasi-experimental impact evaluations to understand the effects of interventions on particular groups, whether positive, negative, intended or unintended.

Our quantitative impact evaluations often rely on primary data collection through large-scale sample surveys, a core strength of OPM. We have the capacity to undertake all stages of survey design, implementation, analysis, and dissemination, working closely alongside staff in our international offices to carry out fieldwork and quality assure data.

  • Experimental methods: when feasible and appropriate, we use experimental methods (i.e. randomised controlled trials) to provide statistically robust measures of programme impact. If implemented correctly, randomising which individuals or communities are affected by the policy creates two groups that are by construction statistically identical to each other. Under a few assumptions, any subsequent differences can therefore be attributed with a high degree of confidence to the effects of the intervention. We have undertaken experimental impact evaluations in the fields of education, social protection, nutrition, and WASH.
  • Quasi-experimental methods: when it is not feasible or desirable to randomise, we use quasi-experimental methods. Broadly speaking, these methods construct a comparison group that statistically resembles the treatment group as closely as possible. Under certain assumptions, these designs can also provide an unbiased estimate of the net impact of the intervention. We have experience in propensity score matching, regression discontinuity, difference-in-difference, and interrupted time series techniques.
  • Mixed-methods approaches: We often find it useful to combine quantitative impact analysis with qualitative research methods to help understand the 'how' and 'why' questions and to complete the picture by addressing gaps that quantitative research methods typically cannot fill. Indeed, almost all of our experimental and quasi-experimental evaluations have been mixed-method studies.

Internal M&E systems

We have extensive experience in designing and implementing programme and project M&E systems, including M&E frameworks, monitoring, management information systems (MIS), and internal evaluation functions. We provide M&E systems for both government institutions and donor-funded programmes operating outside government. Our experience shows that successful M&E systems address supply- and demand-side factors concurrently: they ensure the provision of reliable, high-quality information, as well as the actual demand for and use of that information by actors at different levels to support core programme and government activities (including internal and external accountability).

There is growing demand for the services of the M&E systems unit. This success is a direct result of our focus on making M&E useful for implementers, and of drawing on our wide experience to identify the key aspects of impact-oriented M&E. To date, the M&E systems unit has worked in more than 20 countries worldwide.

Research uptake (ResUp)

We have established a ResUp hub to facilitate learning and build collective capacity around research uptake across the organisation. Research uptake aims to improve the impact of our work and ensure that the evidence we produce is understood and used by policymakers and other stakeholders. Planning and implementing better research uptake strategies, and systematically monitoring the results, will enable us to learn about the impact and influence of our work.

The ResUp hub enables the management and sharing of knowledge and acts as a source of expertise: it develops capacity across the organisation in relevant skills, provides higher-order specialist skills, and conducts research and market analysis on research uptake and data visualisation. It also promotes research and innovation, and supports projects and teams on research uptake and data visualisation.