New command covers the full range of design and sampling needs for conducting robust impact evaluations
Impact evaluations are an important tool for international development as they generate evidence on whether development interventions work. In October 2019, Abhijit Banerjee, Esther Duflo, and Michael Kremer were awarded the Nobel Prize in Economic Sciences for their experimental approach to alleviating global poverty – using randomised controlled trials to generate robust evidence to guide public policy reforms.
To assess the changes that can be attributed to a particular policy or programme, impact evaluations compare outcomes in a group that benefitted from an intervention with outcomes in a comparison group that did not take part in it.
Determining an appropriate sample size for both groups is crucial to designing an effective impact evaluation. This is normally done through a power calculation, which establishes the sample size needed to detect an effect of a given size with statistical confidence.
We have extensive experience and expertise in designing and implementing quantitative impact evaluations, ranging from randomised controlled trials to complex quasi-experimental designs based on different analytical approaches. We also often use stratified and cluster sampling and run our impact analyses across multiple rounds of data collection. This complexity in design and sampling needs to be taken into account when determining the sample size, but no existing power calculation tool allowed us to do so fully. To bridge this gap, we developed a new power calculation command.
Our new command allows users to choose which output they would like to determine, out of the three crucial ones related to a power calculation:
- Minimum detectable effect – the smallest true improvement that can be detected with statistical significance, given chosen limits on the rates of false positive and false negative errors. This helps to weigh the expected impact of a project against the effort, such as cost, required to achieve it.
- Power – the probability of correctly rejecting the null hypothesis when the alternative hypothesis is true, for a given sample size and assumed effect of the intervention. This helps to determine how reliably a chosen design will detect a real impact.
- Sample size – the number of units (e.g. individuals, households, pupils, schools) that need to be sampled and used in the analysis to assess a potential impact of the project, given the significance level, power, and effect size above.
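To illustrate how these three outputs interlock, the sketch below computes each one for the simplest case – comparing the means of two equal-sized groups – using the standard closed-form normal approximation. The function names and formulas here are our own illustration, not the command's implementation, which is written in Stata and handles far more general designs.

```python
from statistics import NormalDist

# Illustrative closed-form power calculations for a two-sided test comparing
# the means of two equal-sized groups, under a normal approximation.
_z = NormalDist()

def sample_size_means(mde, sd, alpha=0.05, power=0.80):
    """Per-arm sample size needed to detect a difference `mde` in means."""
    z_a = _z.inv_cdf(1 - alpha / 2)   # critical value for the significance level
    z_b = _z.inv_cdf(power)           # quantile corresponding to the target power
    return 2 * sd**2 * (z_a + z_b) ** 2 / mde**2

def power_means(mde, sd, n, alpha=0.05):
    """Power achieved with `n` units per arm for a true difference `mde`."""
    z_a = _z.inv_cdf(1 - alpha / 2)
    se = sd * (2 / n) ** 0.5          # standard error of the difference in means
    return _z.cdf(mde / se - z_a)

def mde_means(sd, n, alpha=0.05, power=0.80):
    """Minimum detectable effect with `n` units per arm."""
    z_a = _z.inv_cdf(1 - alpha / 2)
    z_b = _z.inv_cdf(power)
    return (z_a + z_b) * sd * (2 / n) ** 0.5
```

Fixing any two of the three quantities determines the third: plugging the computed sample size back into `power_means` recovers the target power, and plugging it into `mde_means` recovers the original effect size.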
This is an improvement over most existing commands, which typically only allow users to calculate sample size and either power or the minimum detectable effect. In addition, users can calculate the above outputs for either proportions (e.g. the proportion of pupils in a performance band) or means (e.g. the average number of pupils); and tailor their power calculations to the type of impact evaluation design (or other quantitative data analysis setting) that they are employing.
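For proportions, the same logic applies with the variance of the outcome derived from the proportions themselves. A minimal sketch of the sample-size calculation for comparing two proportions, again an illustrative normal approximation rather than the command's own routine:

```python
from statistics import NormalDist

def sample_size_props(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a change from proportion p1 to p2
    (two-sided test, unpooled normal approximation; illustrative only)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)  # sum of the two Bernoulli variances
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2
```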
Our command allows users to account for a larger and more comprehensive range of design parameters than most available power commands, including difference-in-differences and regression discontinuity settings, clustering, inter-temporal correlation, variance, and finite population corrections, amongst others.
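Clustering, for instance, is commonly handled by inflating a simple-random-sample size by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intra-cluster correlation. The helper functions below are our own illustration of this standard adjustment, not the command's code:

```python
def design_effect(cluster_size, icc):
    """Design effect for one-stage cluster sampling: DEFF = 1 + (m - 1) * ICC,
    where m is the average cluster size and ICC the intra-cluster correlation."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sample size to allow for clustering."""
    return n_srs * design_effect(cluster_size, icc)
```

Even a modest intra-cluster correlation can nearly double the required sample: with 20 units per cluster and an ICC of 0.05, a simple-random-sample requirement of 400 grows to 780.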
The new command was designed and developed by Michele Binci, Paul Jasper, and Virginia Barberis. Listen to Michele and Paul explain the importance of the tool and provide a practical example of its use.
Discover more about our evaluation of the Education Quality Improvement Programme in Tanzania (EQUIP-T), and the evaluation design used to analyse the impact of the programme. Click here to download the tool (Stata software is needed), command helpfile, and short guidelines on how to use it.
Note: The power command tool is still in a beta version and the authors welcome any comments or suggestions for improvements. Please contact Michele ([email protected]) or Paul ([email protected]) to share your feedback.