Improving the efficiency of programme evaluations with data science

Exploring how artificial intelligence and machine learning can improve programme evaluations

We conducted a literature review exploring how text analytics, artificial intelligence, and machine learning techniques, including natural language processing, can be used to improve the processes for implementing programme evaluations.

This work focused on identifying how these techniques could scale up the processes used by Global Affairs Canada (GAC), and on presenting options for testing the identified approaches in practice.

The challenge

GAC regularly carries out evaluations of large programmes that span many projects and countries, run over several years of implementation, and involve significant funds. For example, the Canadian Maternal, Newborn, and Child Health (MNCH) programming (MNCH 1.0 and MNCH 2.0) spans almost a decade, covers 900 projects, and involves over CAD $3 billion in funding. To evaluate such large programmes, GAC currently reviews and analyses large samples of project documentation manually. Reviewing the full set of project documentation is not feasible, given the amount of text stored in these corpora. For instance, by one estimate provided by GAC, reviewing all project documentation related to the MNCH programmes would take over 1,000 years of one reviewer’s time.

The objective of the project was to identify analytical methods that could improve the efficiency of such evaluations, both when selecting projects for inclusion in the evaluation and when reviewing the selected documents.

Our approach

We reviewed the process by which GAC currently implements programme evaluations and identified three main steps where modern data science methods could most usefully be applied:

  • the selection of projects to be included in programme evaluations
  • the selection of documents to be included in the review of the identified projects
  • the coding and analysis of document reviews.

By reviewing relevant literature and descriptions of pilot projects implemented in policy contexts, we suggested appropriate methods that could be tested at each of these steps.

Outcomes and wider impacts

In a final report, we provided recommendations for methods that could be used in the context of GAC programme evaluations. These covered a variety of approaches, including topic modelling, sentiment analysis, and information retrieval and extraction methods such as Word2Vec embeddings and named entity recognition. The report also covered the data pre-processing steps that would be required to test these methods, including automatic text extraction from imagery and PDF files, as well as the infrastructure requirements. Finally, we suggested a way forward for piloting and testing these methods.
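
To make these recommendations concrete, the sketch below illustrates two of the suggested techniques, topic modelling and named entity recognition, on a handful of invented example sentences. It is a minimal illustration rather than GAC's actual pipeline: the choice of gensim and spaCy, the sample texts, and all parameter values are our own assumptions.

```python
# Minimal sketch of topic modelling (gensim LDA) and named entity
# recognition (spaCy). Library choices, sample texts, and parameters
# are illustrative assumptions, not GAC's actual pipeline.
# Requires: python -m spacy download en_core_web_sm
import spacy
from gensim import corpora
from gensim.models import LdaModel

nlp = spacy.load("en_core_web_sm")  # small English spaCy pipeline

def preprocess(text):
    """Lemmatise and keep alphabetic, non-stop-word tokens."""
    return [t.lemma_.lower() for t in nlp(text)
            if t.is_alpha and not t.is_stop]

# Hypothetical stand-ins for excerpts of project documentation
raw_docs = [
    "Midwife training in rural clinics improved maternal health outcomes.",
    "Vaccination campaigns reduced child mortality in target districts.",
    "Procurement delays slowed delivery of newborn care equipment.",
    "Community health workers expanded maternal nutrition programmes.",
]

tokens = [preprocess(doc) for doc in raw_docs]
dictionary = corpora.Dictionary(tokens)
bow_corpus = [dictionary.doc2bow(toks) for toks in tokens]

# Fit a small LDA model; num_topics would be tuned on a real corpus
lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=2,
               random_state=42, passes=20)
for topic_id, terms in lda.print_topics(num_words=5):
    print(f"Topic {topic_id}: {terms}")

# Named entity recognition: surface organisations, places, dates, etc.
for doc in raw_docs:
    print([(ent.text, ent.label_) for ent in nlp(doc).ents])
```

In practice, the same preprocessing would sit downstream of the PDF and image extraction steps noted above, and the resulting topics and entities could feed both the selection of projects and documents and the coding of document reviews.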
