Supporting evaluation at USAID through a global research network

Lowering the barriers for researchers to collaborate directly with those who set development policy means innovations reach the field sooner.

September 4, 2017
Jessica Wells
AidData Geospatial Assistant Qiao Li leads an Intro to ArcGIS training session during the first AidData Research Consortium (ARC) convening in January 2014. Photo by AidData, all rights reserved.

Editor’s Note: This post is the fourth in a series looking back at some of the lessons learned and accomplishments of the AidData Center for Development Policy (ACDP) over the past five years, funded by the U.S. Global Development Lab under USAID's Higher Education Solutions Network Program. The ACDP is a partnership between AidData at the College of William & Mary, Development Gateway, the University of Texas at Austin, Brigham Young University, and Esri. USAID has granted a no-cost extension through September 30, 2022 to enable continued collaboration between the ACDP and USAID units.

Under what conditions does foreign aid work?

It’s a $144 billion question. (That’s the amount the OECD estimates its members spent on development assistance in 2016.) In recent years, development workers have been pushing the field to bring more rigor to bear in evaluating policies and programs for evidence of impact; 2015 was even declared the International Year of Evaluation.

While most development organizations now use some form of performance evaluation, these evaluations frequently rely on easily collectable data, like the number of schools built or bed nets distributed, and often fail to collect data on control locations. Using such low-hanging fruit makes evaluations cheaper, but absent sufficient information on both treated and non-treated locations, little can be said about the actual impact experienced by project beneficiaries.
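To see why data on non-treated locations matters, consider a minimal sketch with invented numbers: a simple difference-in-differences comparison, one common way to net out background trends. Without the control locations, a naive before-and-after change would overstate the project's contribution. (The outcome, data, and estimates below are purely hypothetical and for illustration only.)

```python
# Illustrative sketch only: hypothetical outcome data for treated and
# control locations, showing why observations from non-treated locations
# are needed to estimate impact. All numbers are invented.

import statistics

# Hypothetical average enrollment rates before and after a project.
treated_before, treated_after = [62, 58, 65, 60], [74, 70, 77, 72]
control_before, control_after = [61, 59, 64, 63], [68, 66, 70, 69]

def mean_change(before, after):
    """Average change in the outcome between the two periods."""
    return statistics.mean(after) - statistics.mean(before)

# Naive estimate: before/after change in treated locations only.
naive_estimate = mean_change(treated_before, treated_after)

# Difference-in-differences: subtract the change in control locations,
# which captures the background trend the project did not cause.
did_estimate = naive_estimate - mean_change(control_before, control_after)

print(f"Naive before/after estimate: {naive_estimate:.1f} points")
print(f"Difference-in-differences estimate: {did_estimate:.1f} points")
```

In this toy example, the naive estimate credits the project with a 12-point gain, while netting out the trend in control locations cuts the estimated impact to 5.5 points, which is exactly the kind of correction that is impossible when no control data were collected.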

The challenge with more rigorous evaluation methods is that they have historically required more time, effort, and funding, all of which are in short supply. Given that every dollar spent on evaluation is a dollar that won’t be spent on school construction or bed net distribution, how can we encourage large development organizations to rethink their evaluation policies? One solution is to actively engage university researchers who are developing cutting-edge methods that reduce the cost and time of evaluations while maintaining high standards of rigor.

Research, meet policy

Recognizing the gap between current levels of academic engagement with the policy community and a growing desire to increase the use of impact evaluations, AidData’s cooperative agreement with the U.S. Global Development Lab at USAID created new channels that lowered the transaction costs for USAID to collaborate with academic researchers. The AidData Research Consortium (ARC) now works with 120+ scholars from over 50 universities and institutions around the world to study how foreign assistance is targeted and evaluate its impact, turning their insights into practice with development organizations in the field.

With AidData’s cooperative agreement breaking down the barriers between the academy and policy, ARC researchers have been uniquely positioned to engage with USAID. ARC researchers have partnered with a variety of USAID Missions to conduct rigorous impact evaluations of USAID-led interventions in Colombia, Niger, Georgia, Ghana, and Afghanistan, and these partnerships have helped mainstream evaluation procedures throughout USAID. The USAID Mission in Colombia, for example, wanted to understand the impact of a project focused on improving subnational governance in 40 conflict-affected municipalities to decrease violence and increase stability.

Given that AidData had already set up a contracting vehicle with USAID through the cooperative agreement, the Colombia Mission was able to buy into the award to obtain the necessary funding and work with AidData to identify and bring top researchers on board quickly. This meant that ARC researchers Dr. Mike Findley and Dr. Joseph Young were able to jump in and collaborate directly with the Mission to determine the scale and extent of the evaluation. (The evaluation is currently underway and is expected to wrap up in 2019.)

Valuing ideas for evaluation

The ARC has also helped lower the costs for researchers to collaborate directly with those who make development policy. Traditionally, university scholars who want to work with development organizations must respond to large-scale solicitations, a time-consuming process in which individual researchers struggle to compete against larger D.C. contracting firms. By creating a network of world-class scholars who are easily accessible through an already-established partnership, AidData has both simplified the process for researchers to engage directly with USAID Missions and Operating Units and increased the number of rigorous impact evaluations carried out at the agency.

As relationships between USAID and ARC researchers continue to grow, their skills and expertise will help provide the evidence base needed to better inform policy decisions and answer that crucial question: when does aid have impact? ARC researchers will further explore innovative methods for evaluating and understanding impact when they convene this fall to present new findings from October 19-20 in Washington, D.C.

Jessica Wells is a Research Scientist at AidData, where she co-leads the Gender Equity in Development initiative.