
Why political short-sightedness and randomised controlled trials can be a deadly mix for aid effectiveness

The recent focus on impact evaluation within development economics has led to increased pressure on aid agencies to provide evidence from randomised controlled trials. This column argues that this pressure reinforces a political bias towards immediately verifiable and media-packaged results at the expense of more long-term and complex processes such as institutional development.

The recent focus on impact evaluation within development economics has led to increased pressure on aid agencies to provide evidence from randomised controlled trials (RCTs). This is in many ways a good thing; RCTs can improve our understanding of which interventions work and introduce stronger incentives for aid agencies to clearly motivate their allocation of resources. However, the focus on precise measurement means that RCTs typically capture only part of the relevant effects of an intervention. This can reinforce an existing political bias towards immediately verifiable and media-packaged results, at the expense of more long-term and complex processes of recipient-country learning and institutional development. A methodological advance that would bring unambiguous benefits in a first-best world may thus carry unexpected drawbacks if made a requirement in a politically biased reality.

Why aid policy fails: Two conflicting narratives

At the risk of oversimplification, it is possible to identify two influential narratives of why aid has not been more effective. According to the first narrative, a main problem with current aid policy in most Western donor countries is that aid agencies are reluctant to change, overly bureaucratic, and slow to adapt their strategies in response to new developments. Agencies are accused of institutional inertia: programmes and projects keep getting financed despite doubts about their effectiveness. This inertia is made possible by a lack of accountability and transparency towards taxpayers as well as the ultimate beneficiaries.

Some of these problems may be inherent to the complexity of aid itself, but they also reflect how aid is organised and how aid agencies operate. Both Easterly and Pfutze (2008) and Birdsall and Kharas (2010) report a lack of response from most aid agencies when asked for information on different aspects of how they spend their money, and only ten out of 31 agencies pass Easterly and Pfutze's 'transparency test'. This reluctance is interpreted as a sign of institutional laziness, or of a concern that revealing the true impact of aid would undermine public support for it. In this narrative, aid effectiveness would benefit from stronger incentives for aid agencies to generate verifiable outcomes, and politicians and taxpayers should be given better instruments to hold agencies accountable.

In the second narrative, the main problem with aid policy is impatience with institution building (e.g. Birdsall 2004). Development is seen as a long-term process of creating and sustaining strong economic and political institutions, and the learning, cultural immersion, and changing norms that come with aid are deemed crucial even though their impact is hard to evaluate. In this context, local participation and ownership are necessary for the build-up of experience, know-how, and a sense of responsibility. Unfortunately, ownership and learning can run counter to what maximises the chances of a successful project in the short run, as indirectly suggested by donors' persistence in creating so-called project implementation units to avoid using recipient-country management structures (Birdsall and Kharas 2010).

So where does this impatience with institution building come from? The typical explanation is a political need to show immediate results. This creates a bias towards actions that, first, generate direct results and, second, can be evaluated relatively easily. The risk is a tilt towards project rather than programme aid, and towards evaluations that focus on a narrow set of easily quantifiable outcomes. Knack and Rahman (2007) argue that aid agencies, in order to satisfy domestic constituencies in parliament, are pushed into "making the results of aid programs visible, quantifiable, and directly attributable to the donor's activities – even when doing so reduces the developmental impact of aid." In this narrative, a solution entails delegating authority to independent aid agencies with a more long-term perspective on development.

The role of randomised controlled trials

Proponents of the first narrative point to RCTs as a potential remedy. RCTs offer an improved methodology for evaluating the effectiveness of aid-financed projects, thereby helping governments and taxpayers hold aid agencies accountable. Some proponents have gone as far as to argue that aid money should be channelled exclusively to projects shown to work through RCTs, dismissing alternative evaluation methods as lacking internal validity (Banerjee 2007). As discussed in, for instance, Ravallion (2009), however, the RCT methodology also has weaknesses. Of particular relevance for this column are concerns about the impact on development policy if amenability to an RCT becomes a requirement for financing.

First, not all interventions that governments implement to fight poverty can be randomised. Macroeconomic policies, infrastructure projects, public sector reform, and institutional development typically belong to this group. Generally, the bigger and more complex the question, the harder it is to design an RCT to evaluate it. In the end, the methodology, rather than an analysis of the greatest needs or the most urgent bottlenecks, may determine which projects get implemented. Second, RCTs typically focus on only a subset of outcomes: those that are most easily observed and quantified. These outcomes may not be the most important ones, so RCTs generate precise answers, but not necessarily to the most relevant questions. Learning and institutional development risk being treated as externalities if they cannot be quantified and packaged in the same way as, say, the number of children vaccinated, the number of malaria bed nets distributed, or test scores in elementary schools. Excessive pressure to generate impressive immediate outcomes is a valid concern if it comes at the cost of learning and institutional development. This is not a critique of randomisation as such, but it highlights that in a political environment that already favours immediate outcomes, requiring RCTs may distort how projects and programmes are implemented.

In Olofsgård (2012) I analyse, in a multiple-task principal-agent model, the potential consequences of introducing a new and improved methodology (such as an RCT) for evaluating part of the effects of an aid-financed intervention. The model assumes resistance to change, so the government needs to provide incentives for the agency to adjust its project or programme portfolio in response to new evidence on what works and what doesn't. The benefit of better evaluations is that the government can attach stronger incentives to appropriate project selection. The problem with overly strong incentives, however, is that better immediately observable outcomes can also be achieved by reallocating resources (financial and human) within the project towards those ends, at the expense of outcomes that are long-term and harder to evaluate, such as learning and institutional development. A government that internalises these long-term effects will therefore strengthen incentives in response to the improved methodology, but not so much that it reduces the development impact of the project by inducing a sharp reduction in learning and institutional development. If, on the other hand, the government only cares about immediately observable outcomes, then the new methodology will motivate a more substantial increase in incentives, since the negative long-term consequences are not internalised. In that case the incentive contract does not optimally balance the tradeoff between improving project selection and the allocation of resources within projects. If the negative effect on the latter dominates, the improved methodology, in combination with short-sighted politics, may even reduce the effectiveness of aid.
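To fix ideas, here is a minimal stylised sketch in the spirit of the multiple-task framework; the functional forms and parameters below (quadratic costs, the substitutability parameter $\gamma$, the intrinsic benefit $\kappa$, the impact weight $\lambda$) are illustrative assumptions of mine, not the exact specification in Olofsgård (2012). The agency divides effort between a measurable task $e_m$ and institution building $e_i$:

\[
y = e_m + \varepsilon, \quad \varepsilon \sim N(0,\sigma^2), \qquad
V = e_m + \lambda e_i, \quad \lambda > 0,
\]

where only the noisy signal $y$ is contractible (an RCT lowers $\sigma^2$) and $V$ is the true development impact. Suppose a risk-averse (CARA) agency is paid a linear contract $w = \alpha + \beta y$ (carrying a risk premium $\tfrac{1}{2} r \beta^2 \sigma^2$, so that better measurement makes high-powered incentives cheaper), faces the effort cost $C(e_m, e_i) = \tfrac{1}{2} e_m^2 + \tfrac{1}{2} e_i^2 + \gamma e_m e_i$ with $0 < \gamma < 1$ (the two tasks compete for resources), and derives an intrinsic benefit $\kappa e_i$ from institution building. The agency's first-order conditions then give

\[
e_m = \frac{\beta - \gamma \kappa}{1 - \gamma^2}, \qquad
e_i = \frac{\kappa - \gamma \beta}{1 - \gamma^2}, \qquad
\frac{\partial V}{\partial \beta} = \frac{1 - \lambda \gamma}{1 - \gamma^2}.
\]

Sharper incentives on the measured outcome crowd out institution building ($\partial e_i / \partial \beta < 0$), and once $\lambda \gamma > 1$ they lower true impact. A patient government that internalises $\lambda e_i$ raises $\beta$ only moderately when an RCT lowers $\sigma^2$; a short-sighted government that drops $\lambda e_i$ from its objective raises $\beta$ further, and the better measurement can then reduce $V$.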

Conclusions

What my simple model illustrates is that in a second-best world, where politicians are too impatient or under too much political pressure to generate immediately verifiable results, the introduction of a technology that can more accurately evaluate part of the effects of aid projects may be misused and end up being counterproductive. An innovation that would normally improve the effectiveness of aid instead reinforces an existing pattern of resource misallocation. A first-best solution would, of course, be to have patient politicians and make full use of the better evaluation technology. If this is not realistic, then one should be cautious about arguing for the exclusive use of RCTs until learning and institutional development can be measured with similar accuracy.

References

Banerjee, A. (2007), Making Aid Work, The MIT Press, Cambridge, Massachusetts.

Birdsall, N. (2004), "Seven Deadly Sins: Reflections on Donor Failings", CGD Working Paper 50.

Birdsall, N. and H. Kharas (2010), "Quality of Official Development Assistance Assessment", CGD, Washington DC.

Easterly, W. and T. Pfutze (2008), "Where Does the Money Go? Best and Worst Practices in Foreign Aid", Journal of Economic Perspectives 22(2), 29-52.

Knack, S. and A. Rahman (2007), "Donor Fragmentation and Bureaucratic Quality in Aid Recipient Countries", Journal of Development Economics 83, 176-197.

Olofsgård, A. (2012), "The Politics of Aid Effectiveness: Why Better Tools Can Make for Worse Outcomes", Working Paper.

Ravallion, M. (2009), "Should the Randomistas Rule?", Economists' Voice 6.
