
Impact evaluation in trade: Time for a cultural revolution?

With finances getting ever tighter in developed countries, policymakers are starting to ask whether giving money to developing countries can still be justified. Take Aid for Trade: billions have been spent with little robust evidence of its effectiveness. This column argues that trade policy needs to learn from other development work and start conducting rigorous impact evaluations – otherwise the best programmes could easily get cut.

If judged by the money mobilised, the Aid for Trade initiative – which aims to help low-income countries integrate into the global economy – is already a success, with funding of up to $40 billion by 2010. If judged by broad indicators such as the share of low-income countries in world exports – up, albeit modestly, from 0.77% in 2000 to 1.04% in 2010¹ – it could also be seen as a success. There is a problem, though: it is not entirely clear that aid has caused these trade outcomes.

There were times when donors were content merely to see aid disbursed, or to see any association between aid and favourable outcomes – perhaps because development aid was seen as a sort of moral obligation. But the winds are shifting. A July 2010 Harris poll featured in the Financial Times showed budget deficits to be a prime concern for respondents in most OECD donor countries; moreover, development aid stood out as the number-one candidate for cuts. With rising concerns about jobs and competitiveness, Aid for Trade might well become an increasingly hard sell to donors.

A strong emphasis on results and accountability could help reconcile the continued need for trade-related assistance with growing budgetary pressures in donor countries. Yet so far the development community has struggled to respond to these demands, and there is surprisingly little evidence about what works and what doesn’t in the area of trade and industrial policies. For instance, out of the 85 World Bank trade-related interventions that started between 1995 and 2005, only five were evaluated rigorously using a control group as a benchmark. Most evaluations were qualitative – interviews with focus groups and satisfaction questionnaires – and even those involving some sort of quantitative exercise were typically based on simple before-after comparisons, known to be vulnerable to many confounding influences.

Can we do better?

Can trade assistance be evaluated the way development economists evaluate other interventions (eg Banerjee and Duflo 2009)? Impact-evaluation methods have proved to be powerful tools for guiding policy choices in other areas of development work, such as health and education. The usual excuse for not using them to assess the effectiveness of trade assistance is that trade policy lacks the ‘clinical’ nature of treatment needed for a proper definition of treatment and control groups. But the nature of trade interventions is shifting from economy-wide tariff reforms to focused interventions – technical assistance, export promotion – that are targeted directly at firms. Such interventions have the clinical dimension needed to construct credible control groups, which is the essence of impact evaluation.

Government administrations do have difficulty with randomised controlled trials, but there is considerable flexibility to make them more palatable. For instance, lotteries can be organised among eligible firms. Alternatively, randomisation can apply to programme promotion – using an ‘encouragement design’ – rather than to eligibility. Most importantly, one should not think of impact evaluation exclusively in terms of randomised controlled trials. ‘Quasi-experimental’ methods instead rely on careful econometrics ex post. They include, among others, difference-in-differences regression (the crudest), propensity-score matching (which compares treated entities with the most similar non-treated ones), and regression-discontinuity design (which compares entities in the neighbourhood of an eligibility cut-off).
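To make the quasi-experimental idea concrete, the following is a minimal difference-in-differences sketch in Python on synthetic data; it is not taken from the column or the volume, and the variable names (log_exports, treated, post), the sample size, and the simulated 0.20 programme effect are purely illustrative assumptions.

# Illustrative difference-in-differences sketch on synthetic firm-level data.
# Nothing here comes from an actual Aid for Trade programme; the 0.20 'true'
# effect, the noise, and all variable names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_firms = 500
treated = rng.integers(0, 2, n_firms)        # lottery-style assignment among eligible firms

panel = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), 2),
    "post": np.tile([0, 1], n_firms),         # 0 = before the programme, 1 = after
    "treated": np.repeat(treated, 2),
})

# Synthetic outcome: common time trend (0.30) + 0.20 treatment effect + noise
panel["log_exports"] = (
    5.0
    + 0.30 * panel["post"]
    + 0.20 * panel["treated"] * panel["post"]
    + rng.normal(0.0, 0.5, len(panel))
)

# The coefficient on the treated:post interaction is the difference-in-differences
# estimate of the programme effect on log exports; standard errors are clustered by firm.
model = smf.ols("log_exports ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]}
)
print(model.params["treated:post"], model.bse["treated:post"])

In a real evaluation the treatment and timing indicators would come from programme records rather than simulation, and the parallel-trends assumption behind the comparison would need to be examined before the estimate could be trusted.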

In a new volume, we present examples of impact evaluations of trade-related assistance using a range of methods (experimental and non-experimental), highlighting the challenges that arise in a trade context and the lessons already being learned (Cadot et al 2011). These methods have been applied in a number of recent studies and have produced interesting and unexpected results. For example, in an ex post evaluation of export-promotion programmes in six Latin American countries using rich firm-level datasets, Christian Volpe (2011) shows that these programmes were effective in facilitating export expansion primarily along the extensive margin (an increase in the number of products exported or in the number of export markets served) rather than along the intensive margin (an increase in exports of existing products to existing markets). He also shows that the programmes benefited small and relatively inexperienced firms more than larger, already established exporters, and that bundled services supporting firms throughout the export development process were more effective than isolated actions.

Another example in the volume is Mohini Datt and Dean Yang’s (2011) evaluation of efforts by the government of the Philippines to use preshipment inspection services to combat corruption in customs and to increase import-duty collections. They find that when inspections were expanded to lower-valued shipments, import shipments were no longer misvalued, but shipments from affected countries shifted differentially to an alternative duty-avoidance method – shipping via duty-exempt export processing zones. Increased enforcement thus reduced the targeted method of duty avoidance but led to substantial displacement towards the alternative. Duty collection failed to rise, while importers incurred higher fixed costs as they relocated to export processing zones. This evidence shows that, to be successful, anticorruption reforms need to encompass the full range of alternative methods of committing the illegal activity.

We cannot deny that cost and incentive issues pose a challenge to conducting impact evaluations of trade projects, in particular because the two are linked. As they stand, the incentive structures facing project managers rarely encourage diverting money from operational work to cumbersome evaluation exercises that carry more downside than upside for career advancement, with most of the learning benefits accruing to future projects. The answer is that impact evaluations should be viewed not as yet another monitoring mechanism to look over the shoulder of project managers, but rather as a way of using development work to generate quasi-experimental settings so as to learn from experience – what Ravallion (2008) called “evaluative research” and Chinese leader Deng Xiaoping “feeling our way across the river”. Most of the learning in trade economics still comes from ‘natural experiments’, ie fortuitous circumstances in which a shock or policy intervention unintentionally generates identification.² But natural experiments are few and far between. Much more could be learned, and much faster, if trade interventions were systematically designed to generate knowledge.³ Ideally, donors would increasingly make some form of evaluation mandatory (to overcome incentive problems), earmark funds for evaluations conditional on a standard of rigour, and help create a pool of expertise to carry them out.

All in all, and notwithstanding their limitations, there is plenty of scope for, and much to be gained from, applying rigorous impact-evaluation methods to trade-related interventions. The debates around the Aid for Trade Global Review last July (OECD/WTO 2011) indicated growing pressure to show results. If the Aid for Trade community fails to show results through scientifically credible methods, accountability may end up being forced on it in the form of more bureaucratic controls and strings, from which nothing would be learned. Besides, evaluation is not just a matter of reassuring donors. In environments where policy capture is widespread, it is crucial to expose wasteful and ineffective programmes. Evaluation is also an area where donor agencies can contribute to capacity-building and facilitate the sharing of experiences, since some governments, in Latin America and elsewhere, are clearly ahead of the pack. These efforts could spark a sorely needed revolution in the culture of evaluation among the trade, development, and research communities.

Authors’ note: This column is based on a book edited by Cadot et al (2011), Where to Spend the Next Million? Applying Impact Evaluation to Trade Interventions, London: CEPR and World Bank.

References

Ashenfelter, Orley, and D Card (1985), “Using the longitudinal structure of earnings to estimate the effect of training programs”, Review of Economics and Statistics 67:648-660.

Banerjee, Abhijit, and E Duflo (2009), “The Experimental Approach to Development Economics”, Annual Review of Economics 1:151-178.

Banerjee, Abhijit, and E Duflo (2011), Poor Economics: A radical rethinking of the way to fight poverty, New York: Public Affairs.

Bown, Chad, and G Porto (2008), “The WTO, Developing Country Exports, and Market Access: Firm-Level Evidence from an Unlikely Trade Preference Shock”, mimeo.

Cadot, Olivier, Ana M Fernandes, Julien Gourdon, and Aaditya Mattoo, eds. (2011), Where to Spend the Next Million? Applying Impact Evaluation to Trade Interventions, London: CEPR and World Bank.

Datt, Mohini, and Dean Yang (2011), “Half-Baked Interventions: Staggered Pre-Shipment Inspections in the Philippines and Colombia”, chapter 6 in Cadot et al, eds. (2011).

Lalonde, Robert, and R Maynard (1987), “How precise are evaluations of employment and training programs: Evidence from a field experiment”, Evaluation Review 11:428-451.

OECD/WTO (2011), Aid for Trade at a Glance: Showing Results, Report for the Third WTO Global Review of Aid for Trade, OECD.

Ravallion, Martin (2008), “Evaluation in the Practice of Development”, The World Bank, Thematic Group on Poverty, Monitoring and Impact Evaluation, Doing Impact Evaluation #11.

Ravallion, Martin and D van de Walle (2008), Land in Transition: Reform and Poverty in Rural Vietnam, Basingstoke: Palgrave Macmillan.

Volpe, Christian (2011), “Assessing the Impacts of Trade Promotion Interventions: Where Do We Stand?”, chapter 2 in Cadot et al, eds. (2011).

 


1 Source: UN-COMTRADE, mirrored LDC exports, total using SITC rev.1 nomenclature.

2 For recent examples, see eg Bown and Porto (2008).

3 A recent book by Banerjee and Duflo (2011) describes in detail how the accumulation of results from experiments can generate lessons on broad development issues.

 
