Gianni De Fraja, 23 August 2019

How would units of assessment submitted to the UK’s 2014 evaluation of scholarly research have fared if they had been assessed using the bibliometric algorithm of the agency for evaluation of research in Italian universities? This column finds a very high correlation between the two methods. In particular, the allocation of government funding to institutions that would have resulted is essentially identical to that determined by the rules used in REF2014.

Sebastian Galiani, Ramiro Gálvez, 10 June 2017

Researchers are evaluated using citation counts, often with a cut-off date. But this column shows that the lifecycle of citations differs between disciplines, with some subjects having earlier peaks or steeper declines in annual citations than others. These differences should be taken into account when evaluating researchers or institutions.

Charles Wyplosz, 17 February 2017

The IMF has just released its self-evaluation of its Greek lending, in which it admits to many mistakes. This column argues that the report misses one important error – reliance on the Debt Sustainability Analysis – but notes that the IMF’s candour should be a model for the other participants in the lending, namely, the European Commission and the ECB.

Victor Ginsburgh, 25 May 2012

Lead articles in academic journals tend to receive more citations than other articles. But does this mean they are any better? This column suggests that two-thirds of the additional citations that lead articles receive seem to be due to their coming first in the journal, while only one-third reflect genuinely higher quality.
