
Expert fiddling

Recent allegations that scientists at the Climate Research Unit have hidden and manipulated data have caused a media storm. This column argues that the practices alleged in “climategate” may be more common in academia than we think.

Are academics telling porkies? Are drugs really less dangerous than horse-riding? Are Himalayan glaciers really melting? Politicians are beginning to wonder – which can do little for their faith in evidence-based policy.

Of course, academics never could give straight answers to such questions. Qualification is inherent in any answer, and it is amplified in transfer to the political arena, which should hardly shock politicians.

And yet, politicians seem genuinely horrified by what the server at the University of East Anglia has disgorged. Shortly before the Copenhagen climate change summit last November, the university’s emails were hacked, apparently revealing that scientists had hidden and manipulated data.

Can it really be that academic experts fiddle the peer review and publication system, the very system that certifies their expertise?

Publishing and performance

Academic expertise is confirmed by publication in top journals, publication approved by peer review. ‘Twas ever thus. But the UK Research Assessment Exercise expects this system to serve another function as well: to provide the primary performance indicator by which academics, their departments, and their universities are judged – and funded.

Wherever there are indicators of performance, there is incentive to produce the indicator rather than the performance. Gaming is much more efficient. My work with Jacqueline Kam (not all of it appearing in the best journals) suggests that many papers published in top journals are written to be counted rather than read.

“Rubbish!” some say (referring to our work rather than the contents of top journals). Competition for a slot in these journals enables peer review to pick the truly excellent.

Not so. So great is competition that rejection rates usually top 90% (99% for the Harvard Business Review). Peer review cannot cope, so most papers are rejected without ever seeing a referee. The survivors are often papers from old hands – the authors of about a third of papers published by the Harvard Business Review have connections with Harvard University.

This still leaves lots of papers to be refereed by an academy disinclined to undertake anonymous work. The willing are over-exploited, and editors increasingly resort to journal alumni, whose loyalties lie with the journal rather than with the invisible college. They are seen as part of the editorial team, gatekeepers blackballing whatever does not fit, screening for what is wrong with a paper rather than what is right.

When upsetting a single referee means certain rejection, authors play safe and deliver what is most likely to please. At revision stage, many change what they know to be right to satisfy a referee.

As surely as a top journal identifies academic expertise, a high impact factor – a calculation of how often a journal’s papers are cited – identifies a top journal. This, too, is manipulated. Editors aim to publish only the most citable papers, which tend to be the vaguest and most sweeping, papers that can support almost any statement. The more they are cited, the more they will be cited, as authors struggle to fit with the literature.
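
The arithmetic being gamed is simple. As a sketch of the standard two-year definition (the symbols C and N are introduced here purely for illustration):

\[
\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) by items the journal published in year \(t\), and \(N_{t}\) is the number of citable items the journal published in year \(t\). Because both sums span only two years of a journal’s output, a modest campaign of self-citation can move the ratio appreciably.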

Least citable are research papers, anything multi-disciplinary, innovative, controversial – or negative. It does not pay to disagree. Despite the pretence that the most cited papers are the best papers, citation is tactical these days. Not so long ago, “most cited” meant bog-standard.

Our latest research reveals citability leading to clubbability. The same few authors dominate top journals. The advantage of the chosen few is that they tweak the impact factor in the right direction whenever they cite their own work. So, selflessly and shamelessly, they self-cite, co-cite and group-cite. For instance, the Journal of Marketing Research, with 60% internal citation by its few, shows almost no interest in anything that is not published in the Journal of Marketing Research. The Administrative Science Quarterly is as inward-looking as the house magazine of some large corporation.

Academics no longer look to this distorted system to identify experts. But politicians still do, blithe in their assumption that academic integrity withstands whatever is heaped upon it.

British politicians squabble about whether the new system of measuring and rewarding academic performance, the Research Excellence Framework, should emphasise citation metrics or academic impact. Neither will produce honest advice from academic experts. Let politicians consider what wonders the newspaper coverage of recent weeks will have done for the Research Excellence Framework impact of the University of East Anglia.

References

Macdonald, Stuart and Jacqueline Kam (2007a), “Ring a ring o' roses: Quality journals and gamesmanship in Management Studies”, Journal of Management Studies, 44(4): 640-655.
Macdonald, Stuart and Jacqueline Kam (2007b), “Aardvark et al.: Quality journals and gamesmanship in Management Studies”, Journal of Information Science, 33(6): 702-717.
Macdonald, Stuart and Jacqueline Kam (2008), “Quality journals and gamesmanship in Management Studies”, Management Research News, 31(8): 595-606.
Macdonald, Stuart and Jacqueline Kam (2009), “Publishing in top journals – a never-ending fad”, Scandinavian Journal of Management, 25(2): 221-224.