
Exploration in science and ranking journals by novelty

Academics get ahead partly on the strength of how often their papers are cited. This column argues that the pressure to publish research that garners many citations stifles scientific progress by discouraging exploration. Yet in the absence of a plausible alternative for measuring the novelty of scientific publications, citation-based measures have persisted. The column presents a new way to rank scientific journals based on novelty rather than impact, which could encourage scientists to pursue more innovative work.

Scientists receive honours and promotions based on how influential their work is and the prestige of the journals in which they publish their work. Influence, in turn, is measured by citations to a scientist’s papers in other published work.

It has become surprisingly common, in every corner of science, to complain about the obsession with citation-based rankings. Even the editor-in-chief of the most highly cited scientific journal, Science, has warned that such metrics block innovation and lead to ‘me-too’ science (Alberts 2013).

The trouble with citations

What is the problem with rankings based on citations? For one, they do not depend at all on what kind of science is being pursued – they make no distinction between novel and conventional science. Though a highly cited paper might play with novel ideas, there is no intrinsic reason for it to do so.

While it is true that successful novel research yields more citations than successful conventional research, this difference is not enough to compensate for the risk associated with pursuing a genuinely new idea (Foster et al. 2015). Conventional science is much safer – it’s easier to get funded and published.

In the absence of an explicit incentive for exploration, too little novel science takes place (Besancenot and Vranceanu 2015). Moreover, given that people often enter science because they want to pursue the unknown, basing scientific recognition on citations can kill the intrinsic motivation to pursue science in the first place (Osterloh and Frey 2015).

The rise of citation-based metrics over the past three decades may already be changing how scientists work. Evidence from biomedicine shows that during this time, scientists have become less likely to pursue novel research paths (Foster et al. 2015). The consequence is that science progresses at a slower rate than it would if we did not care about citations so much.

A way out of the abyss

To address this problem, we recently developed a new journal ranking approach that rewards playing with new ideas, rather than influence (Packalen and Bhattacharya 2015). Journals are ranked based on their propensity to publish articles that build on ideas that are relatively new. Journals that publish articles that build only on well-established knowledge – no matter how influential – are not rewarded in our ranking.

We construct the new ranking based on the words and word sequences (e.g. polymerase chain reaction, microprocessor, randomized controlled trial) that appear in each scientific article. The rationale is that new words and new word sequences represent new ideas. We use a thesaurus to ensure that our results are robust to the fact that some new words are merely synonyms for older ideas. A minimal sketch of this kind of idea extraction appears below.
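The sketch below illustrates one simple way to extract candidate "ideas" (words and short word sequences) from article text and to collapse synonyms onto a canonical form. It is an illustration only, not the authors' actual pipeline; the function names, the choice of n-gram length, and the toy synonym map are all assumptions.

```python
import re

def extract_ngrams(text, max_n=3):
    """Extract candidate 'ideas': single words and short word sequences (n-grams)."""
    words = re.findall(r"[a-z][a-z\-]*", text.lower())
    ngrams = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            ngrams.add(" ".join(words[i:i + n]))
    return ngrams

def canonicalise(ngram, synonym_map):
    """Map synonym spellings onto a single canonical idea (thesaurus lookup)."""
    return synonym_map.get(ngram, ngram)

# Toy usage; in practice the synonym map would be built from a medical thesaurus.
synonym_map = {"pcr": "polymerase chain reaction"}
text = "PCR amplifies a DNA sequence through repeated thermal cycling."
ideas = {canonicalise(g, synonym_map) for g in extract_ngrams(text)}
```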

For each journal, we calculate a neophilia index based on what fraction of articles published in it mention ideas that are relatively new. The vintage of each idea is determined based on the year in which the idea was first mentioned in the scientific literature. Journals that publish research that tries out these new ideas soon after their arrival receive a high neophilia index.
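A rough sketch of that calculation is given below. It computes each idea's vintage as the first year the idea appears anywhere in the corpus, flags an article as "novel" if it mentions at least one sufficiently recent idea, and normalises each journal's share of novel articles by the corpus-wide share. The input format, the recency window, and the normalisation details are assumptions made for illustration; the actual methodology is described in Packalen and Bhattacharya (2015).

```python
from collections import defaultdict

def first_mention_years(articles):
    """Vintage of each idea: the earliest year it appears anywhere in the corpus.
    `articles` is a list of (journal, year, ideas) tuples, where `ideas` is a set."""
    vintage = {}
    for _journal, year, ideas in articles:
        for idea in ideas:
            if idea not in vintage or year < vintage[idea]:
                vintage[idea] = year
    return vintage

def neophilia_index(articles, window=5):
    """Share of a journal's articles that build on at least one idea first mentioned
    within the last `window` years, divided by the same share over all articles."""
    articles = list(articles)
    vintage = first_mention_years(articles)
    journal_hits = defaultdict(int)
    journal_totals = defaultdict(int)
    all_hits = all_total = 0
    for journal, year, ideas in articles:
        novel = any(year - vintage[idea] <= window for idea in ideas)
        journal_totals[journal] += 1
        journal_hits[journal] += novel
        all_total += 1
        all_hits += novel
    baseline = all_hits / all_total
    return {j: (journal_hits[j] / journal_totals[j]) / baseline for j in journal_totals}
```

Under this normalisation, a journal with an index of 1.50 publishes articles that are 50% more likely than the average article to build on a relatively new idea.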

We implemented the new ranking for 126 journals in medicine. Table 1 shows the neophilia ranking for the ten best-known journals in medicine – best known because they are the ten most highly cited journals in the field.1 A neophilia index of 1.50 means that articles published in that journal are 50% more likely to build on a relatively new idea than the average published article in medicine.

Table 1 Neophilia rankings for ten highly cited journals among all 126 journals in general and internal medicine

The neophilia index is calculated based on articles published during 1980-2013. Red (blue) indicates a high (low) propensity to publish innovative articles.

Even across these prestigious journals, there is considerable variation in the neophilia index. Some of them publish innovative papers much more often than others. Moreover, for two of these prestigious journals the neophilia index is lower than it is for articles published in less prestigious journals. Prestigious journals thus do not necessarily promote innovative science.

We also found that for most journals there is considerable persistence in the neophilia index over time. This suggests that the variation in the neophilia index reflects genuine editorial differences in regard to exploration rather than random factors.

Overall, we find a positive relationship between the neophilia index and citation ranking – on average, more cited journals publish more innovative work. But at the same time, some less cited journals publish a lot of innovative work and some highly cited journals shy away from innovative work. Clearly, the two ranking approaches capture different aspects of science.

More exploration in science

Our hope is that publishing the neophilia ranking for medicine and other fields will lead to more innovative science. It is hard to encourage a behaviour without first measuring it. The ranking provides a visible signal to the scientific community that a journal with a high ranking values innovation. Because scientists long for recognition from their peers, the new ranking should make the decision to try out innovative but risky ideas easier.

Once scientists start paying attention to the new rankings, journals will do the same. A positive feedback loop encouraging innovative experimentation will result. Adoption of the neophilia ranking by university administrators and grant agencies as part of tenure, promotion, and funding decisions will reinforce this positive feedback loop.

As with citation-based rankings, the novelty-based ranking can have unintended consequences. For example, scientists and journals may be tempted to merely mention new ideas rather than actually incorporate them into their work. For most individuals and journals, the potential reputational costs should prevent this. Moreover, algorithms will be developed to detect such behaviour, as will new, more robust versions of the ranking. These developments will mirror the proliferation of the various citation-based indexes.

Everything is measurable, including your ingenuity

In the age of relentless quantification, scientists can ill afford to hide behind the excuse that the ingenuity of their work cannot be measured. Novelty – like impact – can and should be quantified. Citation-based indexes will continue to have their place, too; scientific impact is still important. Having a suite of indexes that capture different aspects of science should facilitate healthier science.

References

Alberts, B (2013), “Impact Factor Distortions”, Science 340: 787.

Besancenot, D and R Vranceanu (2015), “Fear of Novelty: A Model of Scientific Discovery with Strategic Uncertainty”, Economic Inquiry 53(2): 1132-9.

Foster, J G, A Rzhetsky and J A Evans (2015), “Tradition and Innovation in Scientists’ Research Strategies”, American Sociological Review 80(5): 875-908.

Osterloh, M and B S Frey (2015), “Ranking Games”, Evaluation Review 32: 102-29.

Packalen, M and J Bhattacharya (2015), “Neophilia Ranking of Scientific Journals”, NBER Working Paper No. 21579.

Endnotes

1 You can find the results for the full set of journals at www.nber.org/papers/w21579.
