Paying universities to lower their standards

Manuel Bagues, Natalia Zinovyeva, Mauro Sylos Labini 10 September 2008



In March 2000 the European Council, meeting in Lisbon, set the goal of making the EU the most competitive and dynamic knowledge-based economy in the world by 2010. Two years before that deadline, this goal seems as unattainable as ever for most European countries. Nowhere is this more visible than in the university sector. Indeed, with the notable exception of the UK, European universities perform poorly in most international education rankings. According to both the Times Higher Education Supplement and the Shanghai Jiao Tong university rankings, only four institutions in continental Europe rank among the top 50 universities in the world. Italian universities are probably among the most worrying cases: their graduates experience long non-employment spells after graduation and earn relatively low wages compared with their European peers.

To cope with these challenges, several European countries, including Italy, have recently reformed their university systems, though without giving up governments’ (quasi) monopoly. In general, the reforms have given universities more autonomy and more powerful incentives. These incentives have often been implemented through “input funding” contingent on the number of students and “output funding” based on the number of diplomas granted (Jacobs and Van der Ploeg, 2006). As these authors note, some of these rules may undermine educational quality: in most cases quantity rather than quality is rewarded, owing to the difficulty of measuring the latter. Universities might then respond by lowering their standards, as long as the resulting reputation loss is not too severe. Yet, as Mas-Colell (2003) points out, “Europe has not yet developed muscular reputation effects. We are still, on the whole, dominated by a generic culture of credentialisation where what is important is to have a credential to exercise […] and it is much less significant who the issuer of his credential is.”

Our analysis of the Italian case confirms this concern (Bagues, Sylos Labini and Zinovyeva, 2008). We show that by funding universities according to the number of exams their students pass, the Italian government is favouring those universities that add less value: it simply channels more funds to universities with lower standards.

The Italian university funding system

Before 1993, the Italian Ministry of Education allocated funds largely on a historical basis. In 1993, each university became an autonomous entity with its own budget. Moreover, a set of rules was introduced for allocating funds across universities, with about 90 per cent assigned on a historical basis and the rest via an “equalisation component”. This “equalisation component” was supposed to progressively replace the historical part. In addition to reducing public funding disparities across universities and across disciplines, the system aimed to promote quality. In particular, teaching quality was supposed to be rewarded by linking funding to the number of exams passed by students. More precisely, the funds allocated to a university increase with its total number of full-time-equivalent (FTE) students, defined as the ratio between the number of exams passed and the number of exams that students should have taken.
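The mechanics of the FTE-based component can be sketched with a small numerical example. This is a simplified illustration with hypothetical enrolment figures and a hypothetical budget; the actual ministerial formula includes further weights and adjustments not modelled here.

```python
# Sketch of the FTE-based funding component, using hypothetical numbers.
# The actual ministerial formula includes weights not modelled here.

def fte_students(exams_passed: int, exams_expected: int) -> float:
    """Full-time-equivalent students: the ratio of exams passed to
    the exams students should have taken."""
    return exams_passed / exams_expected

# Two hypothetical universities with identical enrolments but different
# pass rates: the easier-grading one accumulates more FTE students.
uni_a = fte_students(exams_passed=8_000, exams_expected=10_000)  # 0.8
uni_b = fte_students(exams_passed=6_000, exams_expected=10_000)  # 0.6

# Hypothetical pot of equalisation funds, split in proportion to FTE.
budget = 1_000_000
share_a = budget * uni_a / (uni_a + uni_b)
share_b = budget * uni_b / (uni_a + uni_b)
print(round(share_a), round(share_b))  # university A receives more
```

The point of the example is that nothing in the rule distinguishes a high pass rate driven by good teaching from one driven by lenient grading: both raise the FTE count, and hence the funding share, by the same amount.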

Are high-grading universities producing better graduates?

Measuring universities’ quality by the number of exams that students pass presumes that grading standards are similar across institutions. However, in the absence of quality-assurance mechanisms, the number of FTE students might be driven by how easy it is to pass an exam at a given institution rather than by students’ true quality. We test this hypothesis using data from several editions of a comprehensive triennial survey covering a representative sample of Italian university graduates. In particular, we study (a) whether grading standards vary across universities and departments, and (b) whether graduates from universities that perform better according to the number of FTE students also perform better in the labour market and in the external qualification exams required to enter several professions.

We find a significant negative correlation between departments’ average grades and the labour market outcomes of their graduates: graduating from a high-grading university is associated with a higher unemployment probability and lower job satisfaction. This negative relationship is robust to an extensive set of university and individual controls, including socio-economic background, high school grades, province of origin and the duration of studies. Furthermore, graduates from departments with high average grades perform worse in external professional qualification exams. These results suggest that the observed differences in grades across institutions largely reflect differences in grading standards. Consistent with this, we also find that graduates from universities with a relatively high number of FTE students tend to do significantly worse in the labour market.

Policy conclusions

Our findings raise concerns about the effectiveness of the funding mechanism. The evidence suggests that a financing scheme meant to reward universities that produce higher value added is, instead, favouring universities with lower standards. Policy makers should be very cautious about using students’ academic performance as a proxy for university value added when the universities themselves measure that performance. In light of this evidence, quality-assurance mechanisms (e.g., a system based on external examiners, as in the UK) are a necessary complement to any output funding scheme of this type. And given that obtaining objective evaluations from external examiners may itself be problematic and costly, it is also necessary to foster reputation effects in the market for higher education, for instance by publicising data on how graduates of different universities and disciplines perform in the labour market.


Bagues, M., M. Sylos Labini and N. Zinovyeva (2008), “Differential Grading Standards and University Funding: Evidence from Italy”, CESifo Economic Studies 54(2), 149–176.

Jacobs, B. and F. Van der Ploeg (2006), “Guide to Reform of Higher Education: A European Perspective”, Economic Policy 21(47), 535–592.

Mas-Colell, A. (2003), “The European Space of Higher Education: Incentive and Governance Issues”, Rivista di Politica Economica 93(11/12), 9–27.

Perotti, R. (2002), “The Italian University System: Rules vs. Incentives”, paper presented at the first conference on Monitoring Italy, ISAE, Rome, January.




Associate Professor at Aalto University, Helsinki, Finland

Research Fellow, Foundation for Applied Economics Research

Assistant Professor, University of Pisa