
Making it count: Incentives, effort, and performance in higher education

Feedback has been found to improve exam performance in the context of higher education, but demand for feedback is low among students when obtaining it requires unrewarded effort. This column evaluates how the provision of extrinsic incentives affects students’ effort and performance. Having online learning assessments count towards final grades is found to trigger large increases in participation and better subsequent exam performance. Given the low cost of these interventions, they offer particular promise in higher education.

UK universities, like others around the world, operate in a demand-driven system, competing intensely to attract both home and overseas students. The UK government has recently introduced the Teaching Excellence Framework to ensure the delivery of high-quality teaching in higher education; it will also be the basis for future tuition fee increases.

A key factor measured in the Teaching Excellence Framework is students’ satisfaction with teaching quality and with the feedback they receive on their performance and progress. A simple way to provide such feedback, especially in larger classes, is through online learning assessments in the form of quizzes. Such tools may also help reduce dropout rates in higher education (currently 6% UK-wide, and rising) and prevent students from failing courses. But do they work in practice? And does feedback alone increase students’ performance, or does it need to be coupled with incentives?

Bandiera et al. (2015) studied the role of feedback in higher education. They found that feedback does indeed improve students’ future exam performance. The mean impact corresponds to 13% of a standard deviation in test scores. The effect is stronger for more able students and for students who have less information about their performance in the new academic environment. In other words, students may be uncertain how their effort translates into test scores, and thus benefit from feedback. Overall, “the provision of feedback might be a cost-effective means to increase students' exam performance” (Bandiera et al. 2015).

In a new study, we explore this further (Chevalier et al. 2017). We offer voluntary feedback via online quizzes to students at a college of the University of London. In spite of the strong preference for feedback that students express, for example in the National Student Survey, in practice students’ demand for feedback is low when obtaining it requires unrewarded effort: only 29% of students participate in online assessments that provide feedback when participation is voluntary.

If students do not take up feedback opportunities without further reward, feedback may not deliver the substantial performance increase described above. We therefore evaluate how the provision of extrinsic incentives affects students’ effort – for example, their participation in online computer learning assessments (quizzes). First-year students were provided with weekly quizzes that tested their understanding of the material covered in their microeconomics and macroeconomics courses. The quizzes are designed to foster continuous learning, so that quiz participation may help increase students’ learning and performance. While the quiz participation rate in non-incentivised weeks is only 29%, simply labelling the quizzes compulsory in designated weeks raised participation by 69 percentage points. Clearly, formative assessment and the possibility of gaining feedback are not enough to motivate students to provide additional effort.

In the following cohort, we changed the rules so that some quizzes counted towards students’ final marks. We found that even small assessment weights suffice to induce extra effort – weights of only 2.5% and 5% of the overall course grade trigger large participation increases of 41 and 62 percentage points, respectively. Small assessment weights can thus trigger large effort responses. Compulsion is even more effective, but might be more costly to scale up because of the administrative costs of checking the validity of excuses for non-participation.
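
For readers who prefer levels to changes, a minimal back-of-envelope sketch (illustrative only, not the authors' code) converts the reported percentage-point increases into implied participation rates, starting from the 29% voluntary baseline quoted above:

    # Convert the reported effects into implied participation rates;
    # the baseline and increases are the figures quoted in this column.
    baseline = 0.29  # participation when quizzes are voluntary and unweighted

    increase_pp = {
        "2.5% assessment weight": 0.41,
        "5% assessment weight": 0.62,
        "compulsory (no weight)": 0.69,
    }

    for condition, delta in increase_pp.items():
        print(f"{condition}: {baseline + delta:.0%} implied participation")

    # Prints roughly 70%, 91% and 98% implied participation, respectively.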

Effort responses to incentives may differ by student ability (as measured by university entry grades), for example due to differences in intrinsic motivation. Our results confirm Bandiera et al.’s (2015) finding that high-ability students display high participation rates in quizzes even in the absence of assessment weighting. This leads us to our third finding – assessment weighting is particularly effective in increasing effort among low- and median-ability students, without any detrimental effect on effort among high-ability students. In short, students react to incentives and provide more effort when effort is rewarded by exam points.

Recent criticisms of the use of incentives have warned against the danger of crowding out intrinsic motivation: students may either not respond to incentives at all, or they may simply shift effort towards activities that carry assessment weight rather than increase effort overall. Effort is notoriously difficult to measure, as it manifests itself in many forms – the level of endeavour in quizzes, lectures, seminars, and self-study time – and further varies in timing and intensity. We measure some of these, and all results point in the same direction: students do not displace quiz effort between incentivised and non-incentivised weeks, nor do we find evidence of effort displacement inter-temporally or across subjects. On the contrary, we find indications of positive spillovers on effort between weeks, on performance in other courses in the same year and, marginally, on a related course a year later.

Even if assessment weighting helps increase students’ effort, is this effort productive in terms of learning and achievement? Here, the study relies on the difference in the conditions faced by the two cohorts. Using statistical techniques to make the two cohorts as comparable as possible, we conclude that the cohort affected by assessed quizzes performs better in final exams – three assessed quizzes with a total weight of 10% of the final grade not only increased quiz participation, but also raised grades in in-term tests by 0.27 of a standard deviation, which amounts to around 3 percentage points in standard student exams. This is a very large effect, comparable in magnitude to the effect of large financial rewards for improved or high grades summarised in Lavecchia et al. (2016). However, assessment weighting can be implemented at much lower cost and may thus be easier to scale up given the limited resources of higher education institutions.
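
As a rough check on these magnitudes, the two reported figures together imply the standard deviation of exam marks in this setting (a back-of-envelope calculation, not a number reported in the column):

\[
0.27\,\sigma_{\text{exam}} \approx 3 \text{ percentage points}
\quad\Longrightarrow\quad
\sigma_{\text{exam}} \approx \frac{3}{0.27} \approx 11 \text{ percentage points.}
\]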

The lesson from this research is that continuous assessment via weekly computer-aided quizzes has a significant impact only if the quizzes count towards students’ final grades. Making such assessments summative induces student effort and effectively increases exam scores.

Finally, the differential effort response of higher- and lower-ability students reduces the grade gap between them by 17%. Assessment weighting thus appears to help ‘level the playing field’.

Altogether, by providing an effective set of incentives, universities could increase the fraction of students obtaining feedback. This is likely to improve their scores in the Teaching Excellence Framework and the National Student Survey, their position in national league tables and, most importantly, the exam scores of their students.

References

Bandiera, O, V Larcinese and I Rasul (2015), “Blissful ignorance? Evidence from a natural experiment on the effect of individual feedback on performance”, Labour Economics 34: 13-25.

Chevalier, A, P Dolton and M Lührmann (2017), “Making it count: Incentives, student effort and performance”, Journal of the Royal Statistical Society, Series A.

Lavecchia, A M, H Liu and P Oreopoulos (2016), “Behavioral economics of education: Progress and possibilities”, in E A Hanushek, S Machin and L Woessmann (eds), Handbook of the Economics of Education, Volume 5: 1-74.

Neves, J and N Hillman (2016), The 2016 Student Academic Experience Survey, Higher Education Policy Institute.
