
Storm crowds: Evidence from Zooniverse on crowd contribution design

The design of the contribution mechanism for volunteers may affect the sustainability of crowdsourcing platforms. This column investigates how a change to the Zooniverse platform that prevented volunteers from submitting incomplete entries affected the number and quality of contributions. The results suggest that the total number of completed tasks by anonymous volunteers increased. The change in contribution design successfully encouraged them to channel their efforts into delivering fewer, higher-quality contributions, as intended.

Mass internet adoption means that crowdsourcing – the act of taking a task once done by an employee and outsourcing it to a large, undefined group of non-experts (Howe 2006) – has proliferated in recent years. The archetypal example of crowdsourcing is Wikipedia, in which volunteers work together to write encyclopaedia entries. Crowdsourcing is prevalent in many other domains too, including business, government, and science. Companies like Dell, Lego and Starbucks have used crowdsourcing among their customers to generate new product ideas. Government initiatives – such as Better Reykjavik, Future Melbourne and the Cairo Transport App Challenge – also employ crowdsourcing among residents to improve city planning and traffic congestion. More recently, crowdsourcing has also been used in science, enabling non-scientists to participate in real scientific research. Zooniverse, the world’s largest crowdsourced science platform, has more than 1 million volunteers who perform data analysis to assist scientists.

Why is crowdsourcing sustainable?

Given the recent proliferation of crowdsourcing, we should ask what motivates volunteers to participate, and why it is sustainable. Several studies have found that volunteers have prosocial motives (Rashid et al. 2006, Schroer and Hertel 2009, Peddibhotla and Subramani 2007). This suggests that volunteers contribute because they want their efforts to benefit others.

One potentially important factor that has been overlooked by existing research is how the contribution design of crowdsourcing platforms matters for their sustainability – specifically, how contribution design affects the level and quality of volunteer contributions. For example, consider the difference in contribution design between Wikipedia and the Encyclopaedia Britannica. On Wikipedia, contributors can do anything from fixing typos to providing a complete article (the term ‘wiki’ refers precisely to this feature of the site). By contrast, entries to the Encyclopaedia Britannica were typically sourced from experts for little or no monetary reward, but those experts were required to provide a complete article (Greenstein and Zhu 2017). This motivated us to study whether the extent to which contributing tasks are bundled together or can be carried out selectively affects the level and quality of contributions in crowdsourcing.

Zooniverse

As the world’s largest crowdsourced science platform, Zooniverse is a great place to examine this question. Launched in 2009, Zooniverse has more than 1 million volunteers working on 59 projects, across 11 scientific disciplines, spanning the humanities, natural and social sciences. Volunteers, known as 'Zooites', contribute by analysing images and performing data processing tasks that prepare data for scientific research.

In the Zooniverse Cyclone Center, volunteers analyse satellite images of storms to help climatologists understand and better predict storm behaviour. In June 2013, nine months after launch, the Cyclone Center switched from letting volunteers decide how much to contribute on any given entry to mandating that only complete entries would count. The idea was to increase the number and quality of complete entries that were submitted. This change allowed us to consider the divisibility of contributions as a key element of design.

Prior to the 2013 change, analysis consisted of two tasks:

  • Task 1: A required section of three questions, which a volunteer had to answer to submit a contribution
  • Task 2: Additional optional questions that a volunteer could choose whether or not to complete.

After the format change, Task 2 also became required. Thus, the format change effectively decreased contribution divisibility by bundling together two contribution tasks that previously could be performed selectively. This change allowed us to explore the effect of contribution divisibility on contribution levels, and on quality.

What does theory predict would happen?

The first prediction is that by making contributions less divisible, the total number of analyses that volunteers contributed would go down. This is easy to understand with normal economic goods – make something cost more to do, and people do less of it.

For a public good, things are a little more complex. If people are pro-social in their motivations, this might not change their behaviour at all. But we knew some people were not completing entries, so the question was: Why not?

Zooniverse hoped that the number of complete edits would go up – that is, when forced to complete an entry to count as a contribution, more volunteers would fall into line. Completion was personally costly, and this may have been why volunteers didn’t do it before. If it were made too costly, perhaps the total number of complete edits would fall. After all, volunteers didn't know how costly the second task would be until they flipped the page. So perhaps people wouldn’t bother to start a task if they didn’t know what they were volunteering for.

Empirical evidence

The quasi-experiment, plus the fact that the Cyclone Center was just one project on the Zooniverse platform, allowed us to use a difference-in-differences identification strategy. We compared the average change in contribution levels before and after the format change in Cyclone Center to the average change that was predicted if the format had stayed the same. We captured the relevant counterfactual by observing contribution levels in Galaxy Zoo, a Zooniverse project similar to Cyclone Center that did not change format.
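To make the identification strategy concrete, here is a minimal difference-in-differences sketch in Python. The data, column names, and time aggregation below are hypothetical illustrations, not the study's actual estimation, which uses the full Zooniverse edit histories and additional controls.

```python
# Minimal difference-in-differences sketch (hypothetical data and variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per volunteer-period, with edit counts for
# Cyclone Center (treated) and Galaxy Zoo (control), before and after June 2013.
df = pd.DataFrame({
    "edits":   [12, 9, 4, 3, 10, 11, 9, 10],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = Cyclone Center, 0 = Galaxy Zoo
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = after the format change
})

# The coefficient on treated:post is the difference-in-differences estimate:
# the change in Cyclone Center edits beyond what the Galaxy Zoo trend predicts.
model = smf.ols("edits ~ treated + post + treated:post", data=df).fit()
print(model.params)
```

The control group (Galaxy Zoo) supplies the counterfactual trend, so the interaction term isolates the effect attributable to the format change rather than to platform-wide shifts in volunteer activity.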

After the Cyclone Center format change created lower contribution divisibility, the number of total edits per volunteer was significantly lower than predicted levels had there been no change. We also found that, for complete edits, registered and anonymous volunteers responded differently. Registered volunteers did not significantly change the number of complete edits they contributed post-change, while anonymous volunteers contributed significantly more of them. This change improved the quality of complete edits in Cyclone Center, because anonymous volunteers performed better than registered volunteers in answering storm-related questions.

The difference in reactions between registered and anonymous volunteers may mean that registration status is a proxy for pro-social motivation. Recall that if a volunteer was motivated by pro-social factors, this change may not have meant much to them, as they were already contributing a high amount ex ante. If they were not so motivated, then cost may have been a factor. So, unregistered volunteers may have reallocated their efforts from many incomplete analyses to fewer complete ones, with the end result being what the designers wanted.

Strategic implications

Our research has important strategic implications for crowdsourcing platforms, because it suggests that the choice of contribution design – a feature that is under the direct control of the platform – matters for the level and quality of volunteer contributions.

Specifically, the extent to which contributing tasks are bundled together or can be carried out selectively is an important factor. When deciding on contribution design, crowdsourcing platforms should therefore consider three questions: What constitutes a useful contribution for the platform? What type of knowledge is required to make useful contributions? And what is the skill level of the volunteers?

References

Greenstein, S and F Zhu (2017), “Do Experts or Crowd-Based Models Produce More Bias? Evidence from Encyclopedia Britannica and Wikipedia,” Management Information Systems Quarterly, forthcoming.

Howe, J (2006), “The rise of crowdsourcing,” Wired, 1 June.

Peddibhotla, N B and M R Subramani (2007), “Contributing to public document repositories: A critical mass theory perspective,” Organization Studies 28(3): 327-346.

Rashid, A M, K Ling, R D Tassone, P Resnick, R Kraut, and J Riedl (2006), “Motivating participation by displaying the value of contribution,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2006, Montreal, Canada: 955-958.

Schroer, J, and G Hertel (2009), “Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It,” Media Psychology 12: 96-120.
