Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose

Nathan Prestopnik, Kevin Crowston, Jun Wang

Research output: Article › peer-review

Abstract

Two key problems for crowd-sourcing systems are motivating contributions from participants and ensuring the quality of these contributions. Games have been suggested as a motivational approach to encourage contribution, but attracting participation through game play rather than intrinsic interest raises concerns about the quality of the contributions provided. These concerns are particularly important in the context of citizen science projects, when the contributions are data to be used for scientific research. To assess the validity of concerns about the effects of gaming on data quality, we compare the quality of data obtained from two citizen science games, one a “gamified” version of a species classification task and one a fantasy game that used the classification task only as a way to advance in the game play. Surprisingly, though we did observe cheating in the fantasy game, data quality (i.e., classification accuracy) from participants in the two games was not significantly different. As well, data from short-time contributors was also at a usable level of accuracy. Finally, learning did not seem to affect data quality in our context. These findings suggest that various approaches to gamification can be useful for motivating contributions to citizen science projects.

Language: English (US)
Pages: 254-268
Number of pages: 15
Journal: Computers in Human Behavior
Volume: 68
DOI: 10.1016/j.chb.2016.11.035
State: Published - Mar 1 2017

Keywords

  • Citizen science
  • Crowdsourcing
  • Data quality
  • Games
  • Gamification

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Human-Computer Interaction
  • Psychology (all)

Cite this

Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose. / Prestopnik, Nathan; Crowston, Kevin; Wang, Jun.

In: Computers in Human Behavior, Vol. 68, 01.03.2017, p. 254-268.

Research output: Article › peer-review

@article{dcb443ef8f03480ca889878d2f927265,
  title     = "Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose",
  abstract  = "Two key problems for crowd-sourcing systems are motivating contributions from participants and ensuring the quality of these contributions. Games have been suggested as a motivational approach to encourage contribution, but attracting participation through game play rather than intrinsic interest raises concerns about the quality of the contributions provided. These concerns are particularly important in the context of citizen science projects, when the contributions are data to be used for scientific research. To assess the validity of concerns about the effects of gaming on data quality, we compare the quality of data obtained from two citizen science games, one a “gamified” version of a species classification task and one a fantasy game that used the classification task only as a way to advance in the game play. Surprisingly, though we did observe cheating in the fantasy game, data quality (i.e., classification accuracy) from participants in the two games was not significantly different. As well, data from short-time contributors was also at a usable level of accuracy. Finally, learning did not seem to affect data quality in our context. These findings suggest that various approaches to gamification can be useful for motivating contributions to citizen science projects.",
  keywords  = "Citizen science, Crowdsourcing, Data quality, Games, Gamification",
  author    = "Nathan Prestopnik and Kevin Crowston and Jun Wang",
  year      = "2017",
  month     = "3",
  doi       = "10.1016/j.chb.2016.11.035",
  volume    = "68",
  pages     = "254--268",
  journal   = "Computers in Human Behavior",
  issn      = "0747-5632",
  publisher = "PERGAMON-ELSEVIER SCIENCE LTD",
}

TY  - JOUR
T1  - Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose
T2  - Computers in Human Behavior
AU  - Prestopnik, Nathan
AU  - Crowston, Kevin
AU  - Wang, Jun
PY  - 2017/3/1
Y1  - 2017/3/1
N2  - Two key problems for crowd-sourcing systems are motivating contributions from participants and ensuring the quality of these contributions. Games have been suggested as a motivational approach to encourage contribution, but attracting participation through game play rather than intrinsic interest raises concerns about the quality of the contributions provided. These concerns are particularly important in the context of citizen science projects, when the contributions are data to be used for scientific research. To assess the validity of concerns about the effects of gaming on data quality, we compare the quality of data obtained from two citizen science games, one a “gamified” version of a species classification task and one a fantasy game that used the classification task only as a way to advance in the game play. Surprisingly, though we did observe cheating in the fantasy game, data quality (i.e., classification accuracy) from participants in the two games was not significantly different. As well, data from short-time contributors was also at a usable level of accuracy. Finally, learning did not seem to affect data quality in our context. These findings suggest that various approaches to gamification can be useful for motivating contributions to citizen science projects.
AB  - Two key problems for crowd-sourcing systems are motivating contributions from participants and ensuring the quality of these contributions. Games have been suggested as a motivational approach to encourage contribution, but attracting participation through game play rather than intrinsic interest raises concerns about the quality of the contributions provided. These concerns are particularly important in the context of citizen science projects, when the contributions are data to be used for scientific research. To assess the validity of concerns about the effects of gaming on data quality, we compare the quality of data obtained from two citizen science games, one a “gamified” version of a species classification task and one a fantasy game that used the classification task only as a way to advance in the game play. Surprisingly, though we did observe cheating in the fantasy game, data quality (i.e., classification accuracy) from participants in the two games was not significantly different. As well, data from short-time contributors was also at a usable level of accuracy. Finally, learning did not seem to affect data quality in our context. These findings suggest that various approaches to gamification can be useful for motivating contributions to citizen science projects.
KW  - Citizen science
KW  - Crowdsourcing
KW  - Data quality
KW  - Games
KW  - Gamification
UR  - http://www.scopus.com/inward/record.url?scp=84998537728&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84998537728&partnerID=8YFLogxK
U2  - 10.1016/j.chb.2016.11.035
DO  - 10.1016/j.chb.2016.11.035
M3  - Article
VL  - 68
SP  - 254
EP  - 268
JO  - Computers in Human Behavior
JF  - Computers in Human Behavior
SN  - 0747-5632
ER  -