Do more online instructional ratings lead to better prediction of instructor quality?

Shane Sanders, Bhavneet Walia, Joel Potter, Kenneth W. Linna

Research output: Contribution to journal › Article › peer-review


Abstract

Online instructional ratings are taken by many with a grain of salt. This study analyzes the ability of such ratings to predict the official (university-administered) instructional ratings of the same university instructors. Given self-selection among raters, we further test whether more online ratings of instructors lead to better prediction of official ratings in terms of both R-squared value and root mean squared error. Lastly, we test and correct for heteroskedastic error terms in the regression analysis, yielding the first robust estimates on the topic. Despite having a starkly different distribution of values, online ratings explain much of the variation in official ratings. This conclusion strengthens, and root mean squared error typically falls, across regression subsets in which instructors have a larger number of online ratings. Though (public) online ratings do not mimic the results of (semi-private) official ratings, they provide a reliable source of information for predicting official ratings. There is strong evidence that this reliability increases with online rating usage.
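To make the described analysis concrete, below is a minimal sketch (not the authors' code or data) of the kind of regression the abstract outlines: official ratings regressed on online ratings with heteroskedasticity-robust standard errors, with R-squared and root mean squared error compared across subsets of instructors who have at least k online ratings. The DataFrame, column names, and simulated data are hypothetical placeholders chosen only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per instructor. The noise in official ratings
# shrinks as the number of online ratings grows, purely to illustrate the
# self-selection story; this is simulated, not the study's data.
rng = np.random.default_rng(0)
n = 300
n_online = rng.integers(1, 60, size=n)                 # count of online ratings
online = np.clip(rng.normal(3.5, 0.8, size=n), 1, 5)   # mean online rating
official = np.clip(2.0 + 0.5 * online +
                   rng.normal(0, 1.5 / np.sqrt(n_online)), 1, 5)
df = pd.DataFrame({"official": official, "online": online,
                   "n_online": n_online})

for k in (1, 10, 20, 40):                 # minimum number of online ratings
    sub = df[df["n_online"] >= k]
    X = sm.add_constant(sub["online"])    # intercept + slope on online rating
    fit = sm.OLS(sub["official"], X).fit(cov_type="HC1")  # robust (HC1) SEs
    rmse = np.sqrt(np.mean(fit.resid ** 2))
    print(f"n_online >= {k:2d}: n={len(sub):3d}  "
          f"R2={fit.rsquared:.3f}  RMSE={rmse:.3f}")
```

Under these assumptions, R-squared rises and RMSE falls as the minimum rating count increases, mirroring the pattern the abstract reports; with real data the subset thresholds and covariance estimator would follow the paper's specification.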

Original language: English (US)
Pages (from-to): 1-6
Number of pages: 6
Journal: Practical Assessment, Research and Evaluation
Volume: 16
Issue number: 2
State: Published - Jan 22 2011
Externally published: Yes

ASJC Scopus subject areas

  • Education
