Rubrics to assess information literacy: An examination of methodology and interrater reliability

Research output: Contribution to journal › Article › peer-review

94 Scopus citations


Academic librarians seeking to assess information literacy skills often rely on testing as their primary means of evaluation. Educators have long recognized the limitations of tests, and these limitations lead many educators to prefer rubric assessment over test-based evaluation. In contrast, many academic librarians are unfamiliar with the benefits of rubrics, and those who have explored information literacy rubrics have not taken a rigorous approach to methodology and interrater reliability. This article seeks to remedy these omissions by describing the benefits of a rubric-based approach to information literacy assessment, identifying a methodology for using rubrics to assess information literacy skills, and analyzing the interrater reliability of information literacy rubrics in the hands of university librarians, faculty, and students. Study results demonstrate that Cohen's κ can be effectively employed to check interrater reliability. The study also indicates that rubric training sessions improve interrater reliability among librarians, faculty, and students.
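The abstract's claim that Cohen's κ can check interrater reliability can be illustrated with a minimal sketch. κ measures chance-corrected agreement between two raters as κ = (p_o − p_e) / (1 − p_e); the rubric scores below are invented for illustration, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' marginal distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric ratings (performance levels 1-3) from two raters.
librarian = [3, 2, 2, 1, 3, 2, 1, 3, 2, 2]
faculty   = [3, 2, 1, 1, 3, 2, 2, 3, 2, 2]
print(round(cohens_kappa(librarian, faculty), 3))  # → 0.677
```

Here the raters agree on 8 of 10 items (p_o = 0.8), but because both use the middle category heavily, 0.38 agreement is expected by chance, yielding κ ≈ 0.68 rather than the raw 0.8.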

Original language: English (US)
Pages (from-to): 969-983
Number of pages: 15
Journal: Journal of the American Society for Information Science and Technology
Issue number: 5
State: Published - May 2009
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Human-Computer Interaction
  • Computer Networks and Communications
  • Artificial Intelligence

