Abstract
Academic librarians seeking to assess information literacy skills often focus on testing as their primary means of evaluation. Educators have long recognized the limitations of tests, and these limitations lead many educators to prefer rubric assessment over test-based evaluation. In contrast, many academic librarians are unfamiliar with the benefits of rubrics, and those who have explored information literacy rubrics have not taken a rigorous approach to methodology and interrater reliability. This article seeks to remedy these omissions by describing the benefits of a rubric-based approach to information literacy assessment, identifying a methodology for using rubrics to assess information literacy skills, and analyzing the interrater reliability of information literacy rubrics in the hands of university librarians, faculty, and students. Study results demonstrate that Cohen's κ can be effectively employed to check interrater reliability. The study also indicates that rubric training sessions improve interrater reliability among librarians, faculty, and students.
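For readers unfamiliar with the statistic, Cohen's κ expresses the agreement between two raters after correcting for the agreement expected by chance; the formulation below is standard background, not a result reported by the study itself.

```latex
% Cohen's kappa: chance-corrected interrater agreement.
% p_o: observed proportion of agreement between the two raters
% p_e: proportion of agreement expected by chance alone
\kappa = \frac{p_o - p_e}{1 - p_e}
```

A κ of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.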
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 969-983 |
| Number of pages | 15 |
| Journal | Journal of the American Society for Information Science and Technology |
| Volume | 60 |
| Issue number | 5 |
| DOIs | |
| State | Published - May 2009 |
| Externally published | Yes |
ASJC Scopus subject areas
- Software
- Information Systems
- Human-Computer Interaction
- Computer Networks and Communications
- Artificial Intelligence