TY - JOUR
T1 - Understanding user sensemaking in fairness and transparency in algorithms
T2 - algorithmic sensemaking in over-the-top platform
AU - Shin, Donghee
AU - Lim, Joon Soo
AU - Ahmad, Norita
AU - Ibahrine, Mohammed
N1 - Funding Information:
This project was funded by the Office of Research and the Institute for Social and Economic Research at Zayed University (The Policy Research Incentive Program 2022). It also received support from the Provost's Research Fellowship Award of Zayed University (R21050/2022).
Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2022
Y1 - 2022
N2 - A number of artificial intelligence (AI) systems have been proposed to assist users in identifying issues of algorithmic fairness and transparency. These AI systems use diverse bias-detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study informs the design of AI systems by probing how users make sense of fairness and transparency, which are hypothetical in nature and lack specific means of evaluation. Focusing on individual perceptions of fairness and transparency, this study examines the roles of normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-method design incorporating both qualitative and quantitative approaches was used to discover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from user sensemaking processes, along with its formative roles in relation to perceived quality and credibility. From a sensemaking perspective, this study discusses the implications of transparent fairness in algorithmic media platforms by clarifying how and what should be done to make algorithmic media more trustworthy and reliable platforms. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.
AB - A number of artificial intelligence (AI) systems have been proposed to assist users in identifying issues of algorithmic fairness and transparency. These AI systems use diverse bias-detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study informs the design of AI systems by probing how users make sense of fairness and transparency, which are hypothetical in nature and lack specific means of evaluation. Focusing on individual perceptions of fairness and transparency, this study examines the roles of normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-method design incorporating both qualitative and quantitative approaches was used to discover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from user sensemaking processes, along with its formative roles in relation to perceived quality and credibility. From a sensemaking perspective, this study discusses the implications of transparent fairness in algorithmic media platforms by clarifying how and what should be done to make algorithmic media more trustworthy and reliable platforms. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.
KW - Algorithmic credibility
KW - Algorithmic information processing
KW - Algorithmic normative values
KW - Algorithmic sensemaking
KW - OTT platforms
KW - Transparent fairness
UR - http://www.scopus.com/inward/record.url?scp=85133355576&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133355576&partnerID=8YFLogxK
U2 - 10.1007/s00146-022-01525-9
DO - 10.1007/s00146-022-01525-9
M3 - Article
AN - SCOPUS:85133355576
SN - 0951-5666
JO - AI and Society
JF - AI and Society
ER -