TY - JOUR
T1 - Perceptual optimization of language
T2 - Evidence from American Sign Language
AU - Caselli, Naomi
AU - Occhino, Corrine
AU - Artacho, Bruno
AU - Savakis, Andreas
AU - Dye, Matthew
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grant No. BCS 1749376. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Publisher Copyright:
© 2022 The Author(s)
PY - 2022/7
Y1 - 2022/7
AB - If language has evolved for communication, languages should be structured such that they maximize the efficiency of processing. What is efficient for communication in the visual-gestural modality is different from the auditory-oral modality, and we ask here whether sign languages have adapted to the affordances and constraints of the signed modality. During sign perception, perceivers look almost exclusively at the lower face, rarely looking down at the hands. This means that signs articulated far from the lower face must be perceived through peripheral vision, which has less acuity than central vision. We tested the hypothesis that signs that are more predictable (high-frequency signs, signs with common handshapes) can be produced farther from the face because precise visual resolution is not necessary for recognition. Using pose estimation algorithms, we examined the structure of over 2000 American Sign Language lexical signs to identify whether lexical frequency and handshape probability affect the position of the wrist in 2D space. We found that frequent signs with rare handshapes tended to occur closer to the signer's face than frequent signs with common handshapes, and that frequent signs are generally more likely to be articulated farther from the face than infrequent signs. Together, these results provide empirical support for anecdotal assertions that the phonological structure of sign language is shaped by the properties of the human visual and motor systems.
KW - American Sign Language
KW - Language optimization
KW - Language perception
KW - Language production
KW - Pose estimation
UR - http://www.scopus.com/inward/record.url?scp=85124795277&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124795277&partnerID=8YFLogxK
U2 - 10.1016/j.cognition.2022.105040
DO - 10.1016/j.cognition.2022.105040
M3 - Article
C2 - 35192994
AN - SCOPUS:85124795277
SN - 0010-0277
VL - 224
JO - Cognition
JF - Cognition
M1 - 105040
ER -