Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production

Sotaro Kita, Asli Özyürek, Shanley Allen, Amanda Brown, Reyhan Furman, Tomoko Ishizuka

Research output: Contribution to journal › Article › peer-review

87 Scopus citations


Gestures that accompany speech are known to be tightly coupled with speech production. However, little is known about the cognitive processes that underlie this link. Previous cross-linguistic research has provided preliminary evidence for online interaction between the two systems, based on the systematic co-variation found between how different languages syntactically package Manner and Path information of a motion event and how gestures represent Manner and Path. Here we elaborate on this finding by testing whether speakers within the same language gesturally express Manner and Path differently according to their online choice of syntactic packaging of Manner and Path, or whether gestural expression is pre-determined by a habitual conceptual schema congruent with the linguistic typology. Typologically congruent and incongruent syntactic structures for expressing Manner and Path (i.e., in a single clause or in multiple clauses) were elicited from English speakers. We found that gestural expressions were determined by the online choice of syntactic packaging rather than by a habitual conceptual schema. We therefore conclude that speech and gesture production processes interface online at the conceptual planning phase. Implications of the findings for models of speech and gesture production are discussed.

Original language: English (US)
Pages (from-to): 1212-1236
Number of pages: 25
Journal: Language and Cognitive Processes
Issue number: 8
State: Published - Dec 2007
Externally published: Yes

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Language and Linguistics
  • Education
  • Linguistics and Language


