LOT Winter School 2019

RM1 - Methods in Multimodal Language

Asli Ozyurek

Contact

Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 JT Nijmegen

asliozu@mpi.nl

https://www.mpi.nl/people/ozyurek-asli

https://www.ru.nl/mlc/

Course description

Level: RM1 (first-year Research Master in Linguistics)

This course will be a hands-on introduction to methods that can be used to study language and communication in their multimodal context (speech, gesture, sign). Research over the last 20 years has shown that dynamic, three-dimensional visible movements (gestures, facial expressions, eye gaze, etc.) are an integral part of language, in both spoken and sign languages. It is not trivial, however, to study the visible aspects of language, as most linguistic and psycholinguistic methods rely on speech and/or text. In this course we will go through ways to collect, code, quantify, and analyze multimodal aspects of language use. ELAN, a coding tool for multimodal video annotation, will be introduced. We will consider different approaches, such as corpus-based versus experimental, and discuss the challenges of studying production versus comprehension as well as interactional data. Qualitative and quantitative approaches will be introduced, along with eye tracking as a method to reveal multimodal processing. Finally, we will introduce methods for studying child language development from a multimodal perspective. Students will gain hands-on experience with these methods, learn to analyze video data, and be ready to design their own studies.
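
As a small preview of the kind of quantification the course covers, the sketch below (illustrative only, not part of the course materials) shows how annotations created in ELAN might be loaded and summarized in Python with the third-party pympi-ling library; the file name (session01.eaf) and the tier name (Gesture) are hypothetical.

    # Minimal sketch, assuming pympi-ling is installed and the .eaf file
    # contains a tier named "Gesture"; file and tier names are hypothetical.
    from pympi.Elan import Eaf

    eaf = Eaf("session01.eaf")            # parse the ELAN annotation file
    print(eaf.get_tier_names())           # list the available tiers

    # Each annotation on a tier is a (start_ms, end_ms, value) tuple.
    gestures = eaf.get_annotation_data_for_tier("Gesture")
    total_ms = sum(a[1] - a[0] for a in gestures)
    print(len(gestures), "gesture annotations,", total_ms / 1000, "s in total")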

The course will be co-taught with contributions from the following researchers: Marlou Rasenberg, James Trujillo, Francie Manhardt, and Beyza Sumer (members of the Multimodal Language and Cognition Lab, CLS, Donders Institute, Radboud University Nijmegen).

Day-to-day program

Monday: (Ozyurek, Sumer) Why should we study language multimodally? Theoretical and empirical considerations; tools for collecting, annotating, coding, and quantifying multimodal data (intro to ELAN)

Tuesday: (Rasenberg, Ozyurek) Studying multimodal language use during interaction

Wednesday: (Trujillo, Ozyurek) Technological advances in studying multimodal language (Kinect- and video-based automated methods, machine learning using neural networks; an illustrative sketch follows this program)

Thursday: (Manhardt, Ozyurek) Use of eye tracking for studying multimodal language use and processing

Friday: (Sumer, Ozyurek) Methods for studying sign language and multimodal language development
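
As a taste of the Wednesday session on automated methods, the sketch below (again illustrative only, not course material) computes a simple kinematic feature, hand speed, from tracked wrist positions; the coordinates and the 30 fps frame rate are hypothetical, and the positions are assumed to have been extracted beforehand from video or Kinect recordings.

    # Minimal sketch of a kinematic feature: frame-to-frame hand speed.
    # The wrist coordinates and the 30 fps frame rate are hypothetical.
    import numpy as np

    fps = 30.0
    wrist = np.array([[0.50, 0.80],       # one (x, y) position per frame
                      [0.52, 0.76],
                      [0.55, 0.70],
                      [0.59, 0.66],
                      [0.60, 0.65]])

    # Displacement between consecutive frames, scaled to units per second.
    speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1) * fps
    print("mean speed:", speed.mean(), "peak speed:", speed.max())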

Reading list

Background and preparatory readings (obligatory):

Abner, N., Cooperrider, K., & Goldin-Meadow, S. (2015). Gesture for linguists: A handy primer. Language and Linguistics Compass.

Perniss, P. (2018). Why we should study multimodal language. Frontiers in Psychology.

Course readings (obligatory):

Lecture 1:

McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought (Chapter 1). University of Chicago Press.

Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In I. Wachsmuth & M. Fröhlich (Eds.), Gesture and Sign Language in Human-Computer Interaction: International Gesture Workshop, Bielefeld, Germany, September 17–19, 1997, Proceedings (Lecture Notes in Computer Science, Vol. 1371, pp. 23-35). Springer.

Lecture 2:

Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153.

Brône, G., & Oben, B. (2015). InSight Interaction: A multimodal and multifocal dialogue corpus. Language Resources and Evaluation, 49, 195-214.

Lecture 3:

Trujillo, J. P., Vaitonyte, J., Simanova, I., & Ozyurek, A. (2018). Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behavior Research Methods. Advance online publication. doi:10.3758/s13428-018-1086-8

Lecture 4:

Nazareth, A., Odean, R., & Pruden, S. (2019). The use of eye-tracking in spatial thinking research: Concepts, methodologies, tools, and applications. doi:10.4018/978-1-5225-7507-8.ch029

Gullberg, M., & Holmqvist, K. (2006). What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video. Pragmatics & Cognition, 14(1), 53-82.

Lecture 5:

Lieberman, A., & Mayberry, R. (2015). Studying sign language acquisition. In E. Orfanidou et al. (Eds.), Research Methods in Sign Language Studies: A Practical Guide. John Wiley and Sons.

Perniss, P. (2015). Collecting and analyzing sign language data: Video requirements and use of annotation software. In E. Orfanidou et al. (Eds.), Research Methods in Sign Language Studies: A Practical Guide. John Wiley and Sons.