May 3, 2018
Language-based interaction with digital agents (e.g. Siri, Alexa) has become ubiquitous, and such agents are used in a growing variety of situations by an increasingly diverse set of users. Research shows, however, that a dialog system should not only understand and generate language correctly, but should also adapt how it formulates its messages to fit the user and the situation (for instance, it should use simpler formulations to avoid distraction during driving).
In this talk, I will start out by presenting an information-theoretic measure, surprisal, as a way of quantifying linguistically induced cognitive load on a word-by-word basis. I will then proceed to talk about neural network models that we have recently developed to estimate semantic surprisal, i.e. the amount of cognitive load that will be caused by an unexpected word like "bathtub" in a context such as "I did the dishes in the bathtub."
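As background for readers unfamiliar with the measure: surprisal is standardly defined as the negative log probability of a word given its preceding context, so less predictable words carry higher surprisal and, under surprisal theory, induce higher processing load. A minimal sketch (the probability values below are purely illustrative, not from the models discussed in the talk):

```python
import math

def surprisal_bits(p_word_given_context: float) -> float:
    """Surprisal in bits: -log2 P(word | context).

    Low-probability (unexpected) words yield high surprisal.
    """
    return -math.log2(p_word_given_context)

# Hypothetical conditional probabilities for the next word after
# "I did the dishes in the ...":
p_sink = 0.5      # highly expected continuation
p_bathtub = 0.01  # semantically anomalous continuation

print(surprisal_bits(p_sink))     # 1 bit: cheap to process
print(surprisal_bits(p_bathtub))  # ~6.6 bits: high cognitive load
```

The gap between the two values is what a semantic-surprisal model aims to predict from context, without access to ground-truth probabilities.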
Finally, I will report on our recent work using a novel pupillometry-based measure of cognitive load, the Index of Cognitive Activity (ICA), which allows us to assess cognitive load in dual-task settings such as driving a car.
Vera Demberg is a professor of Computer Science and Computational Linguistics at Saarland University. She holds a Diplom in Computational Linguistics from Stuttgart University and an MSc in Artificial Intelligence from the University of Edinburgh. After her PhD (2010, School of Informatics, University of Edinburgh), she joined Saarland University as an independent research group leader within the Cluster of Excellence "Multimodal Computing and Interaction". Her research is concerned with how humans process language, and with how to construct automated systems that can process language and adapt to humans. Her research methodology is highly interdisciplinary, spanning cognitive science, computational linguistics, and computer science.