Invited talks

Invited speakers: Kevin Knight, Catherine Pelachaud, Sebastian Riedel


Kevin Knight, ISI, University of Southern California, USA

Title:
How Much Information Does a Human Translator Add to the Original?

Abstract:
It is well known that natural language has built-in redundancy. By using context, we can often guess the next word or character in a text. Two practical communities have independently exploited this fact. First, automatic speech and translation researchers build language models to distinguish fluent from non-fluent outputs. Second, text compression researchers convert predictions into short encodings, to save disk space and bandwidth. I will explore what these two communities can learn from each other's (interestingly different) solutions. Then I will look at the less-studied question of redundancy in bilingual text, addressing questions like "How well can we predict human translator behavior?" and "How much information does a human translator add to the original?" (This is joint work with Barret Zoph and Marjan Ghazvininejad.)
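The prediction/compression link the abstract describes can be made concrete with a minimal sketch (the toy corpus and bigram model below are illustrative assumptions, not material from the talk): under an ideal arithmetic coder, a character predicted with probability p costs about -log2 p bits, so a better language model directly implies a shorter encoding.

```python
import math
from collections import Counter

# Toy corpus (illustrative assumption).
corpus = "the cat sat on the mat. the cat ate."

# Character bigram model: P(next_char | current_char).
pair_counts = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def prob(ctx, nxt):
    """Conditional probability of `nxt` following `ctx` in the corpus."""
    return pair_counts[(ctx, nxt)] / context_counts[ctx]

# A good predictor implies a short code: an arithmetic coder spends
# roughly -log2 P(char | context) bits per character.
text = "the cat"
bits = sum(-math.log2(prob(c, n)) for c, n in zip(text, text[1:]))
print(f"{bits / (len(text) - 1):.2f} bits/char")
```

The same quantity, averaged over a large corpus, is the cross-entropy that language-model research optimizes, which is why the two communities' objectives coincide.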

Bio:
Kevin Knight is Director of Natural Language Technologies at the Information Sciences Institute (ISI) of the University of Southern California (USC), and a Professor in the USC Computer Science Department. He received a PhD in computer science from Carnegie Mellon University and a bachelor's degree from Harvard University. Prof. Knight’s research interests include machine translation, automata theory, and decipherment of historical manuscripts. Prof. Knight co-wrote the textbook "Artificial Intelligence", served as President of the Association for Computational Linguistics, and was a co-founder of the machine translation company Language Weaver, Inc. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computational Linguistics (ACL), and the Information Sciences Institute (ISI).


Catherine Pelachaud, CNRS-LTCI, TELECOM-ParisTech, France

Title:
Modeling socio-emotional humanoid agent

Abstract:
In this talk I will present our current work toward endowing virtual agents with socio-emotional capabilities. I will start by describing an interactive system in which an agent dialogues with human users in an emotionally colored manner. Through its behaviors, the agent can sustain a conversation as well as show various attitudes and levels of engagement. I will then present our latest work on laughter, addressing issues such as: how to animate laughter in a virtual agent, looking particularly at rhythmic movements; how to laugh with a human participant; and how a laughing agent is perceived.

Bio:
Catherine Pelachaud is a Director of Research at CNRS in the LTCI laboratory at TELECOM ParisTech. Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors, and socio-emotional agents. She is an associate editor of several journals, among them IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and the Journal on Multimodal User Interfaces. She has co-edited several books on virtual agents and emotion-oriented systems.


Sebastian Riedel, University College London, UK

Title:
Embedding Probabilistic Logic for Machine Reading

Abstract:
We want to build machines that read, and make inferences based on what was read. A long line of work in the field has focussed on approaches where language is converted (possibly using machine learning) into a symbolic and relational representation. A reasoning algorithm (such as a theorem prover) then derives new knowledge from this representation. This allows rich knowledge to be captured, but generally suffers from two problems: acquiring sufficient symbolic background knowledge, and coping with noise and uncertainty in the data. Probabilistic logics (such as Markov Logic) offer a solution, but often scale poorly.
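The Markov Logic flavour of probabilistic logic mentioned above can be sketched in a few lines (the rule, weight, and facts here are illustrative assumptions, not from the talk): each possible world gets an unnormalised score exp(sum of weights of satisfied ground clauses), so violating a weighted rule makes a world less probable rather than impossible.

```python
import math

# Truth values of ground atoms in one toy world (illustrative assumption).
facts = {
    "capital_of(paris, france)": True,
    "located_in(paris, france)": False,
}

# One weighted rule, grounded for this world:
#   capital_of(paris, france) => located_in(paris, france)   (weight 1.5)
# The implication is satisfied unless the antecedent holds and the
# consequent does not.
weight = 1.5
clause_satisfied = not (facts["capital_of(paris, france)"]
                        and not facts["located_in(paris, france)"])

# Unnormalised Markov Logic score: exp(weight) if the clause holds, exp(0)
# otherwise. This world violates the rule, so it scores lower than the world
# where located_in(paris, france) is also true.
score = math.exp(weight if clause_satisfied else 0.0)
```

The scalability problem the abstract refers to shows up when grounding: the number of ground clauses (and hence the cost of inference) can grow combinatorially in the number of constants.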
 
In recent years a third alternative has emerged: latent variable models in which entities and relations are embedded in vector spaces (and represented "distributionally"). Such approaches scale well and are robust to noise, but they raise their own set of questions: What types of inference do they support? What is a proof in embeddings? How can explicit background knowledge be injected into embeddings? In this talk I will first present our work on latent variable models for machine reading, using ideas from matrix factorisation as well as both closed and open information extraction. I will then present recent work addressing the questions of injecting symbolic knowledge into, and extracting it from, embedding-based models. In particular, I will show how one can rapidly build accurate relation extractors by combining logic and embeddings.
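The matrix-factorisation view of machine reading mentioned above can be illustrated with a small sketch (entity pairs, relations, and observed cells below are toy assumptions, not data from the talk): rows are entity pairs, columns are relations, observed facts are 1s, and low-dimensional embeddings learned by factorising the matrix assign scores to unobserved cells.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fact matrix: rows are entity pairs, columns are relations
# (all names and observed cells are illustrative assumptions).
pairs = ["(Obama, USA)", "(Merkel, Germany)", "(Paris, France)"]
relations = ["president-of", "leader-of", "capital-of"]
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

k = 2                                               # embedding dimension
P = 0.1 * rng.standard_normal((len(pairs), k))      # entity-pair embeddings
R = 0.1 * rng.standard_normal((len(relations), k))  # relation embeddings

# Gradient descent on the squared reconstruction error 0.5 * ||P R^T - X||^2.
lr = 0.05
for _ in range(3000):
    E = P @ R.T - X
    P, R = P - lr * (E @ R), R - lr * (E.T @ P)

# scores[i, j] estimates how plausible relation j is for entity pair i.
# Because "president-of" and "leader-of" co-occur for the Obama pair, the
# unobserved cell (Merkel, Germany) x president-of gets a higher score than
# an unrelated cell such as (Merkel, Germany) x capital-of.
scores = P @ R.T
```

This is the robustness-versus-interpretability trade-off the abstract raises: the model generalises to unobserved cells without any symbolic rules, but it offers no explicit proof for why a cell scored highly.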

Bio:
Dr. Riedel is a Senior Lecturer in the Department of Computer Science at University College London, where he leads the Machine Reading lab. He received his MSc and PhD (in 2009) in Computer Science from the University of Edinburgh. He was a researcher at the University of Tokyo and a postdoc with Andrew McCallum at the University of Massachusetts Amherst. He is an Allen Distinguished Investigator and a Marie Curie CIG fellow; he was a finalist for the Microsoft Research Faculty Award in 2013 and recently received a Google Focused Research Award. Sebastian is broadly interested in the intersection of NLP and machine learning, and particularly in teaching machines to read and to reason about what they have read.