Published: Jan. 29, 2021

Rico Sennrich, Professor of Computational Linguistics, University of Zurich

Lessons from Multilingual Machine Translation


Neural models have brought rapid advances to the field of machine translation, and have also opened up new opportunities. One of these is the training of machine translation models in two or more translation directions to transfer knowledge between languages, potentially even allowing for zero-shot translation in directions with no parallel training data. However, multilingual modelling also brings new challenges and questions: how can we represent multiple languages and alphabets with a compact vocabulary of symbols? Does multilingual modelling scale to many languages, and at what point does model capacity become a bottleneck? How can we increase the reliability of zero-shot translation? In this talk, I will discuss recent research and open problems in multilingual machine translation.
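One common answer to the vocabulary question raised above is subword segmentation: a shared inventory of subword units, learned from the training data of all languages, can cover multiple languages and alphabets with a fixed-size symbol set. The sketch below illustrates the core of byte-pair-encoding (BPE) merge learning on a toy word-frequency dictionary; it is a minimal illustration only, and production toolkits (e.g. subword-nmt, SentencePiece) add many refinements not shown here.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations from a {word: frequency} dictionary.

    Minimal sketch: each word starts as a sequence of characters plus an
    end-of-word marker, and the most frequent adjacent symbol pair is
    merged into a new symbol, repeated num_merges times.
    """
    vocab = {tuple(word) + ('</w>',): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged_symbol = best[0] + best[1]
        # Replace every occurrence of the best pair with the merged symbol.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged_symbol)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges
```

For a multilingual model, the frequency dictionary would simply be collected over the concatenated training corpora of all languages, so that frequent subwords shared across languages end up as single symbols in the joint vocabulary.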


Bio: Rico Sennrich is an SNSF Professor at the University of Zurich and Lecturer at the University of Edinburgh. His recent research has received funding from the SNSF, ERC, the Royal Society, and industry collaborations, and has focused on machine learning for natural language processing, specifically high-quality machine translation, transfer learning, and multilingual models.