At this week’s lab meeting, Spandana Gella, research scientist at Amazon AI, will present on Robust Natural Language Processing with Multi-task Learning.
- Wednesday, April 22nd, at 14:00 UTC-4 (note the time change), via Zoom.
In recent years, we have seen major improvements on various Natural Language Processing tasks. Despite their human-level performance on benchmark datasets, recent studies have shown that these models are vulnerable to adversarial examples. These models have been shown to rely on spurious correlations that hold for the majority of training examples, to suffer under distribution shift, and to fail on atypical or challenging test sets. Recent work has shown that large pre-trained models improve robustness to spurious associations in the training data. We observe that the superior performance of large pre-trained language models comes from their better generalization from the minority of training examples that resemble the challenging sets. Our study shows that multi-task learning with the right auxiliary tasks improves accuracy on adversarial examples without hurting in-distribution performance. We show that this holds for the multimodal task of referring expression recognition and for the text-only tasks of natural language inference and paraphrase identification.
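To make the idea concrete, here is a minimal sketch of multi-task learning with an auxiliary task: a shared encoder feeds task-specific heads, and the auxiliary loss regularizes the shared representation. The architecture, task names, and weighting are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only)
d_in, d_hidden = 8, 4

# Shared encoder weights, reused by every task
W_shared = rng.normal(size=(d_in, d_hidden))

# Task-specific heads: a main task (e.g. NLI, 3 classes) and
# an auxiliary task (e.g. paraphrase identification, 2 classes)
W_main = rng.normal(size=(d_hidden, 3))
W_aux = rng.normal(size=(d_hidden, 2))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def multitask_loss(x, y_main, y_aux, aux_weight=0.5):
    h = np.tanh(x @ W_shared)  # shared representation used by both heads
    loss_main = cross_entropy(softmax(h @ W_main), y_main)
    loss_aux = cross_entropy(softmax(h @ W_aux), y_aux)
    # The auxiliary objective acts as a regularizer on the shared encoder
    return loss_main + aux_weight * loss_aux

# A toy batch of 5 examples
x = rng.normal(size=(5, d_in))
y_main = rng.integers(0, 3, size=5)
y_aux = rng.integers(0, 2, size=5)
print(multitask_loss(x, y_main, y_aux))
```

In practice the shared encoder would be a large pre-trained model and the heads would be trained jointly by minimizing this combined loss; the `aux_weight` hyperparameter trades off the auxiliary task against in-distribution performance on the main task.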
Spandana Gella is a Research Scientist at Amazon AI. She obtained her PhD from the University of Edinburgh. Her expertise lies at the intersection of natural language processing and computer vision. Specifically, her work focuses on using visual information as a bridge between languages to learn representations, and on generating descriptions from images and videos with applications in accessibility. During her PhD, she interned at Microsoft Research, Redmond and at Facebook AI Research. Prior to her PhD, she worked as a research assistant at Microsoft Research, India and at Xerox Research Centre Europe. She is an active member of the academic community, serving as an area chair for EMNLP 2020, as a reviewer/program committee member (ACL, NAACL, EMNLP, EACL), and as an organizer of workshops at NLP conferences (RepL4NLP 2017-2020, the Student Research Workshop 2017, SiVL 2018-2019).