At this week’s MCQLL meeting, we have two speakers: Gaurav Kamath will give a talk titled “How do Neural Network-Based Language Models Handle Scope Ambiguities?”, and Ben LeBrun will give a talk titled “Modeling Incremental Reference Resolution”. Abstracts are below.

When:
Tuesday, November 1, 15h00–16h00 (Montréal time, UTC-4)
Where:
MCQLL meetings this semester are in hybrid format. We will meet in person in room 117 of the McGill Linguistics Department, 1085 Dr-Penfield. If you’d like to attend virtually, the Zoom meeting will be held here.

All are welcome to attend.

  • Speaker:
    Gaurav Kamath.
    Title:
    How do Neural Network-Based Language Models Handle Scope Ambiguities?
    Abstract:

    In recent years, neural network-based language models (NNLMs) have achieved human-like performance on a range of tasks designed to test natural language understanding (NLU). NNLMs are complex statistical models trained through deep learning methods, on massive amounts of unannotated text, to generate or classify sequences of natural language. Their high performance on NLU tasks has led to claims that these models ‘understand’ natural language; because of the way they are trained, however, little is known about the actual linguistic structures they capture. Recent research has shown, for example, that they can struggle with negation and quantification. In this talk, I present ongoing work investigating the capacity of NNLMs to capture one such linguistic structure: semantic scope. Focusing on scope ambiguities generated by quantifiers, negation, modality, quantificational adverbs, and intensional verbs, I will cover the experimental setup I aim to use, as well as the results of a very preliminary study applying this approach to human participants.

  • Speaker:
    Ben LeBrun.
    Title:
    Modeling Incremental Reference Resolution
    Abstract:

    Humans can identify the referent of a referring expression in real time, given only partial and potentially ambiguous input. This talk will present ongoing work defining a computational model of incremental, visually grounded language understanding. I will begin by discussing psycholinguistic experiments that offer insight into how humans incrementally establish reference. I will then outline a model of incremental reference resolution and show that it predicts the behaviour of human participants in psycholinguistic experiments.