At this week’s MCQLL meeting, Laurestine Bradford will present *Evaluating and Extending a Compositional Semantics for Dependency Syntax*, and Eva Portelance will present *Learning the meanings of ‘hard’ words like more and or may not be so hard after all*. Abstracts for both talks follow.

When:
Tuesday, January 24, 15:00–16:00 (Montréal time, UTC-5)
Where:
MCQLL meetings this semester are in hybrid format. We will meet in person in room 117 of the McGill Linguistics Department, 1085 Dr. Penfield. If you’d like to attend virtually, the Zoom link is here.

All are welcome to attend.

  • Speaker:
    Laurestine Bradford
    Title:
    Evaluating and Extending a Compositional Semantics for Dependency Syntax
    Abstract:

    There are many more corpora annotated with syntactic information than with representations of semantics, but even this syntactic annotation already contains information about meaning. We seek systematic links between syntactic structures and meaning that can be used to add rough representations of compositional meaning structure to a corpus based on previously annotated syntactic data. In this talk, I will present preliminary results evaluating the ability of a lambda-calculus-based framework to derive correct meanings from Universal Dependencies syntax parses on a subset of the English-language Parallel Meaning Bank. I will compare these results to related recent work, and give indications of how the framework is currently being improved to increase its coverage.

  • Speaker:
    Eva Portelance
    Title:
    Learning the meanings of ‘hard’ words like more and or may not be so hard after all.
    Abstract:

    Function words may be one of the hardest parts of language to learn, since understanding their meaning requires developing complex reasoning skills, such as logical, numerical, spatial, and relational reasoning. Using both linguistic and visual context, visual question answering (VQA) models learn to reason about abstract relations, making them a good test bed for function word learning. Given the abstract nature and complexity of these words, they represent a great test case for whether a non-symbolic learning algorithm can acquire them from data. In this paper, we propose to study how VQA models learn function words, to better understand how the meanings of these words can be learnt both by models and by children. Using this approach, we show that recurrent VQA models trained on visually grounded language learn gradient semantics, as opposed to threshold-based semantics, for a series of function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives ‘and’ and ‘or’ without any prior knowledge of logical reasoning. Finally, we show that the difficulty of learning these words depends on their frequency in the models’ input. Our findings offer evidence that (1) it is possible to learn the meanings of these words in a visually grounded context by using non-symbolic general learning algorithms, without any prior knowledge of linguistic meaning, supporting usage-based theories of their acquisition in children over innateness proposals. Additionally, our results confirm that (2) the order in which children tend to learn these words may depend on their frequency in the input rather than on other inherent word properties.
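
To give a flavour of the idea in the first abstract, here is a minimal, hypothetical sketch (not the framework from the talk) of how a dependency edge might be paired with a lambda-calculus-style composition rule: word meanings are functions, and an `nsubj` edge applies a verb’s meaning to its subject’s meaning. The lexicon entries, the `compose` helper, and the rule table below are illustrative assumptions only.

```python
# Toy illustration: pairing Universal Dependencies relations with
# composition rules.  Word meanings are modelled as Python values or
# functions; an "nsubj" edge applies the head's meaning to the dependent's.

# Hypothetical lexicon: each word denotes an entity or a one-place predicate.
lexicon = {
    "dogs": "DOG",                          # an entity-like denotation
    "bark": lambda subj: f"BARK({subj})",   # a one-place predicate
}

# Hypothetical composition rules, keyed by UD dependency relation.
rules = {
    "nsubj": lambda head, dep: head(dep),   # apply predicate to its subject
}

def compose(head_word, dep_word, relation):
    """Compose two word meanings along a single dependency edge."""
    return rules[relation](lexicon[head_word], lexicon[dep_word])

# The edge  bark --nsubj--> dogs  yields  BARK(DOG)
print(compose("bark", "dogs", "nsubj"))
```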
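
For the second abstract, the contrast between a threshold-based and a gradient semantics for a word like ‘more’ can be sketched as follows. This is a toy illustration under assumed names (`more_threshold`, `more_gradient`), not the VQA models studied in the paper.

```python
import math

# Toy contrast: categorical vs. gradient readings of "more", evaluated on
# two object counts from a scene.

def more_threshold(count_a, count_b):
    """Threshold-based reading: true iff A strictly outnumbers B."""
    return 1.0 if count_a > count_b else 0.0

def more_gradient(count_a, count_b, sharpness=1.0):
    """Gradient reading: judgement strength grows smoothly with the
    difference in counts (a logistic curve, standing in for the graded
    outputs a trained model might produce)."""
    return 1.0 / (1.0 + math.exp(-sharpness * (count_a - count_b)))

for a, b in [(5, 2), (3, 2), (2, 2), (1, 4)]:
    print(a, b, more_threshold(a, b), round(more_gradient(a, b), 3))
```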