This week’s lab meeting will feature a talk by Tom McCoy.
- Tuesday, October 26, 15:00–16:00 (Montreal time, UTC-4).
- Meetings are via Zoom. If you would like to attend the talk but have not yet signed up for the MCQLL meetings this semester, please email firstname.lastname@example.org.
Neural networks excel at processing language, yet their inner workings are poorly understood. One particular puzzle is how these models can represent compositional structures (e.g., sequences or trees) within the continuous vectors that they use as representations. We introduce an analysis technique called DISCOVER and use it to show that, when neural networks are trained to perform symbolic tasks, their vector representations can be closely approximated by a simple, interpretable type of symbolic structure. That is, even though these models have no explicit compositional representations, they still implicitly implement compositional structure. We verify the causal importance of the discovered symbolic structure by showing that, when we alter a model's internal representations in ways motivated by our analysis, the model's output changes accordingly.
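As a rough illustration of the kind of analysis the abstract describes, the sketch below approximates continuous vectors with a tensor product representation: each sequence's vector is modeled as a sum of filler-vector/role-vector outer products. This is a minimal toy, not the authors' DISCOVER implementation: the "hidden states" are simulated rather than taken from a trained network, the role vectors are assumed known (so fitting the filler vectors reduces to ordinary least squares), and all names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fillers, n_roles, d_f, d_r = 5, 4, 6, 6

# Hypothetical embeddings a network might implicitly use (simulated,
# standing in for real hidden states from a trained model).
F_true = rng.normal(size=(n_fillers, d_f))
R = rng.normal(size=(n_roles, d_r))  # role vectors, assumed known here

def tpr(seq, F):
    """Tensor product representation: sum of filler (x) role outer products."""
    return sum(np.outer(F[f], R[i]) for i, f in enumerate(seq)).ravel()

# Simulated "hidden states": a TPR plus a little noise.
seqs = [tuple(rng.integers(0, n_fillers, size=n_roles)) for _ in range(200)]
H = np.stack([tpr(s, F_true) + 0.01 * rng.normal(size=d_f * d_r)
              for s in seqs])

# With the role vectors fixed, the TPR is linear in the filler
# embeddings: reshaped to d_f x d_r, h = F.T @ M_s, where M_s[g] sums
# the role vectors of the positions holding filler g.
def role_sums(seq):
    M = np.zeros((n_fillers, d_r))
    for i, f in enumerate(seq):
        M[f] += R[i]
    return M

A = np.hstack([role_sums(s) for s in seqs])        # (n_fillers, K * d_r)
B = np.hstack([h.reshape(d_f, d_r) for h in H])    # (d_f, K * d_r)
F_fit, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)  # (n_fillers, d_f)

# How closely does the fitted symbolic approximation match the vectors?
approx = np.stack([tpr(s, F_fit) for s in seqs])
cos = np.sum(approx * H, axis=1) / (
    np.linalg.norm(approx, axis=1) * np.linalg.norm(H, axis=1))
print(f"mean cosine similarity: {cos.mean():.3f}")
```

When the vectors really are (approximately) tensor product representations, as in this toy, the fitted approximation matches them almost perfectly (mean cosine similarity near 1); the paper's contribution is showing that the same holds for representations learned by actual networks, and that editing those representations along these symbolic axes changes the model's outputs.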