Language Models Learn Constructional Semantics, Not To Mention Syntax: Investigating LM Understanding of Paired-Focus Constructions

Published in CoNLL, 2026

We test LMs’ ability to comprehend the meaning of Paired-Focus constructions. Using a newly designed test suite, we find that models smaller than previously believed can grasp the meaning of these constructions, though “human-scale” BabyLMs still fail at the task. An analysis of training dynamics further reveals that the constructions’ meaning is learned far later than their form.