Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning

Published in EMNLP, 2025

We evaluate human-scale (BabyLM) language models on the extremely rare let-alone construction, finding that they master a range of its syntactic properties but are not sensitive to the construction’s semantics. We then perform a set of Filtered Corpus Training (FiCT) experiments, finding robust performance on constructional syntax even in the absence of direct observation of let alone or of the related Paired Focus and Comparative Constructions.
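To make the FiCT setup concrete, below is a minimal sketch of the kind of corpus filter such an experiment involves: removing every training sentence that contains the target construction (or related constructions) before pretraining. The trigger patterns and the `filter_corpus` helper are illustrative assumptions for exposition, not the paper’s actual filter list or code.

```python
import re

# Hypothetical FiCT-style filter: drop any sentence containing the target
# construction or (assumed) related paired-focus triggers. These patterns
# are illustrative only, not the paper's actual filter specification.
TRIGGERS = [
    r"\blet\s+alone\b",   # the let-alone construction itself
    r"\bmuch\s+less\b",   # assumed related paired-focus trigger
    r"\bnot\s+to\s+mention\b",  # assumed related paired-focus trigger
]
PATTERN = re.compile("|".join(TRIGGERS), flags=re.IGNORECASE)


def filter_corpus(sentences):
    """Yield only sentences that match none of the trigger patterns."""
    for sentence in sentences:
        if not PATTERN.search(sentence):
            yield sentence


if __name__ == "__main__":
    corpus = [
        "She can't boil an egg, let alone cook dinner.",
        "He never reads novels, much less poetry.",
        "The cat sat on the mat.",
    ]
    # Only the last sentence survives the filter.
    for kept in filter_corpus(corpus):
        print(kept)
```

A model pretrained on the filtered corpus can then be compared against one trained on the unfiltered corpus to test whether constructional syntax is learned without direct exposure.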