Published in DMR, 2023
We present a conversion scheme between two multilingual semantic annotation frameworks: the Semantic Network of Adposition and Case Supersenses (SNACS) and the Prague Czech-English Dependency Treebank (PCEDT). Focusing on prepositional semantics, we find considerable one-to-one overlap between the labels of the two schemas, but the many supersenses in the configuration branch of the SNACS framework do not correspond well to any set of tags in PCEDT.
Published in LREC-COLING, 2024
We present a first attempt at using Universal Dependencies (UD) to annotate a range of constructions (as defined by Construction Grammar) using queries over edges and morphological labels in UD. We report an initial pilot across 5 constructions and 10 languages.
Published in EMNLP, 2024
We present GDTB, a genre-diverse discourse relation resource in the Penn Discourse Treebank (PDTB) style. Empirical evaluations on the new resource show that mixed training on PDTB and GDTB leads to optimal performance.
Published in COLING, 2025
We present the first multilingual fine-tuning and evaluation experiments for the SNACS framework. Via a comparative analysis of parallel data, we reveal that the relative frequencies of different supersenses are highly language-dependent. Using data from 5 languages and a robust hyperparameter sweep, we substantially outperform the state of the art on SNACS classification, finding that optimal performance is achieved by joint fine-tuning on all languages at once.
Published in CoNLL, 2025
We present a set of probing experiments targeting both the syntactic and semantic properties of the Noun-Preposition-Noun (NPN) construction. Using manually annotated data extracted from the Corpus of Contemporary American English (COCA), we find that probes trained on BERT representations are sensitive to both the form and the function of the construction.
Published in EMNLP, 2025
We evaluate human-scale (BabyLM) language models on the extremely rare let-alone construction, finding that they master a range of its syntactic properties but are not sensitive to the construction’s semantics. We then perform a set of Filtered Corpus Training (FiCT) experiments, finding robust performance on constructional syntax even in the absence of direct observation of let-alone or the related Paired Focus and Comparative Constructions.
Research talk to the SynSem Lab at CU Boulder.
Research talk in Zoey Liu’s lab at the University of Florida.
Teaching Assistant, Georgetown University
Teaching Assistant, University of North Texas
Instructor of Record, Georgetown University