On Pronoun Prediction in the L2 Neural Language Model*

Sunjoo Choi, Myung Kwan Park

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, artificial neural(-network) language models (LMs) have achieved remarkable success in sentence-processing tasks. However, despite the advantages of pre-trained neural LMs, our understanding of the specific syntactic knowledge these models acquire during processing remains limited. This study investigates whether L2 neural LMs trained on L2 English learners’ textbooks can acquire syntactic knowledge similar to that of humans. Specifically, we examine the L2 LM’s ability to predict pronouns within the framework of previous experiments conducted with L1 humans and L1 LMs. Our focus is pronominal coreference, a phenomenon that has been extensively studied in psycholinguistics. This research extends existing work by exploring whether the L2 LM can learn Binding Condition B, a fundamental aspect of pronominal agreement. We replicate several previous experiments and examine the L2 LM’s capacity to exhibit human-like pronominal agreement effects. Consistent with the findings of Davis (2022), we provide further evidence that, like L1 LMs, the L2 LM fails to fully capture the range of behaviors associated with Binding Condition B that L1 humans exhibit. Overall, neural LMs struggle to recognize the complete spectrum of Binding Condition B and capture aspects of it only in specific contexts.
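The study’s keyword “surprisal” refers to the standard measure used in this line of work: how unexpected a word is to the model given its context, with higher surprisal for a Condition-B-violating pronoun taken as evidence of binding knowledge. A minimal sketch of the measure, using hypothetical stand-in probabilities rather than any model or data from the paper:

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

# Hypothetical next-word probabilities after a context like "John admires ...".
# Under Binding Condition B, "him" cannot corefer with the local subject "John",
# so a model with that knowledge should find the reflexive "himself" more
# expected (lower surprisal) than the locally bound reading of "him".
# These numbers are illustrative only, not the authors' results.
next_word_probs = {"himself": 0.40, "him": 0.10, "her": 0.05}

scores = {w: surprisal(p) for w, p in next_word_probs.items()}
for word, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{word}: {s:.2f} bits")
```

In the experiments this paper replicates, such surprisal values are read off a trained LM and compared across conditions, mirroring reading-time contrasts in human studies.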

Original language: English
Pages (from-to): 482-497
Number of pages: 16
Journal: Korean Journal of English Language and Linguistics
Volume: 23
DOIs
State: Published - 2023

Keywords

  • binding condition
  • coreference
  • neural language model
  • pronoun
  • surprisal

