TY - JOUR
T1 - On Pronoun Prediction in the L2 Neural Language Model
AU - Choi, Sunjoo
AU - Park, Myung Kwan
N1 - Publisher Copyright:
© 2023 KASELL. All rights reserved.
PY - 2023
Y1 - 2023
AB - In recent years, artificial neural(-network) language models (LMs) have achieved remarkable success in tasks involving sentence processing. However, despite leveraging the advantages of pre-trained neural LMs, our understanding of the specific syntactic knowledge these models acquire during processing remains limited. This study investigates whether L2 neural LMs trained on L2 English learners’ textbooks can acquire syntactic knowledge similar to that of humans. Specifically, we examine the L2 LM’s ability to predict pronouns within the framework of previous experiments conducted with L1 humans and L1 LMs. Our focus is on pronominal coreference, a phenomenon that has been extensively investigated in psycholinguistics. This research expands on existing studies by exploring whether the L2 LM can learn Binding Condition B, a fundamental aspect of pronominal agreement. We replicate several previous experiments and examine the L2 LM’s capacity to exhibit human-like behavior in pronominal agreement effects. Consistent with the findings of Davis (2022), we provide further evidence that, like L1 LMs, the L2 LM fails to fully capture the range of behaviors associated with Binding Condition B in comparison to L1 humans. Overall, neural LMs face challenges in recognizing the complete spectrum of Binding Condition B and capture aspects of it only in specific contexts.
KW - binding condition
KW - coreference
KW - neural language model
KW - pronoun
KW - surprisal
UR - http://www.scopus.com/inward/record.url?scp=85166278554&partnerID=8YFLogxK
U2 - 10.15738/kjell.23..202306.482
DO - 10.15738/kjell.23..202306.482
M3 - Article
AN - SCOPUS:85166278554
SN - 1598-1398
VL - 23
SP - 482
EP - 497
JO - Korean Journal of English Language and Linguistics
JF - Korean Journal of English Language and Linguistics
ER -