An L2 Neural Language Model of Adaptation

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

In recent years, the increasing capacities of neural language models (NLMs) have led to a surge in research into their representations of syntactic structure. A wide range of methods have been used to probe the linguistic knowledge that NLMs acquire. In the present study, using the syntactic priming paradigm, we explore the extent to which an L2 LSTM NLM is susceptible to syntactic priming, the phenomenon whereby the syntactic structure of a sentence makes the same structure more probable in a follow-up sentence. In line with previous work by van Schijndel and Linzen (2018), we provide further evidence by showing that the L2 NLM adapts to abstract syntactic properties of sentences as well as to lexical items. At the same time, we report that adding a simple adaptation method to the L2 LSTM NLM does not always improve the NLM's predictions of human reading times compared to its non-adaptive counterpart.
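The adaptation idea described above — continuing to update a language model on each test sentence after scoring it, so that recently seen structures become more probable — can be illustrated with a toy sketch. The snippet below is not the paper's LSTM; it substitutes a simple count-based bigram model (the class name `AdaptiveBigramLM` and the example sentences are illustrative assumptions), where "adaptation" means updating the model's counts on a sentence after computing its surprisal, analogous to continuing gradient training at test time:

```python
import math
from collections import defaultdict

class AdaptiveBigramLM:
    """Toy bigram LM with add-one smoothing and online adaptation.

    A count-based stand-in for an adaptive LSTM NLM: 'adaptation' here
    means updating counts on each test sentence after scoring it,
    analogous to continuing training on test data.
    """

    def __init__(self, vocab):
        self.vocab = set(vocab) | {"<s>", "</s>"}
        self.bigram = defaultdict(int)   # counts of (prev, word) pairs
        self.unigram = defaultdict(int)  # counts of prev contexts

    def surprisal(self, sentence):
        """Total surprisal in bits: -sum log2 P(w_i | w_{i-1})."""
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        total = 0.0
        for prev, word in zip(tokens, tokens[1:]):
            p = (self.bigram[(prev, word)] + 1) / (self.unigram[prev] + len(self.vocab))
            total += -math.log2(p)
        return total

    def adapt(self, sentence):
        """The adaptation step: update counts on an observed sentence."""
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            self.bigram[(prev, word)] += 1
            self.unigram[prev] += 1

# Illustrative data (hypothetical sentences, not from the paper's corpus).
vocab = "the chef gave the waiter a menu cook handed note".split()
lm = AdaptiveBigramLM(vocab)
lm.adapt("the cook handed the waiter a note")  # pre-training

prime = "the chef gave the waiter a menu"
before = lm.surprisal(prime)
lm.adapt(prime)                  # adapt on the prime sentence
after = lm.surprisal(prime)
assert after < before            # the adapted model finds the repeated material more probable
```

In the paper's setting the same comparison is made with an LSTM whose weights are updated by gradient descent between sentences, and surprisal is evaluated against human reading times rather than against the model's own earlier scores.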

Original language: English
Pages (from-to): 547-562
Number of pages: 16
Journal: Korean Journal of English Language and Linguistics
Volume: 22
DOIs
State: Published - 2022

Keywords

  • adaptation
  • learning rate
  • neural language model
  • perplexity
  • surprisal
  • syntactic priming
