Abstract
In recent years, the increasing capacity of neural language models (NLMs) has led to a surge of research into their representations of syntactic structure. A wide range of methods has been used to probe the linguistic knowledge that NLMs acquire. In the present study, we use the syntactic priming paradigm to explore the extent to which an L2 LSTM NLM is susceptible to syntactic priming, the phenomenon whereby the syntactic structure of a sentence makes the same structure more probable in a follow-up sentence. In line with previous work by van Schijndel and Linzen (2018), we show that the L2 LM adapts to abstract syntactic properties of sentences as well as to lexical items. At the same time, we report that adding a simple adaptation method to the L2 LSTM NLM does not always improve its predictions of human reading times relative to its non-adaptive counterpart.
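The adaptation scheme referenced above (van Schijndel and Linzen, 2018) scores each sentence with the language model and then takes a small update step on that same sentence, so that recently seen structures become less surprising. The following is a minimal, hypothetical sketch of that score-then-adapt loop using a toy count-based bigram model in place of the paper's LSTM; the class name, smoothing, and learning-rate analogue are illustrative assumptions, not the study's actual implementation.

```python
import math
from collections import defaultdict

class AdaptiveBigramLM:
    """Toy bigram LM that adapts after scoring each sentence.

    Hypothetical stand-in for the adaptive LSTM of van Schijndel &
    Linzen (2018): score a sentence (surprisal), then update the model
    on that sentence so repeated structures become more expected.
    """

    def __init__(self, vocab, alpha=1.0):
        self.vocab = set(vocab) | {"<s>", "</s>"}
        self.alpha = alpha  # add-alpha smoothing constant
        self.bigram = defaultdict(float)   # bigram pseudo-counts
        self.context = defaultdict(float)  # context pseudo-counts

    def prob(self, prev, word):
        num = self.bigram[(prev, word)] + self.alpha
        den = self.context[prev] + self.alpha * len(self.vocab)
        return num / den

    def surprisal(self, sentence):
        """Mean surprisal in bits per token: -log2 p(w_i | w_{i-1})."""
        tokens = ["<s>"] + sentence + ["</s>"]
        total = sum(-math.log2(self.prob(p, w))
                    for p, w in zip(tokens, tokens[1:]))
        return total / (len(tokens) - 1)

    def adapt(self, sentence, lr=1.0):
        """Adaptation step: update counts on the just-scored sentence.

        `lr` plays the role of the learning rate in the gradient-based
        version: larger values adapt the model more aggressively.
        """
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            self.bigram[(prev, word)] += lr
            self.context[prev] += lr

# A "prime" sentence is scored, adapted on, then scored again.
sent = "the chef gave the waiter the dish".split()
lm = AdaptiveBigramLM(vocab=sent)
before = lm.surprisal(sent)
lm.adapt(sent)
after = lm.surprisal(sent)
# After adaptation, the same material is less surprising (a crude
# analogue of the priming effect measured in the study).
```

In the actual study the update is a gradient step on the LSTM's parameters rather than a count increment, but the control flow (score, then adapt, sentence by sentence) is the same.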
| Original language | English |
|---|---|
| Pages (from-to) | 547-562 |
| Number of pages | 16 |
| Journal | Korean Journal of English Language and Linguistics |
| Volume | 22 |
| State | Published - 2022 |
Keywords
- adaptation
- learning rate
- neural language model
- perplexity
- surprisal
- syntactic priming