(AL)BERT Down the Garden Path: Psycholinguistic Experiments for Pre-trained Language Models

Research output: Contribution to journal › Article › peer-review

Abstract

This study compared the syntactic capabilities of several neural language models (LMs), including Transformers (BERT/ALBERT) and an LSTM, and investigated whether they exhibit human-like syntactic representations through a targeted evaluation approach, a method for assessing the syntactic processing ability of LMs using sentences designed for psycholinguistic experiments. Employing garden-path structures with several linguistic manipulations, we assessed whether LMs detect temporary ungrammaticality and use linguistic cues such as plausibility, transitivity, and morphology. The results showed that both the Transformers and the LSTM exploited several linguistic cues for incremental syntactic processing, comparable to human syntactic processing. They differed, however, in whether and how they used each cue. Overall, the Transformers had more human-like syntactic representations than the LSTM, given their higher sensitivity to plausibility and their ability to retain information from previous words. Meanwhile, the number of parameters did not seem to undermine the performance of the LMs, contrary to what previous studies predicted. Through these findings, this research seeks to contribute to a greater understanding of the syntactic processing of neural language models as well as human language processing.
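The targeted evaluation approach described above is typically implemented by measuring a model's surprisal at the disambiguating word of a garden-path sentence and comparing it against an unambiguous control. The sketch below illustrates the idea with the HuggingFace transformers library and bert-base-uncased; the checkpoint, the sentence pair, and the masked-token formulation are assumptions for illustration, not the paper's exact materials or metric.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical setup: the paper tests BERT/ALBERT; bert-base-uncased stands in here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def masked_surprisal(sentence: str, target: str) -> float:
    """Surprisal in bits of `target` at the [MASK] position of `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    # Locate the single [MASK] token in the input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return -log_probs[target_id].item() / math.log(2)

# Classic garden path vs. unambiguous control; higher surprisal at the
# disambiguating word ("fell") in the ambiguous version would indicate that
# the model, like human readers, was led down the garden path.
ambiguous = "The horse raced past the barn [MASK] ."
control = "The horse that was raced past the barn [MASK] ."
for sent in (ambiguous, control):
    print(f"{sent:50s} surprisal(fell) = {masked_surprisal(sent, 'fell'):.2f} bits")
```

For unidirectional models such as the LSTM, the same quantity falls out directly from the left-to-right next-word distribution, with no masking needed.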

Original language: English
Pages (from-to): 1033-1050
Number of pages: 18
Journal: Korean Journal of English Language and Linguistics
Volume: 22
State: Published - 2022

Keywords

  • garden-path structure
  • natural language processing
  • psycholinguistics
  • targeted evaluation approach
  • transformers
