Decoding BERT’s Internal Processing of Garden-Path Structures through Attention Maps*

Jonghyun Lee, Jeong Ah Shin

Research output: Contribution to journal › Article › peer-review


Abstract

Recent advances in deep learning neural models such as BERT have demonstrated remarkable performance on natural language processing tasks, yet understanding their internal processing remains a challenge. This study examines attention maps to uncover the internal processing of BERT when it handles garden-path sentences. The analysis focuses on BERT's use of linguistic cues, such as transitivity, plausibility, and the presence of a comma, and evaluates its capacity to reanalyze misinterpretations. The results reveal that BERT exhibits human-like syntactic processing: it attends to the presence of a comma, shows sensitivity to transitivity, and reanalyzes misinterpretations, although it initially lacks sensitivity to plausibility. By concentrating on attention maps, the present study provides insight into the inner workings of BERT and contributes to a deeper understanding of how advanced neural language models acquire and process complex linguistic structures.
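The probing method the abstract describes, inspecting BERT's attention weights on garden-path sentences, can be reproduced in outline with the Hugging Face transformers library. The sketch below is illustrative rather than the authors' code: the checkpoint (bert-base-uncased), the example sentence, and the layer inspected are assumptions made for demonstration.

```python
# Minimal sketch (not the authors' code): extracting BERT attention maps for a
# garden-path sentence. Checkpoint, sentence, and layer are illustrative choices.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

# Garden-path sentence: "the dog" is first read as the object of "walked",
# then must be reanalyzed as the subject of "barked".
sentence = "While the man walked the dog barked loudly."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len) holding attention weights.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
layer = 8                                             # arbitrary mid-to-late layer
avg_heads = outputs.attentions[layer][0].mean(dim=0)  # (seq_len, seq_len)

# How does the ambiguous noun "dog" distribute its attention over the sentence?
dog_idx = tokens.index("dog")
for tok, weight in zip(tokens, avg_heads[dog_idx]):
    print(f"{tok:>8s}  {weight.item():.3f}")
```

Comparing such attention distributions for sentences with and without a disambiguating comma, or with verbs differing in transitivity, is one way to operationalize the cues the study investigates.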

Original language: English
Pages (from-to): 461-481
Number of pages: 21
Journal: Korean Journal of English Language and Linguistics
Volume: 23
State: Published - 2023

Keywords

  • attention map
  • garden-path structure
  • Natural Language Processing
  • Psycholinguistics
  • Transformers
