Investigating Grammatical Transfer in Korean-English GPT2 Language Models

Keonwoo Koo, Jaemin Lee, Myung Kwan Park

Research output: Contribution to journal › Article › peer-review

Abstract

With the recent success of artificial neural language models (LMs), their language acquisition has gained much attention (Futrell et al. 2019, Hu et al. 2020, Linzen et al. 2016, Warstadt et al. 2020, Wilcox et al. 2018). This paper delves into their second language (L2) acquisition, a largely unexplored area compared to their first language (L1) learning. The primary focus is on unraveling transfer effects originating from the L1’s linguistic structures. By closely examining our LMs’ performance on English grammar tasks, this study inspects how LMs encode abstract grammatical knowledge, particularly how pre-training biases acquired from Korean (L1) influence English (L2) performance. We present exploratory experiments in which LMs were first trained on a dataset representing the initial language acquisition stage and then fine-tuned on a second-language dataset. We analyzed cross-lingual transfer effects across diverse linguistic phenomena with the BLiMP test suite. We found that L1 pre-training did not accelerate linguistic generalization in the second language. Furthermore, our results revealed significant L1-interference, where knowledge of the initial language hindered the LMs’ ability to acquire and apply second-language rules.
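
The BLiMP evaluation mentioned in the abstract compares the probability a model assigns to each sentence in a minimal pair of grammatical and ungrammatical English sentences. The sketch below illustrates that scoring procedure only; it uses the public "gpt2" checkpoint and a hand-written example pair as stand-ins, since the paper's Korean-pre-trained, English-fine-tuned checkpoints and its exact scoring code are not given here.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical stand-in checkpoint; the paper's Korean(L1)->English(L2) models are assumed, not public here.
MODEL_NAME = "gpt2"

tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Summed log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # The LM loss is the mean negative log-likelihood over the n-1 predicted tokens;
        # multiply by that count to recover the summed log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

def prefers_grammatical(good: str, bad: str) -> bool:
    """BLiMP-style minimal-pair check: does the model score the
    grammatical sentence above its ungrammatical counterpart?"""
    return sentence_log_prob(good) > sentence_log_prob(bad)

# Example minimal pair (subject-verb agreement), not taken from BLiMP itself:
print(prefers_grammatical("The keys to the cabinet are on the table.",
                          "The keys to the cabinet is on the table."))
```

Accuracy on a BLiMP paradigm is then simply the fraction of its minimal pairs for which the grammatical sentence receives the higher score.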

Original language: English
Pages (from-to): 568-588
Number of pages: 21
Journal: Korean Journal of English Language and Linguistics
Volume: 24
State: Published - 2024

Keywords

  • GPT-2
  • L1-interference
  • neural language model
  • second language acquisition
  • transfer effects
