TY - JOUR
T1 - Investigating Grammatical Transfer in Korean-English GPT-2 Language Models
AU - Koo, Keonwoo
AU - Lee, Jaemin
AU - Park, Myung Kwan
N1 - Publisher Copyright:
© 2024 KASELL All rights reserved.
PY - 2024
Y1 - 2024
N2 - With the recent success of artificial neural language models (LMs), their language acquisition has gained much attention (Futrell et al. 2019, Hu et al. 2020, Linzen et al. 2016, Warstadt et al. 2020, Wilcox et al. 2018). This paper delves into their second language (L2) acquisition, a largely unexplored area compared to their first language (L1) learning. The primary focus is on unraveling transfer effects originating from the L1’s linguistic structures. By closely examining our LMs’ performance on English grammar tasks, this study inspects how LMs encode abstract grammatical knowledge, particularly how pre-training biases acquired from Korean (L1) influence English (L2) performance. We present exploratory experiments in which LMs were first trained on a dataset representing the initial (L1) acquisition stage and then fine-tuned on a second-language (L2) dataset. We analyzed cross-lingual transfer effects across diverse linguistic phenomena with the BLiMP test suite. We found that L1 pre-training did not accelerate linguistic generalization in the second language. Furthermore, our results revealed significant L1 interference, where knowledge of the initial language hindered the LMs' ability to acquire and apply second-language rules.
KW - GPT-2
KW - L1-interference
KW - neural language model
KW - second language acquisition
KW - transfer effects
UR - http://www.scopus.com/inward/record.url?scp=85197383955&partnerID=8YFLogxK
U2 - 10.15738/kjell.24..202406.568
DO - 10.15738/kjell.24..202406.568
M3 - Article
AN - SCOPUS:85197383955
SN - 1598-1398
VL - 24
SP - 568
EP - 588
JO - Korean Journal of English Language and Linguistics
JF - Korean Journal of English Language and Linguistics
ER -
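
For readers who want a concrete picture of the evaluation step the abstract describes, the sketch below shows how a BLiMP-style minimal-pair test is typically scored with a GPT-2 model: the model is credited with an item when it assigns higher total log-probability to the grammatical sentence than to its ungrammatical twin. This is a minimal illustration assuming the HuggingFace transformers library; the "gpt2" checkpoint name and the example pair are placeholders, not the paper's Korean-pretrained models or test items.

# A minimal sketch of BLiMP-style minimal-pair scoring, assuming the
# HuggingFace transformers library; "gpt2" and the example sentences are
# illustrative placeholders, not the authors' checkpoints or materials.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of a sentence under the LM (higher = more probable)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean negative
        # log-likelihood per predicted token; scale back to a total.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# An anaphor-agreement minimal pair in the style of BLiMP items.
good = "The boys praised themselves."
bad = "The boys praised himself."
print("pass" if sentence_logprob(good) > sentence_logprob(bad) else "fail")

Accuracy over a BLiMP phenomenon is then simply the proportion of pairs the model scores correctly, which is how per-phenomenon transfer and interference effects like those reported here are usually tabulated.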