TY - JOUR
T1 - Deep learning can contrast the minimal pairs of syntactic data
AU - Park, Kwonsik
AU - Park, Myung Kwan
AU - Song, Sanghoun
N1 - Publisher Copyright:
© 2021. All rights reserved.
PY - 2021/6
Y1 - 2021/6
AB - The present work assesses the feasibility of using deep learning as a tool for investigating syntactic phenomena. To this end, the study addresses three research questions: (i) whether deep learning can detect syntactically inappropriate constructions, (ii) whether deep learning’s acceptability judgments are accountable, and (iii) whether deep learning’s acceptability judgments pattern like human judgments. As a proxy for a deep learning language model, the study uses BERT. The test materials comprise syntactically contrasted pairs of English sentences drawn from three existing test suites. The first consists of 196 grammatical–ungrammatical minimal pairs from DeKeyser (2000). The second consists of examples from four published syntax textbooks, excerpted from Warstadt et al. (2019). The third is extracted from Sprouse et al. (2013), which collects examples reported in the theoretical linguistics journal Linguistic Inquiry. Two BERT models, base BERT and large BERT, are assessed by judging the acceptability of items in the test suites using surprisal as the evaluation metric; surprisal measures how ‘surprised’ a model is when encountering a word in a sequence of words (i.e., a sentence), formally the negative log probability of the word given its context. The results are analyzed in two frameworks: directionality and repulsion. The directionality results reveal that both versions of BERT are overall competent at distinguishing ungrammatical sentences from grammatical ones. The statistical results for both repulsion and directionality also reveal that the two variants of BERT do not differ significantly. Regarding repulsion, correct and incorrect judgments differ significantly. Additionally, repulsion on the first test suite, whose items were designed to test learners’ grammaticality judgments, is higher than on the other two suites, which are drawn from syntax textbooks and the published literature. The study also compares BERT’s acceptability judgments with the magnitude estimation results reported in Sprouse et al. (2013) to examine whether deep learning’s syntactic knowledge is akin to human knowledge. Error analyses of incorrectly judged items reveal syntactic constructions that both BERT models have trouble learning, indicating that BERT’s acceptability judgments are not randomly distributed.
KW - BERT
KW - contrast
KW - deep learning
KW - minimal pair
KW - syntactic judgment
UR - http://www.scopus.com/inward/record.url?scp=85110941008&partnerID=8YFLogxK
U2 - 10.17250/khisli.38.2.202106.008
DO - 10.17250/khisli.38.2.202106.008
M3 - Article
AN - SCOPUS:85110941008
SN - 1229-1374
VL - 38
SP - 395
EP - 424
JO - Linguistic Research
JF - Linguistic Research
IS - 2
ER -