TY - JOUR
T1 - Ensemble-based deep reinforcement learning for chatbots
AU - Cuayáhuitl, Heriberto
AU - Lee, Donghyeon
AU - Ryu, Seonghan
AU - Cho, Yongjin
AU - Choi, Sungja
AU - Indurthi, Satish
AU - Yu, Seunghak
AU - Choi, Hyungtak
AU - Hwang, Inchul
AU - Kim, Jihie
N1 - Publisher Copyright:
© 2019 Elsevier B.V.
PY - 2019/11/13
Y1 - 2019/11/13
N2 - Trainable chatbots that exhibit fluent and human-like conversations remain a major challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, dialogue actions are derived from sentence clustering, while the training datasets in our ensemble are derived from dialogue clustering; the latter aims to induce specialised agents that learn to interact in a particular style. To facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.
AB - Trainable chatbots that exhibit fluent and human-like conversations remain a major challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, dialogue actions are derived from sentence clustering, while the training datasets in our ensemble are derived from dialogue clustering; the latter aims to induce specialised agents that learn to interact in a particular style. To facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.
KW - Deep supervised/unsupervised/reinforcement learning
KW - Neural chatbots
UR - http://www.scopus.com/inward/record.url?scp=85070188115&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2019.08.007
DO - 10.1016/j.neucom.2019.08.007
M3 - Article
AN - SCOPUS:85070188115
SN - 0925-2312
VL - 366
SP - 118
EP - 130
JO - Neurocomputing
JF - Neurocomputing
ER -