Self-learning architecture for natural language generation

Hyungtak Choi, K. M. Siddarth, Haehun Yang, Heesik Jeon, Inchul Hwang, Jihie Kim

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution > peer-review


Abstract

In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants. Generating templates to cover all the combinations of slots in an intent is time-consuming and labor-intensive. To reduce the human labor required for template generation, we examine three models based on our proposed architecture for the IoT domain: a Rule-based model, a Sequence-to-Sequence (Seq2Seq) model, and a Semantically Conditioned LSTM (SC-LSTM) model. We demonstrate the feasibility of template generation for the IoT domain using our self-learning architecture. In both automatic and human evaluation, the self-learning architecture outperforms previous work trained on a fully human-labeled dataset, which is promising for commercial conversational assistant solutions.
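The combinatorial cost the abstract describes can be made concrete. The sketch below (plain Python; all slot names, templates, and the function name generate are hypothetical illustrations, not taken from the paper) shows a rule-based generator of the kind the authors use as one of their three models: one hand-written surface template per slot combination, whose count grows exponentially in the number of slots per intent, motivating the self-learning architecture.

    from itertools import combinations

    # Hypothetical slot inventory for a single IoT intent (illustrative
    # only; these slot names are assumptions, not from the paper).
    SLOTS = ["device", "location", "time", "brightness"]

    # Rule-based model: one hand-written template per slot combination,
    # keyed by the sorted tuple of slot names. Covering every combination
    # by hand is the time-consuming, labor-intensive part.
    RULES = {
        ("device",): "Okay, turning on the {device}.",
        ("device", "location"): "Okay, turning on the {device} in the {location}.",
        ("device", "location", "time"): (
            "Okay, I'll turn on the {device} in the {location} at {time}."
        ),
    }

    def generate(slot_values: dict) -> str:
        """Pick the template matching the filled slots and substitute values."""
        key = tuple(sorted(slot_values))
        template = RULES.get(key)
        if template is None:
            raise KeyError(f"No hand-written template covers slots {key}")
        return template.format(**slot_values)

    if __name__ == "__main__":
        # Even 4 slots already require up to 2**4 - 1 = 15 templates
        # for one intent; real IoT domains multiply this across intents.
        n_combos = sum(1 for r in range(1, len(SLOTS) + 1)
                       for _ in combinations(SLOTS, r))
        print(f"{n_combos} slot combinations to cover for one intent")
        print(generate({"device": "lamp", "location": "kitchen"}))

Under this framing, the Seq2Seq and SC-LSTM models in the paper replace the hand-written RULES table with a learned mapping from slot combinations to surface text, which is what lets the self-learning architecture cut the human labeling effort.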

Original language: English
Title of host publication: INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 165-170
Number of pages: 6
ISBN (Electronic): 9781948087865
State: Published - 2018
Event: 11th International Natural Language Generation Conference, INLG 2018 - Tilburg, Netherlands
Duration: 5 Nov 2018 - 8 Nov 2018

Publication series

Name: INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference

Conference

Conference: 11th International Natural Language Generation Conference, INLG 2018
Country/Territory: Netherlands
City: Tilburg
Period: 5/11/18 - 8/11/18

