Considering Commonsense in Solving QA: Reading Comprehension with Semantic Search and Continual Learning

Seungwon Jeong, Dongsuk Oh, Kinam Park, Heuiseok Lim

Research output: Contribution to journal › Article › peer-review

Abstract

Unlike previous dialogue-based question-answering (QA) datasets, DREAM, a multiple-choice Dialogue-based REAding comprehension exaMination dataset, requires a deep understanding of dialogue. Many of its problems require multi-sentence reasoning, and some require commonsense reasoning. However, most pre-trained language models (PTLMs) do not consider commonsense. In addition, because the maximum number of tokens that a language model (LM) can handle is limited, the entire dialogue history cannot be included; the resulting information loss adversely affects performance. To address these problems, we propose a Dialogue-based QA model with Commonsense Reasoning (DQACR), a language model that exploits Semantic Search and continual learning. We use Semantic Search to compensate for the information lost from truncated dialogue, and we use Semantic Search together with continual learning to improve the PTLM's commonsense reasoning. Our model achieves an improvement of approximately 1.5% over the baseline method and can thus facilitate QA-related tasks. It contributes not only to dialogue-based QA tasks but also to other forms of QA datasets in future work.
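The retrieval idea in the abstract — using Semantic Search to keep only the dialogue turns most relevant to the question when the full history exceeds the LM's token limit — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: real semantic search would rank turns with dense sentence embeddings, whereas here a simple bag-of-words cosine similarity stands in for the embedding model, and the `retrieve_turns` function and example dialogue are invented for illustration.

```python
# Toy sketch: when a dialogue is too long for the LM's context window,
# keep only the k turns most similar to the question.
# A bag-of-words cosine similarity stands in for a dense sentence-embedding
# model (an assumption for illustration, not the paper's actual method).
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_turns(dialogue: list[str], question: str, k: int) -> list[str]:
    """Score each turn against the question, keep the top-k,
    then restore original dialogue order so the context stays coherent."""
    q_vec = Counter(question.lower().split())
    scored = [(cosine(Counter(t.lower().split()), q_vec), i)
              for i, t in enumerate(dialogue)]
    top_k = sorted(sorted(scored, reverse=True)[:k], key=lambda s: s[1])
    return [dialogue[i] for _, i in top_k]

dialogue = [
    "M: Did you water the plants today?",
    "W: No, I forgot again.",
    "M: The concert starts at eight tonight.",
    "W: Then we should leave home by seven.",
]
print(retrieve_turns(dialogue, "What time does the concert start?", k=2))
```

In the full system, the retrieved turns would be concatenated with the question and answer options before being fed to the PTLM, so the truncation imposed by the token limit discards the least relevant turns rather than an arbitrary suffix of the dialogue.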

Original language: English
Article number: 4099
Journal: Applied Sciences (Switzerland)
Volume: 12
Issue number: 9
DOIs
State: Published - 1 May 2022

Keywords

  • commonsense reasoning
  • deep learning
  • dialogue-based multiple-choice QA
  • pre-trained language models
  • semantic search
