Abstract
A commonsense question answering (CSQA) system predicts the correct answer based on a comprehensive understanding of the question. Previous research has developed models that take QA pairs, the corresponding evidence, or a knowledge graph as input; each executes QA tasks using representations from pre-trained language models. However, whether a pre-trained language model fully comprehends the question remains debatable. In this study, adversarial-attack experiments were conducted on question understanding. We examined the limitations of the pre-trained language model's question-reasoning process and demonstrated the need for models to exploit the logical structure of abstract meaning representations (AMRs). Additionally, the experimental results showed that the method performed best when the AMR graph was extended with ConceptNet. With this extension, our proposed method outperformed the baseline on diverse commonsense-reasoning QA tasks.
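As a rough illustration of the idea described in the abstract (not the paper's actual implementation), extending an AMR-style graph with ConceptNet relations can be sketched as follows. The AMR triples and the ConceptNet lookup table below are hypothetical stand-ins for a real AMR parser's output and real ConceptNet edge queries.

```python
# Minimal sketch: extend an AMR-style graph with ConceptNet-like relations.
# The AMR triples and the ConceptNet table are illustrative placeholders,
# not the paper's data or code.

def extend_with_conceptnet(amr_edges, conceptnet):
    """Append ConceptNet relations for every concept node in the AMR graph.

    amr_edges:  list of (head, relation, tail) triples
    conceptnet: dict mapping a concept to a list of (relation, neighbor) pairs
    """
    concepts = {h for h, _, _ in amr_edges} | {t for _, _, t in amr_edges}
    extended = list(amr_edges)
    for concept in concepts:
        for relation, neighbor in conceptnet.get(concept, []):
            extended.append((concept, relation, neighbor))
    return extended

# AMR-like triples for "the dog chased the ball" (illustrative only)
amr_edges = [
    ("chase-01", ":ARG0", "dog"),
    ("chase-01", ":ARG1", "ball"),
]

# Tiny stand-in for ConceptNet edge lookups
conceptnet = {
    "dog": [("IsA", "animal"), ("CapableOf", "run")],
    "ball": [("UsedFor", "play")],
}

graph = extend_with_conceptnet(amr_edges, conceptnet)
```

In practice, the AMR triples would come from an AMR parser and the neighbor lists from ConceptNet edge queries; the sketch only shows how the two graphs are merged into one extended graph for downstream reasoning.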
| Original language | English |
| --- | --- |
| Article number | 9202 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 12 |
| Issue number | 18 |
| DOIs | |
| State | Published - Sep 2022 |
Keywords
- abstract meaning representation
- commonsense question and answering
- commonsense reasoning
- ConceptNet
- pre-trained language model
- semantic representation
- sub-symbolic