Abstract
Multi-premise natural language inference provides important technical support for application fields such as automatic question answering and machine reading comprehension. Existing approaches to the Multiple Premise Entailment (MPE) task convert MPE data into the Single Premise Entailment (SPE) format and then handle MPE in the same way as SPE. This process ignores the unique characteristics of multiple premises, which results in a loss of semantics. This paper proposes a mechanism based on an Attention and Gate Fusion Network (AGNet). AGNet adopts a “Local Matching-Integration” strategy that accounts for the characteristics of multiple premises. In this process, an attention mechanism combined with a matching gate mechanism fully describes the relationship between each premise and the hypothesis, while a self-attention mechanism and a fusion gate mechanism deeply exploit the relationships among the premises. To avoid over-fitting, we propose a pre-training method for our model. In terms of computational complexity, AGNet has good parallelism, reducing the time complexity of the matching process to O(1). Experiments show that our model achieves new state-of-the-art results on the MPE test set.
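The abstract does not spell out the equations, but the “Local Matching-Integration” idea can be illustrated with a minimal PyTorch sketch. The class names `GatedMatch` and `FuseP`, the scaled dot-product attention, and the sigmoid gating form below are assumptions for illustration only, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMatch(nn.Module):
    """Local matching: cross-attention between one premise and the
    hypothesis, combined through a sigmoid matching gate (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, premise, hypothesis):
        # premise: (B, Lp, D), hypothesis: (B, Lh, D)
        scores = torch.bmm(premise, hypothesis.transpose(1, 2)) / premise.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)       # align each premise token to the hypothesis
        aligned = torch.bmm(attn, hypothesis)  # (B, Lp, D)
        g = torch.sigmoid(self.gate(torch.cat([premise, aligned], dim=-1)))
        return g * premise + (1 - g) * aligned  # gated local match

class FuseP(nn.Module):
    """Integration: self-attention over per-premise match vectors,
    merged through a fusion gate (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, matches):
        # matches: (B, P, D) - one pooled vector per premise
        scores = torch.bmm(matches, matches.transpose(1, 2)) / matches.size(-1) ** 0.5
        ctx = torch.bmm(F.softmax(scores, dim=-1), matches)
        g = torch.sigmoid(self.gate(torch.cat([matches, ctx], dim=-1)))
        fused = g * matches + (1 - g) * ctx
        return fused.mean(dim=1)               # (B, D) joint representation

# Toy usage with random embeddings: 4 premises, hypothesis of length 12
B, P, Lp, Lh, D = 2, 4, 10, 12, 64
premises = torch.randn(B, P, Lp, D)
hyp = torch.randn(B, Lh, D)
match, fuse = GatedMatch(D), FuseP(D)
pooled = torch.stack([match(premises[:, i], hyp).max(dim=1).values for i in range(P)], dim=1)
rep = fuse(pooled)  # (B, D)
```

Because each premise attends to the hypothesis independently, the per-premise matches can run as one batched pass; this is one way to read the abstract's claim that the matching step is parallel with O(1) time complexity in sequential depth.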
| Original language | English |
|---|---|
| Article number | 113214 |
| Journal | Expert Systems with Applications |
| Volume | 147 |
| DOIs | |
| Publication status | Published - 1 Jun 2020 |
Scopus Subject Areas
- General Engineering
- Computer Science Applications
- Artificial Intelligence
User-Defined Keywords
- Attention mechanism
- Fine-tune
- Gate mechanism
- Multiple premise entailment
- Natural language inference