
Durham Research Online

CAM: A Combined Attention Model for natural language inference

Gajbhiye, Amit; Jaf, Sardar; Al-Moubayed, Noura; Bradley, Steven; McGough, A. Stephen (2018) 'CAM: A Combined Attention Model for natural language inference', in 2018 IEEE International Conference on Big Data (Big Data): Proceedings. Piscataway, NJ: IEEE, pp. 1009-1014.


Natural Language Inference (NLI) is a fundamental step towards natural language understanding. The task is to detect whether a premise entails or contradicts a given hypothesis. NLI contributes to a wide range of natural language understanding applications such as question answering, text summarization and information extraction. Recently, the public availability of large datasets such as Stanford Natural Language Inference (SNLI) and SciTail has made it feasible to train complex neural NLI models. In particular, Bidirectional Long Short-Term Memory networks (BiLSTMs) with attention mechanisms have shown promising performance for NLI. In this paper, we propose a Combined Attention Model (CAM) for NLI. CAM combines two attention mechanisms: intra-attention and inter-attention. The model first captures the semantics of the individual input premise and hypothesis with intra-attention, and then aligns the premise and hypothesis with inter-sentence attention. We evaluate CAM on two benchmark datasets, SNLI and SciTail, achieving 86.14% accuracy on SNLI and 77.23% on SciTail. Further, to investigate the effectiveness of each attention mechanism on its own and in combination, we present an analysis showing that the intra- and inter-attention mechanisms achieve higher accuracy when combined than when either is used independently.
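The two mechanisms named in the abstract can be illustrated in a few lines. The following is a minimal NumPy sketch, not the authors' implementation: it assumes plain dot-product scoring over already-encoded token vectors (standing in for BiLSTM hidden states), whereas the paper's exact scoring functions, encoder, and classification layers are not reproduced here. Intra-attention lets each token attend to the other tokens of its own sentence; inter-attention aligns each premise token with the hypothesis tokens and vice versa.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def intra_attention(H):
    """Self-attention within one sentence.
    H: [len, dim] token representations; each output row is a
    similarity-weighted mixture of all tokens in the same sentence."""
    scores = H @ H.T                      # [len, len] token-token scores
    return softmax(scores, axis=-1) @ H   # context-aware token vectors

def inter_attention(P, H):
    """Cross-sentence alignment between premise P and hypothesis H.
    Each premise token is re-expressed as a weighted sum of hypothesis
    tokens, and vice versa (soft alignment)."""
    e = P @ H.T                           # [len_p, len_h] alignment scores
    P_aligned = softmax(e, axis=1) @ H    # premise aligned to hypothesis
    H_aligned = softmax(e.T, axis=1) @ P  # hypothesis aligned to premise
    return P_aligned, H_aligned

# Toy "encoded" sentences (rows = tokens, columns = hypothetical
# 8-dimensional encoder features); real inputs would come from a BiLSTM.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 8))   # premise: 5 tokens
Hy = rng.standard_normal((7, 8))  # hypothesis: 7 tokens

# Combined order as described in the abstract: intra- first, then inter-.
P_intra = intra_attention(P)
H_intra = intra_attention(Hy)
P_al, H_al = inter_attention(P_intra, H_intra)
print(P_intra.shape, P_al.shape, H_al.shape)  # (5, 8) (5, 8) (7, 8)
```

The aligned representations would then typically be compared (e.g. by concatenation, difference, and element-wise product) and fed to a classifier that predicts entailment, contradiction, or neutrality.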

Item Type: Book chapter
Full text: (AM) Accepted Manuscript
Publisher statement: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: 16 October 2018
Date deposited: 15 November 2018
Date of first online publication: 10 December 2018
Date first made open access: No date available
