

Durham Research Online
Recognising human-object interactions using attention-based LSTMs.

Almushyti, Muna and Li, Frederick W. B. (2019) 'Recognising human-object interactions using attention-based LSTMs', in Computer Graphics and Visual Computing (CGVC), pp. 135-139.

Abstract

Recognising human-object interactions (HOIs) in videos is a challenging task, especially when a human can interact with multiple objects. This paper addresses HOI recognition by proposing a hierarchical framework that analyses human-object interactions from a video sequence. The framework consists of LSTMs that first capture human motion and temporal object information independently; this information is then fused through a bilinear layer to aggregate human-object features, which are fed to a global deep LSTM to learn high-level information about HOIs. The proposed approach applies an attention mechanism to the LSTMs in order to focus on the important parts of the human and object temporal information.
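
As a rough illustration of the pipeline described in the abstract, the sketch below shows how two stream LSTMs, temporal attention, a bilinear fusion layer, and a global LSTM could be wired together in PyTorch. All module names, feature dimensions, the attention formulation, and the classification head are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of a hierarchical attention-based LSTM for HOI recognition.
# Dimensions, layer counts, and the classifier are illustrative assumptions.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Re-weights LSTM outputs across time with soft attention scores."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, seq):                              # seq: (batch, time, hidden)
        weights = torch.softmax(self.score(seq), dim=1)  # (batch, time, 1)
        return weights * seq                             # emphasised sequence


class HOIRecogniser(nn.Module):
    """Human and object streams -> bilinear fusion -> global LSTM -> class scores."""

    def __init__(self, human_dim, obj_dim, hidden_dim, num_classes):
        super().__init__()
        # Stream LSTMs capture human motion and object temporal information independently.
        self.human_lstm = nn.LSTM(human_dim, hidden_dim, batch_first=True)
        self.obj_lstm = nn.LSTM(obj_dim, hidden_dim, batch_first=True)
        # Attention focuses each stream on its important time steps.
        self.human_attn = TemporalAttention(hidden_dim)
        self.obj_attn = TemporalAttention(hidden_dim)
        # Bilinear layer aggregates human and object features.
        self.fusion = nn.Bilinear(hidden_dim, hidden_dim, hidden_dim)
        # Global deep LSTM learns high-level information of the interaction.
        self.global_lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, human_feats, obj_feats):
        # human_feats: (batch, time, human_dim); obj_feats: (batch, time, obj_dim)
        h_seq, _ = self.human_lstm(human_feats)
        o_seq, _ = self.obj_lstm(obj_feats)
        h_seq = self.human_attn(h_seq)
        o_seq = self.obj_attn(o_seq)
        fused = self.fusion(h_seq, o_seq)                # per-frame fused features
        g_seq, _ = self.global_lstm(fused)
        return self.classifier(g_seq[:, -1])             # score from last global state


# Example with hypothetical feature sizes: 8 clips, 16 frames, 10 interaction classes.
model = HOIRecogniser(human_dim=512, obj_dim=512, hidden_dim=256, num_classes=10)
scores = model(torch.randn(8, 16, 512), torch.randn(8, 16, 512))
print(scores.shape)  # torch.Size([8, 10])
```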

Item Type: Book chapter
Full text: Publisher-imposed embargo; (AM) Accepted Manuscript, PDF (882Kb)
Status: Peer-reviewed
Publisher Web site: https://doi.org/10.2312/cgvc.20191269
Date accepted: 22 July 2019
Date deposited: 30 October 2019
Date of first online publication: 12 September 2019
Date first made open access: No date available
