Title: DstNet: deep spatial-temporal network for real-time action recognition and localisation in untrimmed video
Authors: Zhi Liu; Junting Li; Xian Wang
Addresses: School of Liangjiang AI, Chongqing University of Technology, Banan District, Chongqing, China (all three authors)
Abstract: Action recognition is an active research direction in computer vision, and handling human actions in untrimmed video in real time is a significant challenge with wide applications such as real-time monitoring. In this paper, we propose an end-to-end Deep Spatial-Temporal Network (DstNet) for action recognition and localisation. First, the untrimmed video is clipped into segments of fixed length. Then, the Convolutional 3 Dimension (C3D) network is used to extract high-dimensional features from each segment. Finally, the feature sequences of several consecutive segments are input into a Long Short-Term Memory (LSTM) network to capture the intrinsic relationships among the clipped segments and to perform action recognition and localisation simultaneously in the untrimmed video. While maintaining good accuracy, our network processes video in real time and achieves good results under the standard evaluation protocol of THUMOS14.
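The first stage of the pipeline described above, clipping an untrimmed video into fixed-length segments before C3D feature extraction, can be sketched in plain Python. The segment length of 16 frames is an assumption (C3D conventionally consumes 16-frame clips; the abstract only says "fixed length"), and the handling of leftover frames is a simplification:

```python
def clip_segments(num_frames, seg_len=16):
    """Split an untrimmed video's frame indices into consecutive
    fixed-length segments. Trailing frames that do not fill a whole
    segment are dropped (a simplifying assumption, not from the paper)."""
    return [list(range(start, start + seg_len))
            for start in range(0, num_frames - seg_len + 1, seg_len)]

# Example: a 100-frame video yields 6 full 16-frame segments;
# each segment would be passed to C3D, and the resulting feature
# sequence fed to the LSTM for recognition and localisation.
segments = clip_segments(100, seg_len=16)
```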
Keywords: action recognition; action localisation; LSTM; C3D; untrimmed video.
DOI: 10.1504/IJWMC.2022.127597
International Journal of Wireless and Mobile Computing, 2022, Vol.23, No.3/4, pp.310-317
Received: 10 Oct 2021
Received in revised form: 23 Dec 2021
Accepted: 02 Mar 2022
Published online: 12 Dec 2022