DstNet: deep spatial-temporal network for real-time action recognition and localisation in untrimmed video
Online publication date: Mon, 12-Dec-2022
by Zhi Liu; Junting Li; Xian Wang
International Journal of Wireless and Mobile Computing (IJWMC), Vol. 23, No. 3/4, 2022
Abstract: Action recognition is an active research direction in computer vision, and processing human actions in untrimmed video in real time is a significant challenge with wide applications in fields such as real-time monitoring. In this paper, we propose an end-to-end Deep Spatial-Temporal Network (DstNet) for action recognition and localisation. First, the untrimmed video is clipped into segments of fixed length. Then, the Convolutional 3D (C3D) network is used to extract high-dimensional features from each segment. Finally, the feature sequences of several consecutive segments are fed into a Long Short-Term Memory (LSTM) network, which captures the intrinsic relationships among the clipped segments to perform action recognition and localisation simultaneously in the untrimmed video. While maintaining good accuracy, our network processes video in real time and achieves good results under the standard THUMOS14 evaluation protocol.
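The first stage of the pipeline described above, splitting an untrimmed video into fixed-length segments for the C3D extractor, can be sketched as follows. This is a minimal illustration, not the authors' code: the clip length of 16 frames and the 112x112 frame size are assumptions (typical C3D input dimensions), and trailing frames that do not fill a clip are simply dropped here.

```python
import numpy as np

def clip_video(frames: np.ndarray, clip_len: int = 16) -> np.ndarray:
    """Split an untrimmed video of shape (T, H, W, C) into
    fixed-length clips of shape (n_clips, clip_len, H, W, C).

    clip_len=16 is an assumed value (the usual C3D input length);
    leftover frames at the end are discarded.
    """
    n_clips = frames.shape[0] // clip_len
    return frames[: n_clips * clip_len].reshape(
        n_clips, clip_len, *frames.shape[1:]
    )

# Hypothetical untrimmed video: 100 frames of 112x112 RGB.
video = np.zeros((100, 112, 112, 3), dtype=np.uint8)
clips = clip_video(video)  # each clip would then feed the C3D extractor
print(clips.shape)         # (6, 16, 112, 112, 3)
```

In the full DstNet pipeline, each clip would be passed through C3D to obtain a feature vector, and the resulting per-clip feature sequence would be consumed by the LSTM for joint recognition and localisation.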