
Long-term Spatio-temporal Contrastive Learning framework for Skeleton Action Recognition

Anshul Rustogi
Published in IEEE
2022
Abstract

Recent years have witnessed significant developments in research on human action recognition based on skeleton data. The graphical representation of the human skeleton, available with the dataset, provides an opportunity to apply Graph Convolutional Networks (GCNs) for efficient analysis of deep spatio-temporal information from the joint and skeleton structure. Most current work in skeleton action recognition uses the temporal aspect of the video only over short-term sequences, ignoring the long-term information present in the evolving skeleton sequence. The proposed Long-term Spatio-temporal Contrastive Learning framework for Skeleton Action Recognition uses an encoder-decoder module. The encoder collects deep global-level (long-term) information from the complete action sequence using efficient self-supervision. The proposed encoder combines knowledge from the temporal domain with high-level information about the relative joint and structure movements of the skeleton. The decoder serves two purposes: predicting the human activity and predicting the skeleton structure in future frames. The decoder primarily uses the high-level encodings from the encoder to anticipate the action. For predicting the skeleton structure, we extract an even deeper correlation in the spatio-temporal domain and merge it with the original frame of the video. We apply a contrastive framework to the frame prediction part so that similar actions have similar predicted skeleton structures. The use of the contrastive framework throughout the proposed model helps achieve strong performance while adding a self-supervised aspect to the model. We test our model on the NTU-RGB-D-60 dataset and achieve state-of-the-art performance. The code for this work is available at: \url{https://github.com/AnshulRustogi/Long-Term-Spatio-Temporal-Framework}.
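The abstract describes an encoder-decoder with two prediction heads (action classification and future skeleton frames) and a contrastive objective on the predicted frames. The sketch below is only a rough illustration of that structure, not the authors' implementation (which is in the linked repository): it assumes NTU-RGB-D-60-style input of shape (batch, frames, 25 joints, 3 coordinates), substitutes a GRU encoder for the paper's GCN-based encoder, uses simple linear heads, and applies an NT-Xent-style loss that pulls together predicted skeletons of same-action sequences. All component choices here are assumptions.

```python
# Illustrative sketch only; see the authors' GitHub repository for the real model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LongTermEncoderDecoder(nn.Module):
    """Assumed minimal encoder-decoder: a GRU stands in for the paper's encoder."""
    def __init__(self, joints=25, dims=3, hidden=256, classes=60, future=10):
        super().__init__()
        self.encoder = nn.GRU(joints * dims, hidden, batch_first=True)  # long-term temporal encoding
        self.action_head = nn.Linear(hidden, classes)                   # activity prediction
        self.frame_head = nn.Linear(hidden, future * joints * dims)     # future skeleton prediction
        self.future = future

    def forward(self, skel):                        # skel: (B, T, J, D)
        b, t, j, d = skel.shape
        _, h = self.encoder(skel.reshape(b, t, j * d))
        h = h.squeeze(0)                            # (B, hidden) global sequence encoding
        logits = self.action_head(h)
        frames = self.frame_head(h).reshape(b, self.future, j, d)
        return logits, frames

def frame_contrastive_loss(pred_frames, labels, temperature=0.1):
    """NT-Xent-style loss that pulls together predicted skeleton structures of
    sequences sharing an action label (an assumed stand-in for the paper's
    contrastive framework)."""
    z = F.normalize(pred_frames.flatten(1), dim=1)         # (B, future*J*D)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))                      # exclude self-similarity
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)       # same-action pairs
    pos.fill_diagonal_(False)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# Usage on random data:
model = LongTermEncoderDecoder()
skel = torch.randn(8, 64, 25, 3)                   # 8 clips, 64 frames each
labels = torch.randint(0, 60, (8,))
logits, frames = model(skel)
loss = F.cross_entropy(logits, labels) + frame_contrastive_loss(frames, labels)
loss.backward()
```

In this sketch the total loss combines supervised action classification with the contrastive term on predicted frames; the paper's actual objective, encoder architecture, and frame-prediction pathway differ and should be taken from the repository.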

About the journal
Published in: IEEE
Open Access: No
Impact factor: N/A