Classification of Yoga Asanas from a Single Image by Learning the 3D View of Human Poses
In this chapter, we propose a technique for classifying yoga poses (asanas) by learning the 3D landmark points of the human pose from a single image. We apply an encoder architecture followed by a regression layer to estimate pose parameters such as shape, gesture, and camera position, which are then mapped to 3D landmark points by the SMPL (Skinned Multi-Person Linear) model. The 3D landmark points of each image serve as the features for pose classification. We experiment with different classification models, including k-nearest neighbors (kNN), support vector machines (SVM), and popular deep neural networks such as AlexNet, VGGNet, and ResNet. Since this is the first attempt to classify yoga asanas, no dataset is available in the literature. We therefore introduce an annotated dataset of yoga pose images and validate the proposed method on it.
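As a rough illustration of the final classification stage described above, the sketch below (not the authors' code) assumes each image has already been reduced to 3D landmark points, here taken to be 24 SMPL joints with (x, y, z) coordinates, which are flattened and fed to classical classifiers such as kNN and SVM via scikit-learn. The joint count, number of asana classes, and synthetic data are all placeholder assumptions standing in for the annotated dataset.

```python
# Minimal sketch of landmark-based asana classification (assumptions noted below).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

NUM_JOINTS = 24      # assumed SMPL joint count
NUM_CLASSES = 10     # hypothetical number of asana classes
NUM_SAMPLES = 500    # synthetic stand-in for the annotated dataset

# Synthetic 3D landmarks standing in for the output of the
# encoder + SMPL regression step described in the chapter.
rng = np.random.default_rng(0)
X = rng.normal(size=(NUM_SAMPLES, NUM_JOINTS * 3))   # flattened (x, y, z) per joint
y = rng.integers(0, NUM_CLASSES, size=NUM_SAMPLES)   # asana class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Two of the classical classifiers mentioned in the abstract.
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy on synthetic data: {acc:.2f}")
```

With real features, the same flattened 3D landmark vectors could equally be passed to the deep networks mentioned above (AlexNet, VGGNet, ResNet) after appropriate reshaping; the choice of classifier is independent of the SMPL-based feature extraction step.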