3D Pose Based Feedback for Physical Exercises
Ziyi Zhao1, Sena Kiciroglu1, Hugues Vinzant2, Yuan Cheng2,
Isinsu Katircioglu2, Mathieu Salzmann1,3, Pascal Fua1
1 CVLab, EPFL, Switzerland
2 Former students of CVLab, EPFL, Switzerland
3 Clearspace, Switzerland
Abstract
Unsupervised self-rehabilitation exercises and physical training can cause serious injuries if performed incorrectly. We introduce a learning-based framework that identifies the mistakes made by a user and proposes corrective measures for easier and safer individual training. Our framework does not rely on hard-coded, heuristic rules. Instead, it learns them from data, which facilitates its adaptation to specific user needs. To this end, we use a Graph Convolutional Network (GCN) architecture acting on the user's pose sequence to model the relationships between the body joint trajectories. To evaluate our approach, we introduce a dataset with 3 different physical exercises. Our approach yields 90.9% mistake identification accuracy and successfully corrects 94.2% of the mistakes.
Exercise Feedback Framework
Our framework for providing exercise feedback relies on GCNs, which can learn to exploit the relationships between the trajectories of individual joints. The overall model consists of two branches: a classification branch, which predicts whether the input motion is correct or incorrect and, in the latter case, specifies the mistake being made, and a correction branch, which outputs a corrected 3D pose sequence as detailed feedback to the user.
We feed the predicted action labels from the classification branch to the correction branch through what we call the "feedback module". This explicitly provides label information to the correction branch, further improving the accuracy of the corrected motion.
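The two-branch idea can be sketched as follows. This is a hypothetical, minimal numpy illustration of a shared graph convolution feeding a classification head and a label-conditioned correction head; the layer sizes, adjacency construction, and weight shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: aggregate joint features via the
    row-normalized adjacency A, apply a linear map W, then ReLU."""
    return np.maximum(A @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_joints, feat_in, feat_hid, n_classes = 17, 3, 16, 4  # illustrative sizes

# Adjacency over body joints with self-loops, row-normalized.
A = np.eye(n_joints) + (rng.random((n_joints, n_joints)) > 0.8)
A = A / A.sum(axis=1, keepdims=True)

X = rng.standard_normal((n_joints, feat_in))         # one 3D pose "frame"
W_shared = rng.standard_normal((feat_in, feat_hid))  # shared GCN weights

H = gcn_layer(X, A, W_shared)                        # shared representation

# Classification branch: pool over joints, project to mistake labels.
W_cls = rng.standard_normal((feat_hid, n_classes))
label = int(np.argmax(H.mean(axis=0) @ W_cls))

# Correction branch, conditioned on the predicted label (the "feedback
# module" idea) by concatenating a one-hot label to each joint's features.
onehot = np.eye(n_classes)[label]
H_fb = np.concatenate([H, np.tile(onehot, (n_joints, 1))], axis=1)
W_corr = rng.standard_normal((feat_hid + n_classes, feat_in))
corrected_pose = H_fb @ W_corr                       # (n_joints, 3) output

print(corrected_pose.shape)
```

In the actual model the branches act on full pose sequences and are trained end to end; here the label conditioning is shown with a simple one-hot concatenation to make the information flow explicit.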
EC3D Dataset
To showcase our framework's performance, we recorded a physical exercise dataset with 3D poses and instruction label annotations. Our dataset features 3 types of exercises: squats, lunges, and planks. Each exercise type is performed both correctly and with mistakes, following specific instructions, by 4 different subjects. We use the EC3D dataset to evaluate our model's performance both quantitatively and qualitatively.
BibTeX
If you find our work useful, please cite it as:
@inproceedings{zhao2022exercise,
  author    = {Zhao, Ziyi and Kiciroglu, Sena and Vinzant, Hugues and Cheng, Yuan and Katircioglu, Isinsu and Salzmann, Mathieu and Fua, Pascal},
  booktitle = {ACCV},
  title     = {3D Pose Based Feedback for Physical Exercises},
  year      = {2022}
}