Our project consists of three stages:
POSE DETECTION: In this stage we confirm whether the user is performing the selected pose, using a deep learning model that is 95% accurate. The model treats pose detection as a binary classification problem with two outputs: the user is performing the selected pose, or the user is not. If the output is negative, the program does not proceed further and must start again from the beginning.
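The gating logic described above can be sketched as follows. This is a minimal illustration, assuming the classifier emits a probability for the "performing the pose" class; the function names and the 0.5 decision threshold are illustrative, not taken from the project code.

```python
def is_performing_pose(prob_pose: float, threshold: float = 0.5) -> bool:
    """Binary decision from the classifier's output probability.

    prob_pose: probability that the user is performing the selected pose
    (hypothetical output of the 95%-accurate deep learning model).
    """
    return prob_pose >= threshold


def run_pipeline(prob_pose: float) -> str:
    """Gate the pipeline: a negative detection restarts from the beginning."""
    if not is_performing_pose(prob_pose):
        return "restart"
    return "proceed to pose estimation"
```

A confident positive detection lets the pipeline continue; anything below the threshold sends the user back to the start, as described above.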
POSE ESTIMATION: A computer vision technique that predicts and tracks the location of a person or object by examining the orientation and configuration of the body. It is typically done by identifying, locating, and tracking a number of key points on a person; these key points represent important joints such as the elbow or knee. In our project, pose estimation is performed with MediaPipe, which outputs a set of 33 keypoints on the user.
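The 33-keypoint output can be modelled as below. This is a sketch, not MediaPipe API code: the `Keypoint` type and `keypoints_of_interest` helper are illustrative, though the landmark indices shown (e.g. 13 for the left elbow) do follow MediaPipe Pose's published 33-landmark topology.

```python
from typing import List, NamedTuple


class Keypoint(NamedTuple):
    x: float           # normalised [0, 1] image coordinate
    y: float
    z: float           # depth relative to the hips
    visibility: float  # likelihood the landmark is visible


# A few of MediaPipe Pose's 33 landmark indices.
LEFT_SHOULDER, LEFT_ELBOW, LEFT_WRIST = 11, 13, 15
LEFT_HIP, LEFT_KNEE, LEFT_ANKLE = 23, 25, 27


def keypoints_of_interest(landmarks: List[Keypoint]) -> dict:
    """Pick out the joints later used for angle computation (sketch)."""
    assert len(landmarks) == 33, "MediaPipe Pose outputs 33 keypoints"
    return {
        "left_elbow": landmarks[LEFT_ELBOW],
        "left_knee": landmarks[LEFT_KNEE],
    }
```

In the real pipeline these tuples would be filled from MediaPipe's landmark list for each video frame.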
POSE CORRECTION: In this stage we compute the angles of the joints involved from the keypoints and compare them with the standard angles for the given pose. The accuracy before correction is calculated from these two quantities, i.e., the deviation and the standard angles. The areas needing correction are then identified and conveyed to the user, along with quantitative instructions on how to adjust the pose. This system can be implemented on any embedded device or as an Android app to guide a person through the positions of a yoga exercise: it estimates the position of the limbs, compares it with a reference model, and a voice assistant tells the user how correct the asana is.
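The angle-and-deviation step above can be sketched as follows. Assumptions are flagged in the comments: the 10-degree tolerance, the feedback wording, and the mapping of deviation sign to "bend"/"straighten" are illustrative choices, not values from the project.

```python
import math


def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 2D keypoints a-b-c.

    Each point is an (x, y) tuple; the result is folded into [0, 180].
    """
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang


def correction_feedback(measured, standard, tolerance=10.0):
    """Deviation from the reference angle plus a quantitative instruction.

    tolerance and wording are illustrative; the sign convention assumes a
    smaller-than-standard angle means the joint should be straightened.
    """
    deviation = measured - standard
    if abs(deviation) <= tolerance:
        return deviation, "joint angle within tolerance"
    direction = "straighten" if deviation < 0 else "bend"
    return deviation, f"{direction} the joint by {abs(deviation):.0f} degrees"
```

For example, a knee measured at 150 degrees against a 180-degree reference yields a deviation of -30 degrees and an instruction to straighten the joint by 30 degrees; this is the kind of quantitative feedback the voice assistant would relay.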