Human Pose Classification with MoveNet and TensorFlow Lite

This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose.

The procedure in this notebook consists of 3 parts:

  • Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels.
  • Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels.
  • Part 3: Convert the pose classification model to TFLite.

By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses.


Preparation

In this section, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels.

Nothing observable happens here, but you can expand the hidden code cells to see the implementation for some of the functions we'll be calling later on.

If you only want to create the CSV file without knowing all the details, just run this section and proceed to Part 1.

pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm

from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
2022-08-09 17:02:37.353756: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-08-09 17:02:38.127419: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/cv2/../../lib64:
2022-08-09 17:02:38.134209: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/cv2/../../lib64:
2022-08-09 17:02:38.134225: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

Code to run pose estimation using MoveNet

Functions to run pose estimation with MoveNet

Cloning into 'examples'...
remote: Enumerating objects: 21931, done.[K
remote: Counting objects: 100% (591/591), done.[K
remote: Compressing objects: 100% (350/350), done.[K
remote: Total 21931 (delta 231), reused 452 (delta 163), pack-reused 21340[K
Receiving objects: 100% (21931/21931), 37.45 MiB | 44.65 MiB/s, done.
Resolving deltas: 100% (11989/11989), done.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
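The pose estimation code itself lives in the hidden cells above (they clone the TensorFlow examples repository and download a MoveNet TFLite model). As a rough sketch of what running MoveNet with the TensorFlow Lite Interpreter looks like (the model file name movenet_thunder.tflite and the detect helper below are illustrative assumptions, not the hidden cells' exact code):

# Minimal, hypothetical sketch of MoveNet single-pose inference with the
# TFLite Interpreter. The model path is an assumption for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='movenet_thunder.tflite')
interpreter.allocate_tensors()

def detect(input_image):
  """Runs MoveNet on an RGB image and returns keypoints of shape [1, 1, 17, 3]."""
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # Resize and pad the image to the model's expected input size
  # (256x256 for Thunder, 192x192 for Lightning).
  height, width = input_details[0]['shape'][1], input_details[0]['shape'][2]
  image = tf.image.resize_with_pad(tf.convert_to_tensor(input_image), height, width)
  # Note: the expected dtype/range depends on which MoveNet variant you download.
  image = tf.cast(image, dtype=input_details[0]['dtype'])

  interpreter.set_tensor(input_details[0]['index'],
                         np.expand_dims(image.numpy(), axis=0))
  interpreter.invoke()

  # Each of the 17 keypoints is (y, x, score), normalized to [0.0, 1.0].
  return interpreter.get_tensor(output_details[0]['index'])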

Functions to visualize the pose estimation results.
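The visualization helpers are also hidden. A minimal sketch of one way to overlay detected keypoints on an image with matplotlib (the draw_keypoints helper is hypothetical, not the hidden cell's implementation):

from matplotlib import pyplot as plt

def draw_keypoints(image, keypoints_with_scores, threshold=0.1):
  """Scatters the keypoints whose score exceeds `threshold` over the image.

  `keypoints_with_scores` is the [1, 1, 17, 3] MoveNet output, where each
  keypoint is (y, x, score) normalized to [0.0, 1.0].
  """
  height, width = image.shape[0], image.shape[1]
  plt.imshow(image)
  for y, x, score in keypoints_with_scores[0, 0]:
    if score > threshold:
      plt.scatter(x * width, y * height, s=20, c='red')
  plt.axis('off')
  plt.show()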

Code to load the images, detect pose landmarks and save them into a CSV file
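The hidden cell above defines the MoveNetPreprocessor class used in Part 1. As a hedged sketch of the CSV layout it produces (one row per image; the exact column names here are assumptions, but the structure, a file_name column, 17 * 3 landmark values, and the class label, matches what load_pose_landmarks() expects in Part 2):

import csv

def write_landmarks_csv(rows, csv_path):
  """Writes one CSV row per image: file name, 51 landmark values, class info.

  `rows` is an iterable of (file_name, landmarks, class_no, class_name) tuples,
  where `landmarks` is a [17, 3] array of (y, x, score) values. The column
  names below are illustrative assumptions.
  """
  header = ['file_name']
  for i in range(17):
    header += ['kp%d_y' % i, 'kp%d_x' % i, 'kp%d_score' % i]
  header += ['class_no', 'class_name']

  with open(csv_path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    for file_name, landmarks, class_no, class_name in rows:
      writer.writerow([file_name] + list(landmarks.flatten())
                      + [class_no, class_name])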

(Optional) Code snippet to try out the MoveNet pose estimation logic

--2022-08-09 17:02:41--  https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg
Resolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.37.244, 172.64.150.12, 2606:4700:4400::ac40:960c, ...
Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.37.244|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17685 (17K) [image/jpeg]
Saving to: ‘/tmp/image.jpeg’

/tmp/image.jpeg     100%[===================>]  17.27K  --.-KB/s    in 0s      

2022-08-09 17:02:42 (87.5 MB/s) - ‘/tmp/image.jpeg’ saved [17685/17685]

[Output image: the downloaded test image with the MoveNet pose estimation result overlaid]

Part 1: Preprocess the input images

Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to generate our training dataset by running labeled images through MoveNet and then capturing all the landmark data and ground truth labels into a CSV file.

The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses (chair, cobra, dog, tree, and warrior). The directory is already split into a train dataset and a test dataset.

In this section, you'll download the yoga dataset and run it through MoveNet to capture all the landmarks into a CSV file. However, feeding the yoga dataset to MoveNet and generating this CSV file takes about 15 minutes. As an alternative, you can download a pre-existing CSV file for the yoga dataset by setting the is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that this preprocessing step would create.

On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 set to False). Follow the instructions below to upload your own pose dataset.
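The is_skip_step_1, use_custom_dataset, and dataset_is_split flags referenced above (and in the steps below) are set in a hidden parameter cell. A minimal equivalent, assuming the default values, would be:

# Set to True to skip Part 1 and download the pre-computed CSV files instead.
is_skip_step_1 = False

# Set to True if you're uploading your own pose dataset (see below).
use_custom_dataset = False

# Set to True if your custom dataset already has "train" and "test" folders.
dataset_is_split = False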

(Optional) Upload your own pose dataset

If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps:

  1. Set the above use_custom_dataset option to True.

  2. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your image dataset. The folder must contain your pose images, sorted into per-class folders as follows.

    If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this:

    yoga_poses/
    |__ train/
        |__ downdog/
            |______ 00000128.jpg
            |______ ...
    |__ test/
        |__ downdog/
            |______ 00000181.jpg
            |______ ...
    

    Or, if your dataset is NOT split yet, then set dataset_is_split to False and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this:

    yoga_poses/
    |__ downdog/
        |______ 00000128.jpg
        |______ 00000181.jpg
        |______ ...
    |__ goddess/
        |______ 00000243.jpg
        |______ 00000306.jpg
        |______ ...
    
  3. Click the Files tab on the left (folder icon) and then click Upload to session storage (file icon).

  4. Select your archive file and wait until it finishes uploading before you proceed.

  5. Edit the following code block to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll need to also modify that part if your archive is another format.)

  6. Now run the rest of the notebook.

if use_custom_dataset:
  # ATTENTION:
  # You must edit these two lines to match your archive and images folder name:
  # !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar
  !unzip -q YOUR_DATASET_ARCHIVE_NAME.zip
  dataset_in = 'YOUR_DATASET_DIR_NAME'

  # You can leave the rest alone:
  if not os.path.isdir(dataset_in):
    raise Exception("dataset_in is not a valid directory")
  if dataset_is_split:
    IMAGES_ROOT = dataset_in
  else:
    dataset_out = 'split_' + dataset_in
    split_into_train_test(dataset_in, dataset_out, test_split=0.2)
    IMAGES_ROOT = dataset_out
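
The split_into_train_test() helper used above is defined in one of the hidden cells at the top of the notebook. As a hedged sketch of what such a per-class split does (an illustrative implementation, not the notebook's exact code):

import os
import random
import shutil

def split_into_train_test(images_origin, images_dest, test_split):
  """Copies each class folder into `<images_dest>/train` and `<images_dest>/test`,
  sending roughly `test_split` of every class's images to the test set."""
  for class_name in sorted(os.listdir(images_origin)):
    class_dir = os.path.join(images_origin, class_name)
    if not os.path.isdir(class_dir):
      continue
    images = sorted(os.listdir(class_dir))
    random.shuffle(images)
    num_test = int(len(images) * test_split)
    splits = {'test': images[:num_test], 'train': images[num_test:]}
    for split_name, split_images in splits.items():
      out_dir = os.path.join(images_dest, split_name, class_name)
      os.makedirs(out_dir, exist_ok=True)
      for image in split_images:
        shutil.copyfile(os.path.join(class_dir, image),
                        os.path.join(out_dir, image))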

Download the yoga dataset

if not is_skip_step_1 and not use_custom_dataset:
  !wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
  !unzip -q yoga_poses.zip -d yoga_cg
  IMAGES_ROOT = "yoga_cg"
--2022-08-09 17:02:46--  http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
Resolving download.tensorflow.org (download.tensorflow.org)... 172.217.219.128, 2607:f8b0:4001:c13::80
Connecting to download.tensorflow.org (download.tensorflow.org)|172.217.219.128|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102517581 (98M) [application/zip]
Saving to: ‘yoga_poses.zip’

yoga_poses.zip      100%[===================>]  97.77M   198MB/s    in 0.5s    

2022-08-09 17:02:46 (198 MB/s) - ‘yoga_poses.zip’ saved [102517581/102517581]

Preprocess the TRAIN dataset

if not is_skip_step_1:
  images_in_train_folder = os.path.join(IMAGES_ROOT, 'train')
  images_out_train_folder = 'poses_images_out_train'
  csvs_out_train_path = 'train_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_train_folder,
      images_out_folder=images_out_train_folder,
      csvs_out_path=csvs_out_train_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/200 [00:00<?, ?it/s]/tmpfs/tmp/ipykernel_9196/1291585794.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  coordinates = pose_landmarks.flatten().astype(np.str).tolist()
100%|██████████| 200/200 [00:19<00:00, 10.43it/s]
Preprocessing cobra
100%|██████████| 200/200 [00:18<00:00, 10.92it/s]
Preprocessing dog
100%|██████████| 200/200 [00:18<00:00, 10.84it/s]
Preprocessing tree
100%|██████████| 200/200 [00:19<00:00, 10.28it/s]
Preprocessing warrior
100%|██████████| 200/200 [00:16<00:00, 11.87it/s]
Skipped yoga_cg/train/chair/girl3_chair091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair123.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair142.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair144.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair146.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra038.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra128.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra039.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra130.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra034.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog101.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog031.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog033.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog035.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog037.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree161.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree163.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree141.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree147.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior064.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior098.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior109.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior113.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior116.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior120.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior121.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior126.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior148.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior137.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior148.jpg. No pose was confidentlly detected.

Preprocess the TEST dataset

if not is_skip_step_1:
  images_in_test_folder = os.path.join(IMAGES_ROOT, 'test')
  images_out_test_folder = 'poses_images_out_test'
  csvs_out_test_path = 'test_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_test_folder,
      images_out_folder=images_out_test_folder,
      csvs_out_path=csvs_out_test_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/84 [00:00<?, ?it/s]/tmpfs/tmp/ipykernel_9196/1291585794.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  coordinates = pose_landmarks.flatten().astype(np.str).tolist()
100%|██████████| 84/84 [00:09<00:00,  8.63it/s]
Preprocessing cobra
100%|██████████| 116/116 [00:10<00:00, 10.63it/s]
Preprocessing dog
100%|██████████| 90/90 [00:08<00:00, 10.21it/s]
Preprocessing tree
100%|██████████| 96/96 [00:09<00:00, 10.04it/s]
Preprocessing warrior
100%|██████████| 109/109 [00:10<00:00, 10.66it/s]
Skipped yoga_cg/test/cobra/guy3_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog036.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior044.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior045.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior137.jpg. No pose was confidentlly detected.

Part 2: Train a pose classification model that takes the landmark coordinates as input, and outputs the predicted labels.

You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image is performing. The model consists of two submodels:

  • Submodel 1 calculates a pose embedding (a.k.a. feature vector) from the detected landmark coordinates.
  • Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.

You'll then train the model on the dataset that was preprocessed in Part 1.

(Optional) Download the preprocessed dataset if you didn't run part 1

# Download the preprocessed CSV files which are the same as the output of step 1
if is_skip_step_1:
  !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
  !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv

  csvs_out_train_path = 'train_data.csv'
  csvs_out_test_path = 'test_data.csv'
  is_skipped_step_1 = True

Load the preprocessed CSVs into TRAIN and TEST datasets.

def load_pose_landmarks(csv_path):
  """Loads a CSV created by MoveNetPreprocessor.

  Returns:
    X: Detected landmark coordinates and scores of shape (N, 17 * 3)
    y: Ground truth labels of shape (N, label_count)
    classes: The list of all class names found in the dataset
    dataframe: The entire CSV loaded as a Pandas dataframe, containing both
      the features (X) and the ground truth labels (y), for later use when
      training the pose classification model.
  """

  # Load the CSV file
  dataframe = pd.read_csv(csv_path)
  df_to_process = dataframe.copy()

  # Drop the file_name column as you don't need it during training.
  df_to_process.drop(columns=['file_name'], inplace=True)

  # Extract the list of class names
  classes = df_to_process.pop('class_name').unique()

  # Extract the labels
  y = df_to_process.pop('class_no')

  # Convert the input features and labels into the correct format for training.
  X = df_to_process.astype('float64')
  y = keras.utils.to_categorical(y)

  return X, y, classes, dataframe

Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).

# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)

# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
                                                  test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)

Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification

Next, convert the landmark coordinates to a feature vector by:

  1. Moving the pose center to the origin.
  2. Scaling the pose so that the pose size becomes 1.
  3. Flattening these coordinates into a feature vector.

Then use this feature vector to train a neural-network-based pose classifier.

def get_center_point(landmarks, left_bodypart, right_bodypart):
  """Calculates the center point of the two given landmarks."""

  left = tf.gather(landmarks, left_bodypart.value, axis=1)
  right = tf.gather(landmarks, right_bodypart.value, axis=1)
  center = left * 0.5 + right * 0.5
  return center


def get_pose_size(landmarks, torso_size_multiplier=2.5):
  """Calculates pose size.

  It is the maximum of two values:
    * Torso size multiplied by `torso_size_multiplier`
    * Maximum distance from pose center to any pose landmark
  """
  # Hips center
  hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)

  # Shoulders center
  shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER,
                                      BodyPart.RIGHT_SHOULDER)

  # Torso size as the minimum body size
  torso_size = tf.linalg.norm(shoulders_center - hips_center)

  # Pose center
  pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                     BodyPart.RIGHT_HIP)
  pose_center_new = tf.expand_dims(pose_center_new, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center_new = tf.broadcast_to(pose_center_new,
                                    [tf.size(landmarks) // (17*2), 17, 2])

  # Dist to pose center
  d = tf.gather(landmarks - pose_center_new, 0, axis=0,
                name="dist_to_pose_center")
  # Max dist to pose center
  max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))

  # Normalize scale
  pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)

  return pose_size


def normalize_pose_landmarks(landmarks):
  """Normalizes the landmarks translation by moving the pose center to (0,0) and
  scaling it to a constant pose size.
  """
  # Move landmarks so that the pose center becomes (0,0)
  pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)
  pose_center = tf.expand_dims(pose_center, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center = tf.broadcast_to(pose_center, 
                                [tf.size(landmarks) // (17*2), 17, 2])
  landmarks = landmarks - pose_center

  # Scale the landmarks to a constant pose size
  pose_size = get_pose_size(landmarks)
  landmarks /= pose_size

  return landmarks


def landmarks_to_embedding(landmarks_and_scores):
  """Converts the input landmarks into a pose embedding."""
  # Reshape the flat input into a matrix with shape=(17, 3)
  reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)

  # Normalize landmarks 2D
  landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])

  # Flatten the normalized landmark coordinates into a vector
  embedding = keras.layers.Flatten()(landmarks)

  return embedding

Define a Keras model for pose classification

Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.

# Define the model
inputs = tf.keras.Input(shape=(51,))
embedding = landmarks_to_embedding(inputs)

layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)

model = keras.Model(inputs, outputs)
model.summary()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 51)]         0           []                               
                                                                                                  
 reshape (Reshape)              (None, 17, 3)        0           ['input_1[0][0]']                
                                                                                                  
 tf.__operators__.getitem (Slic  (None, 17, 2)       0           ['reshape[0][0]']                
 ingOpLambda)                                                                                     
                                                                                                  
 tf.compat.v1.gather (TFOpLambd  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 a)                                                              ]                                
                                                                                                  
 tf.compat.v1.gather_1 (TFOpLam  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 bda)                                                            ]                                
                                                                                                  
 tf.math.multiply (TFOpLambda)  (None, 2)            0           ['tf.compat.v1.gather[0][0]']    
                                                                                                  
 tf.math.multiply_1 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_1[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add (TFOpLamb  (None, 2)           0           ['tf.math.multiply[0][0]',       
 da)                                                              'tf.math.multiply_1[0][0]']     
                                                                                                  
 tf.compat.v1.size (TFOpLambda)  ()                  0           ['tf.__operators__.getitem[0][0]'
                                                                 ]                                
                                                                                                  
 tf.expand_dims (TFOpLambda)    (None, 1, 2)         0           ['tf.__operators__.add[0][0]']   
                                                                                                  
 tf.compat.v1.floor_div (TFOpLa  ()                  0           ['tf.compat.v1.size[0][0]']      
 mbda)                                                                                            
                                                                                                  
 tf.broadcast_to (TFOpLambda)   (None, 17, 2)        0           ['tf.expand_dims[0][0]',         
                                                                  'tf.compat.v1.floor_div[0][0]'] 
                                                                                                  
 tf.math.subtract (TFOpLambda)  (None, 17, 2)        0           ['tf.__operators__.getitem[0][0]'
                                                                 , 'tf.broadcast_to[0][0]']       
                                                                                                  
 tf.compat.v1.gather_6 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_7 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.math.multiply_6 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_6[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_7 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_7[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add_3 (TFOpLa  (None, 2)           0           ['tf.math.multiply_6[0][0]',     
 mbda)                                                            'tf.math.multiply_7[0][0]']     
                                                                                                  
 tf.compat.v1.size_1 (TFOpLambd  ()                  0           ['tf.math.subtract[0][0]']       
 a)                                                                                               
                                                                                                  
 tf.compat.v1.gather_4 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_5 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_2 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_3 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.expand_dims_1 (TFOpLambda)  (None, 1, 2)         0           ['tf.__operators__.add_3[0][0]'] 
                                                                                                  
 tf.compat.v1.floor_div_1 (TFOp  ()                  0           ['tf.compat.v1.size_1[0][0]']    
 Lambda)                                                                                          
                                                                                                  
 tf.math.multiply_4 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_4[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_5 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_5[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_2 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_2[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_3 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_3[0][0]']  
 )                                                                                                
                                                                                                  
 tf.broadcast_to_1 (TFOpLambda)  (None, 17, 2)       0           ['tf.expand_dims_1[0][0]',       
                                                                  'tf.compat.v1.floor_div_1[0][0]'
                                                                 ]                                
                                                                                                  
 tf.__operators__.add_2 (TFOpLa  (None, 2)           0           ['tf.math.multiply_4[0][0]',     
 mbda)                                                            'tf.math.multiply_5[0][0]']     
                                                                                                  
 tf.__operators__.add_1 (TFOpLa  (None, 2)           0           ['tf.math.multiply_2[0][0]',     
 mbda)                                                            'tf.math.multiply_3[0][0]']     
                                                                                                  
 tf.math.subtract_2 (TFOpLambda  (None, 17, 2)       0           ['tf.math.subtract[0][0]',       
 )                                                                'tf.broadcast_to_1[0][0]']      
                                                                                                  
 tf.math.subtract_1 (TFOpLambda  (None, 2)           0           ['tf.__operators__.add_2[0][0]', 
 )                                                                'tf.__operators__.add_1[0][0]'] 
                                                                                                  
 tf.compat.v1.gather_8 (TFOpLam  (17, 2)             0           ['tf.math.subtract_2[0][0]']     
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.norm (TFOpLambda)  ()                  0           ['tf.math.subtract_1[0][0]']     
                                                                                                  
 tf.compat.v1.norm_1 (TFOpLambd  (2,)                0           ['tf.compat.v1.gather_8[0][0]']  
 a)                                                                                               
                                                                                                  
 tf.math.multiply_8 (TFOpLambda  ()                  0           ['tf.compat.v1.norm[0][0]']      
 )                                                                                                
                                                                                                  
 tf.math.reduce_max (TFOpLambda  ()                  0           ['tf.compat.v1.norm_1[0][0]']    
 )                                                                                                
                                                                                                  
 tf.math.maximum (TFOpLambda)   ()                   0           ['tf.math.multiply_8[0][0]',     
                                                                  'tf.math.reduce_max[0][0]']     
                                                                                                  
 tf.math.truediv (TFOpLambda)   (None, 17, 2)        0           ['tf.math.subtract[0][0]',       
                                                                  'tf.math.maximum[0][0]']        
                                                                                                  
 flatten (Flatten)              (None, 34)           0           ['tf.math.truediv[0][0]']        
                                                                                                  
 dense (Dense)                  (None, 128)          4480        ['flatten[0][0]']                
                                                                                                  
 dropout (Dropout)              (None, 128)          0           ['dense[0][0]']                  
                                                                                                  
 dense_1 (Dense)                (None, 64)           8256        ['dropout[0][0]']                
                                                                                                  
 dropout_1 (Dropout)            (None, 64)           0           ['dense_1[0][0]']                
                                                                                                  
 dense_2 (Dense)                (None, 5)            325         ['dropout_1[0][0]']              
                                                                                                  
==================================================================================================
Total params: 13,061
Trainable params: 13,061
Non-trainable params: 0
__________________________________________________________________________________________________
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
                             monitor='val_accuracy',
                             verbose=1,
                             save_best_only=True,
                             mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy', 
                                              patience=20)

# Start training
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=16,
                    validation_data=(X_val, y_val),
                    callbacks=[checkpoint, earlystopping])
Epoch 1/200
34/37 [==========================>...] - ETA: 0s - loss: 1.5441 - accuracy: 0.4210
Epoch 1: val_accuracy improved from -inf to 0.53922, saving model to weights.best.hdf5
37/37 [==============================] - 3s 14ms/step - loss: 1.5336 - accuracy: 0.4273 - val_loss: 1.3971 - val_accuracy: 0.5392
Epoch 2/200
34/37 [==========================>...] - ETA: 0s - loss: 1.2908 - accuracy: 0.5257
Epoch 2: val_accuracy improved from 0.53922 to 0.62745, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 1.2810 - accuracy: 0.5294 - val_loss: 1.0770 - val_accuracy: 0.6275
Epoch 3/200
34/37 [==========================>...] - ETA: 0s - loss: 1.0609 - accuracy: 0.5239
Epoch 3: val_accuracy improved from 0.62745 to 0.63725, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 1.0558 - accuracy: 0.5260 - val_loss: 0.8876 - val_accuracy: 0.6373
Epoch 4/200
35/37 [===========================>..] - ETA: 0s - loss: 0.9032 - accuracy: 0.5839
Epoch 4: val_accuracy improved from 0.63725 to 0.74510, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.9003 - accuracy: 0.5865 - val_loss: 0.7593 - val_accuracy: 0.7451
Epoch 5/200
34/37 [==========================>...] - ETA: 0s - loss: 0.7780 - accuracy: 0.7096
Epoch 5: val_accuracy improved from 0.74510 to 0.85294, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.7847 - accuracy: 0.7059 - val_loss: 0.6609 - val_accuracy: 0.8529
Epoch 6/200
33/37 [=========================>....] - ETA: 0s - loss: 0.7195 - accuracy: 0.7197
Epoch 6: val_accuracy did not improve from 0.85294
37/37 [==============================] - 0s 4ms/step - loss: 0.7142 - accuracy: 0.7249 - val_loss: 0.5956 - val_accuracy: 0.8529
Epoch 7/200
34/37 [==========================>...] - ETA: 0s - loss: 0.6641 - accuracy: 0.7298
Epoch 7: val_accuracy did not improve from 0.85294
37/37 [==============================] - 0s 4ms/step - loss: 0.6597 - accuracy: 0.7353 - val_loss: 0.5187 - val_accuracy: 0.8529
Epoch 8/200
35/37 [===========================>..] - ETA: 0s - loss: 0.5861 - accuracy: 0.7804
Epoch 8: val_accuracy did not improve from 0.85294
37/37 [==============================] - 0s 4ms/step - loss: 0.5916 - accuracy: 0.7751 - val_loss: 0.4689 - val_accuracy: 0.8529
Epoch 9/200
35/37 [===========================>..] - ETA: 0s - loss: 0.5658 - accuracy: 0.7786
Epoch 9: val_accuracy improved from 0.85294 to 0.89216, saving model to weights.best.hdf5
37/37 [==============================] - 0s 5ms/step - loss: 0.5596 - accuracy: 0.7855 - val_loss: 0.4123 - val_accuracy: 0.8922
Epoch 10/200
35/37 [===========================>..] - ETA: 0s - loss: 0.5019 - accuracy: 0.8518
Epoch 10: val_accuracy improved from 0.89216 to 0.90196, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.5007 - accuracy: 0.8512 - val_loss: 0.3702 - val_accuracy: 0.9020
Epoch 11/200
34/37 [==========================>...] - ETA: 0s - loss: 0.4886 - accuracy: 0.8364
Epoch 11: val_accuracy improved from 0.90196 to 0.94118, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.4905 - accuracy: 0.8374 - val_loss: 0.3149 - val_accuracy: 0.9412
Epoch 12/200
35/37 [===========================>..] - ETA: 0s - loss: 0.4430 - accuracy: 0.8321
Epoch 12: val_accuracy improved from 0.94118 to 0.95098, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.4472 - accuracy: 0.8304 - val_loss: 0.2835 - val_accuracy: 0.9510
Epoch 13/200
35/37 [===========================>..] - ETA: 0s - loss: 0.3976 - accuracy: 0.8786
Epoch 13: val_accuracy did not improve from 0.95098
37/37 [==============================] - 0s 4ms/step - loss: 0.3967 - accuracy: 0.8772 - val_loss: 0.2515 - val_accuracy: 0.9510
Epoch 14/200
35/37 [===========================>..] - ETA: 0s - loss: 0.3466 - accuracy: 0.9107
Epoch 14: val_accuracy improved from 0.95098 to 0.96078, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.3441 - accuracy: 0.9100 - val_loss: 0.2233 - val_accuracy: 0.9608
Epoch 15/200
33/37 [=========================>....] - ETA: 0s - loss: 0.3553 - accuracy: 0.8807
Epoch 15: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.3506 - accuracy: 0.8875 - val_loss: 0.1971 - val_accuracy: 0.9608
Epoch 16/200
33/37 [=========================>....] - ETA: 0s - loss: 0.3304 - accuracy: 0.9015
Epoch 16: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.3276 - accuracy: 0.9031 - val_loss: 0.1730 - val_accuracy: 0.9608
Epoch 17/200
34/37 [==========================>...] - ETA: 0s - loss: 0.3149 - accuracy: 0.9062
Epoch 17: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.3139 - accuracy: 0.9048 - val_loss: 0.1555 - val_accuracy: 0.9608
Epoch 18/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2817 - accuracy: 0.9107
Epoch 18: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.2845 - accuracy: 0.9083 - val_loss: 0.1501 - val_accuracy: 0.9608
Epoch 19/200
34/37 [==========================>...] - ETA: 0s - loss: 0.2626 - accuracy: 0.9265
Epoch 19: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.2633 - accuracy: 0.9239 - val_loss: 0.1410 - val_accuracy: 0.9510
Epoch 20/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2406 - accuracy: 0.9268
Epoch 20: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.2352 - accuracy: 0.9291 - val_loss: 0.1334 - val_accuracy: 0.9608
Epoch 21/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2370 - accuracy: 0.9286
Epoch 21: val_accuracy did not improve from 0.96078
37/37 [==============================] - 0s 4ms/step - loss: 0.2363 - accuracy: 0.9273 - val_loss: 0.1215 - val_accuracy: 0.9608
Epoch 22/200
34/37 [==========================>...] - ETA: 0s - loss: 0.2233 - accuracy: 0.9375
Epoch 22: val_accuracy improved from 0.96078 to 0.97059, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.2189 - accuracy: 0.9377 - val_loss: 0.1171 - val_accuracy: 0.9706
Epoch 23/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2157 - accuracy: 0.9411
Epoch 23: val_accuracy did not improve from 0.97059
37/37 [==============================] - 0s 4ms/step - loss: 0.2215 - accuracy: 0.9394 - val_loss: 0.1061 - val_accuracy: 0.9706
Epoch 24/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2050 - accuracy: 0.9321
Epoch 24: val_accuracy did not improve from 0.97059
37/37 [==============================] - 0s 4ms/step - loss: 0.2051 - accuracy: 0.9343 - val_loss: 0.1040 - val_accuracy: 0.9706
Epoch 25/200
35/37 [===========================>..] - ETA: 0s - loss: 0.2130 - accuracy: 0.9357
Epoch 25: val_accuracy did not improve from 0.97059
37/37 [==============================] - 0s 4ms/step - loss: 0.2106 - accuracy: 0.9377 - val_loss: 0.0950 - val_accuracy: 0.9706
Epoch 26/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1917 - accuracy: 0.9429
Epoch 26: val_accuracy did not improve from 0.97059
37/37 [==============================] - 0s 4ms/step - loss: 0.1882 - accuracy: 0.9446 - val_loss: 0.0909 - val_accuracy: 0.9706
Epoch 27/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1915 - accuracy: 0.9375
Epoch 27: val_accuracy improved from 0.97059 to 0.98039, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.1879 - accuracy: 0.9394 - val_loss: 0.0806 - val_accuracy: 0.9804
Epoch 28/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1590 - accuracy: 0.9464
Epoch 28: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.1598 - accuracy: 0.9446 - val_loss: 0.0782 - val_accuracy: 0.9804
Epoch 29/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1646 - accuracy: 0.9464
Epoch 29: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.1680 - accuracy: 0.9446 - val_loss: 0.0752 - val_accuracy: 0.9804
Epoch 30/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1720 - accuracy: 0.9467
Epoch 30: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.1717 - accuracy: 0.9481 - val_loss: 0.0726 - val_accuracy: 0.9804
Epoch 31/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1733 - accuracy: 0.9338
Epoch 31: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.1702 - accuracy: 0.9343 - val_loss: 0.0677 - val_accuracy: 0.9804
Epoch 32/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1494 - accuracy: 0.9596
Epoch 32: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.1542 - accuracy: 0.9533 - val_loss: 0.0731 - val_accuracy: 0.9706
Epoch 33/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1662 - accuracy: 0.9504
Epoch 33: val_accuracy improved from 0.98039 to 0.99020, saving model to weights.best.hdf5
37/37 [==============================] - 0s 6ms/step - loss: 0.1706 - accuracy: 0.9481 - val_loss: 0.0658 - val_accuracy: 0.9902
Epoch 34/200
33/37 [=========================>....] - ETA: 0s - loss: 0.1467 - accuracy: 0.9583
Epoch 34: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1440 - accuracy: 0.9585 - val_loss: 0.0624 - val_accuracy: 0.9804
Epoch 35/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1294 - accuracy: 0.9651
Epoch 35: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1288 - accuracy: 0.9654 - val_loss: 0.0609 - val_accuracy: 0.9804
Epoch 36/200
34/37 [==========================>...] - ETA: 0s - loss: 0.1309 - accuracy: 0.9651
Epoch 36: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1321 - accuracy: 0.9637 - val_loss: 0.0556 - val_accuracy: 0.9804
Epoch 37/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1331 - accuracy: 0.9518
Epoch 37: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1333 - accuracy: 0.9516 - val_loss: 0.0532 - val_accuracy: 0.9804
Epoch 38/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1215 - accuracy: 0.9589
Epoch 38: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1230 - accuracy: 0.9585 - val_loss: 0.0511 - val_accuracy: 0.9804
Epoch 39/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1201 - accuracy: 0.9732
Epoch 39: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1179 - accuracy: 0.9740 - val_loss: 0.0472 - val_accuracy: 0.9804
Epoch 40/200
33/37 [=========================>....] - ETA: 0s - loss: 0.1159 - accuracy: 0.9564
Epoch 40: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1170 - accuracy: 0.9550 - val_loss: 0.0445 - val_accuracy: 0.9902
Epoch 41/200
33/37 [=========================>....] - ETA: 0s - loss: 0.1287 - accuracy: 0.9583
Epoch 41: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1251 - accuracy: 0.9585 - val_loss: 0.0451 - val_accuracy: 0.9804
Epoch 42/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1120 - accuracy: 0.9643
Epoch 42: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1095 - accuracy: 0.9654 - val_loss: 0.0443 - val_accuracy: 0.9804
Epoch 43/200
35/37 [===========================>..] - ETA: 0s - loss: 0.1068 - accuracy: 0.9643
Epoch 43: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1054 - accuracy: 0.9654 - val_loss: 0.0445 - val_accuracy: 0.9804
Epoch 44/200
35/37 [===========================>..] - ETA: 0s - loss: 0.0968 - accuracy: 0.9732
Epoch 44: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0968 - accuracy: 0.9740 - val_loss: 0.0415 - val_accuracy: 0.9804
Epoch 45/200
34/37 [==========================>...] - ETA: 0s - loss: 0.0966 - accuracy: 0.9798
Epoch 45: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0938 - accuracy: 0.9810 - val_loss: 0.0376 - val_accuracy: 0.9804
Epoch 46/200
33/37 [=========================>....] - ETA: 0s - loss: 0.0761 - accuracy: 0.9811
Epoch 46: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 5ms/step - loss: 0.0780 - accuracy: 0.9827 - val_loss: 0.0371 - val_accuracy: 0.9804
Epoch 47/200
33/37 [=========================>....] - ETA: 0s - loss: 0.0894 - accuracy: 0.9697
Epoch 47: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 5ms/step - loss: 0.0904 - accuracy: 0.9706 - val_loss: 0.0345 - val_accuracy: 0.9902
Epoch 48/200
35/37 [===========================>..] - ETA: 0s - loss: 0.0768 - accuracy: 0.9804
Epoch 48: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0753 - accuracy: 0.9810 - val_loss: 0.0355 - val_accuracy: 0.9804
Epoch 49/200
34/37 [==========================>...] - ETA: 0s - loss: 0.0694 - accuracy: 0.9853
Epoch 49: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0695 - accuracy: 0.9844 - val_loss: 0.0394 - val_accuracy: 0.9804
Epoch 50/200
34/37 [==========================>...] - ETA: 0s - loss: 0.0919 - accuracy: 0.9743
Epoch 50: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0899 - accuracy: 0.9740 - val_loss: 0.0347 - val_accuracy: 0.9804
Epoch 51/200
34/37 [==========================>...] - ETA: 0s - loss: 0.0863 - accuracy: 0.9761
Epoch 51: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 5ms/step - loss: 0.0878 - accuracy: 0.9758 - val_loss: 0.0400 - val_accuracy: 0.9804
Epoch 52/200
34/37 [==========================>...] - ETA: 0s - loss: 0.0808 - accuracy: 0.9761
Epoch 52: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0817 - accuracy: 0.9758 - val_loss: 0.0308 - val_accuracy: 0.9902
Epoch 53/200
35/37 [===========================>..] - ETA: 0s - loss: 0.0958 - accuracy: 0.9661
Epoch 53: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.0967 - accuracy: 0.9654 - val_loss: 0.0345 - val_accuracy: 0.9902
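Training may continue for a number of epochs after the best validation accuracy was reached, so before evaluating you can optionally restore the checkpoint saved by the ModelCheckpoint callback above. A minimal optional sketch (not part of the original flow), assuming the weights.best.hdf5 file written during training:

# Restore the weights from the epoch with the highest validation accuracy.
model.load_weights(checkpoint_path)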
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()

[Figure: training and validation accuracy per epoch]

# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
14/14 [==============================] - 0s 3ms/step - loss: 0.0298 - accuracy: 1.0000

Draw the confusion matrix to better understand the model's performance.

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
  """Plots the confusion matrix."""
  if normalize:
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print("Normalized confusion matrix")
  else:
    print('Confusion matrix, without normalization')

  plt.imshow(cm, interpolation='nearest', cmap=cmap)
  plt.title(title)
  plt.colorbar()
  tick_marks = np.arange(len(classes))
  plt.xticks(tick_marks, classes, rotation=55)
  plt.yticks(tick_marks, classes)
  fmt = '.2f' if normalize else 'd'
  thresh = cm.max() / 2.
  for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, format(cm[i, j], fmt),
              horizontalalignment="center",
              color="white" if cm[i, j] > thresh else "black")

  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.tight_layout()

# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)

# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]

# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
                      class_names,
                      title='Confusion Matrix of Pose Classification Model')

# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
                                                          y_pred_label))
14/14 [==============================] - 0s 2ms/step
Confusion matrix, without normalization

Classification Report:
               precision    recall  f1-score   support

       chair       1.00      1.00      1.00        84
       cobra       1.00      1.00      1.00        93
         dog       1.00      1.00      1.00        84
        tree       1.00      1.00      1.00        96
     warrior       1.00      1.00      1.00        68

    accuracy                           1.00       425
   macro avg       1.00      1.00      1.00       425
weighted avg       1.00      1.00      1.00       425

[Figure: confusion matrix of the pose classification model]
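If you'd rather read per-class proportions than raw counts, the plot_confusion_matrix helper defined above also accepts normalize=True. A minimal usage sketch:

# Plot the confusion matrix normalized by the number of true samples per class.
plot_confusion_matrix(cm,
                      class_names,
                      normalize=True,
                      title='Normalized Confusion Matrix of Pose Classification Model')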

(Optional) Investigate incorrect predictions

You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved.

if is_skip_step_1:
  raise RuntimeError('You must have run step 1 to run this cell.')

# Plot at most MAX_NO_OF_IMAGE_TO_PLOT incorrectly predicted images,
# IMAGE_PER_ROW images per row.
IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30

# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test)) \
                if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
  false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]

# Plot the incorrectly predicted images
row_count = len(false_predict) // IMAGE_PER_ROW + 1
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
  ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
  image_path = os.path.join(images_out_test_folder,
                            df_test.iloc[id_in_df]['file_name'])

  image = cv2.imread(image_path)
  plt.title("Predict: %s; Actual: %s"
            % (y_pred_label[id_in_df], y_true_label[id_in_df]))
  plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
<Figure size 2160x720 with 0 Axes>

Part 3: Convert the pose classification model to TensorFlow Lite

You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers, and edge devices. When converting the model, you'll apply dynamic range quantization, which shrinks the pose classification TensorFlow Lite model to roughly a quarter of its original size with insignificant accuracy loss.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print('Model size: %dKB' % (len(tflite_model) / 1024))

with open('pose_classifier.tflite', 'wb') as f:
  f.write(tflite_model)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpuv_ht6s4/assets
Model size: 26KB
2022-08-09 17:05:25.064814: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-08-09 17:05:25.064855: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
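To see how much dynamic range quantization actually saves, you could also convert the same model without the optimization flag and compare the two sizes. A minimal sketch (the float_converter and float_tflite_model names are just illustrative):

# Convert again without optimizations to get a float32 baseline for comparison.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()

print('Float model size: %dKB' % (len(float_tflite_model) / 1024))
print('Quantized model size: %dKB' % (len(tflite_model) / 1024))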

Then you'll write the label file, which contains the mapping from class indexes to human-readable class names.

with open('pose_labels.txt', 'w') as f:
  f.write('\n'.join(class_names))

As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.

def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and return its accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on all given poses.
  y_pred = []
  for i in range(len(y_true)):
    # Pre-processing: add a batch dimension and convert to float32 to match
    # the model's input data format.
    test_image = X[i: i + 1].astype('float32')
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the class with highest
    # probability.
    output = interpreter.tensor(output_index)
    predicted_label = np.argmax(output()[0])
    y_pred.append(predicted_label)

  # Compare prediction results with ground truth labels to calculate accuracy.
  y_pred = keras.utils.to_categorical(y_pred)
  return accuracy_score(y_true, y_pred)

# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
      evaluate_model(classifier_interpreter, X_test, y_test))
Accuracy of TFLite model: 1.0

Now you can download the TFLite model (pose_classifier.tflite) and the label file (pose_labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample apps for an end-to-end example of how to use the TFLite pose classification model.

zip pose_classifier.zip pose_labels.txt pose_classifier.tflite
adding: pose_labels.txt (stored 0%)
  adding: pose_classifier.tflite (deflated 34%)
# Download the zip archive if running on Colab.
try:
  from google.colab import files
  files.download('pose_classifier.zip')
except ImportError:
  # Not running in Colab; the files are in the current working directory.
  pass
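Before wiring the model into an app, you can also sanity-check the exported files directly with the TFLite Python interpreter. A minimal sketch, assuming X_test is still in memory and uses the same flattened landmark format as during training:

# Load the class names written to pose_labels.txt earlier.
with open('pose_labels.txt') as f:
  pose_class_names = f.read().splitlines()

# Load the quantized classifier from disk.
interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

# Classify a single pose; in an app you'd feed the MoveNet landmarks
# in the same flattened format that X_test uses.
sample = X_test[:1].astype('float32')
interpreter.set_tensor(input_index, sample)
interpreter.invoke()
scores = interpreter.get_tensor(output_index)[0]
print('Predicted pose:', pose_class_names[np.argmax(scores)])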