This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose.
The procedure in this notebook consists of 3 parts:
- Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels.
- Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels.
- Part 3: Convert the pose classification model to TFLite.
By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses.
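To make the end goal concrete, here is a minimal sketch (not part of this notebook's code) of how the final classifier could be invoked with the TensorFlow Lite Interpreter. The file name pose_classifier.tflite is a hypothetical placeholder for the model you'll export in Part 3; the input is the 17 MoveNet landmarks flattened into 51 values (x, y, score per keypoint), and the output is one probability per pose class.
import numpy as np
import tensorflow as tf
# 'pose_classifier.tflite' is a hypothetical file name; Part 3 shows how to
# export the actual model.
interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# 17 MoveNet keypoints x (x, y, score) = 51 input values per image.
landmarks = np.random.rand(1, 51).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], landmarks)
interpreter.invoke()
# One softmax probability per pose class (e.g. per yoga pose name).
class_probabilities = interpreter.get_tensor(output_details[0]['index'])
print(class_probabilities)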
Preparation
In this section, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels.
Nothing observable happens here, but you can expand the hidden code cells to see the implementation for some of the functions we'll be calling later on.
If you only want to create the CSV file without knowing all the details, just run this section and proceed to Part 1.
pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
2022-10-20 11:11:54.305406: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/cv2/../../lib64: 2022-10-20 11:11:54.305527: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/cv2/../../lib64: 2022-10-20 11:11:54.305538: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Code to run pose estimation using MoveNet
Functions to run pose estimation with MoveNet
# Download model from TF Hub and check out inference code from GitHub
!wget -q -O movenet_thunder.tflite https://tfhub.dev/google/lite-model/movenet/singlepose/thunder/tflite/float16/4?lite-format=tflite
!git clone https://github.com/tensorflow/examples.git
pose_sample_rpi_path = os.path.join(os.getcwd(), 'examples/lite/examples/pose_estimation/raspberry_pi')
sys.path.append(pose_sample_rpi_path)
# Load MoveNet Thunder model
import utils
from data import BodyPart
from ml import Movenet
movenet = Movenet('movenet_thunder')
# Define function to run pose estimation using MoveNet Thunder.
# You'll apply MoveNet's cropping algorithm and run inference multiple times on
# the input image to improve pose estimation accuracy.
def detect(input_tensor, inference_count=3):
"""Runs detection on an input image.
Args:
input_tensor: A [height, width, 3] Tensor of type tf.float32.
Note that height and width can be anything since the image will be
immediately resized according to the needs of the model within this
function.
inference_count: Number of times the model should run repeatedly on the
same input image to improve detection accuracy.
Returns:
A Person entity detected by the MoveNet.SinglePose.
"""
image_height, image_width, channel = input_tensor.shape
# Detect pose using the full input image
movenet.detect(input_tensor.numpy(), reset_crop_region=True)
# Repeatedly use the previous detection result to identify the region of
# interest and crop only that region, to improve detection accuracy
for _ in range(inference_count - 1):
person = movenet.detect(input_tensor.numpy(),
reset_crop_region=False)
return person
Cloning into 'examples'... remote: Enumerating objects: 22666, done.[K remote: Counting objects: 100% (722/722), done.[K remote: Compressing objects: 100% (360/360), done.[K remote: Total 22666 (delta 284), reused 607 (delta 251), pack-reused 21944[K Receiving objects: 100% (22666/22666), 41.77 MiB | 27.06 MiB/s, done. Resolving deltas: 100% (12346/12346), done. INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Functions to visualize the pose estimation results.
def draw_prediction_on_image(
image, person, crop_region=None, close_figure=True,
keep_input_size=False):
"""Draws the keypoint predictions on image.
Args:
image: A numpy array with shape [height, width, channel] representing the
pixel values of the input image.
person: A person entity returned from the MoveNet.SinglePose model.
close_figure: Whether to close the plt figure after the function returns.
keep_input_size: Whether to keep the size of the input image.
Returns:
A numpy array with shape [out_height, out_width, channel] representing the
image overlaid with keypoint predictions.
"""
# Draw the detection result on top of the image.
image_np = utils.visualize(image, [person])
# Plot the image with detection results.
height, width, channel = image.shape
aspect_ratio = float(width) / height
fig, ax = plt.subplots(figsize=(12 * aspect_ratio, 12))
im = ax.imshow(image_np)
if close_figure:
plt.close(fig)
if not keep_input_size:
image_np = utils.keep_aspect_ratio_resizer(image_np, (512, 512))
return image_np
Code to load the images, detect pose landmarks and save them into a CSV file
class MoveNetPreprocessor(object):
"""Helper class to preprocess pose sample images for classification."""
def __init__(self,
images_in_folder,
images_out_folder,
csvs_out_path):
"""Creates a preprocessor to detection pose from images and save as CSV.
Args:
images_in_folder: Path to the folder with the input images. It should
follow this structure:
yoga_poses
|__ downdog
|______ 00000128.jpg
|______ 00000181.bmp
|______ ...
|__ goddess
|______ 00000243.jpg
|______ 00000306.jpg
|______ ...
...
images_out_folder: Path to write the images overlaid with the detected
landmarks. These images are useful when you need to debug accuracy
issues.
csvs_out_path: Path to write the CSV containing the detected landmark
coordinates and label of each image that can be used to train a pose
classification model.
"""
self._images_in_folder = images_in_folder
self._images_out_folder = images_out_folder
self._csvs_out_path = csvs_out_path
self._messages = []
# Create a temp dir to store the pose CSVs per class
self._csvs_out_folder_per_class = tempfile.mkdtemp()
# Get list of pose classes and print image statistics
self._pose_class_names = sorted(
[n for n in os.listdir(self._images_in_folder) if not n.startswith('.')]
)
def process(self, per_pose_class_limit=None, detection_threshold=0.1):
"""Preprocesses images in the given folder.
Args:
per_pose_class_limit: Number of images to load per class. As preprocessing
usually takes time, you can specify this parameter to use only a subset
of the dataset for quick testing.
detection_threshold: Only keep images whose landmark confidence scores are
all above this threshold.
"""
# Loop through the classes and preprocess their images
for pose_class_name in self._pose_class_names:
print('Preprocessing', pose_class_name, file=sys.stderr)
# Paths for the pose class.
images_in_folder = os.path.join(self._images_in_folder, pose_class_name)
images_out_folder = os.path.join(self._images_out_folder, pose_class_name)
csv_out_path = os.path.join(self._csvs_out_folder_per_class,
pose_class_name + '.csv')
if not os.path.exists(images_out_folder):
os.makedirs(images_out_folder)
# Detect landmarks in each image and write them to a CSV file
with open(csv_out_path, 'w') as csv_out_file:
csv_out_writer = csv.writer(csv_out_file,
delimiter=',',
quoting=csv.QUOTE_MINIMAL)
# Get list of images
image_names = sorted(
[n for n in os.listdir(images_in_folder) if not n.startswith('.')])
if per_pose_class_limit is not None:
image_names = image_names[:per_pose_class_limit]
valid_image_count = 0
# Detect pose landmarks from each image
for image_name in tqdm.tqdm(image_names):
image_path = os.path.join(images_in_folder, image_name)
try:
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
except:
self._messages.append('Skipped ' + image_path + '. Invalid image.')
continue
else:
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
image_height, image_width, channel = image.shape
# Skip images that aren't RGB because MoveNet requires RGB images
if channel != 3:
self._messages.append('Skipped ' + image_path +
'. Image isn\'t in RGB format.')
continue
person = detect(image)
# Save landmarks if all landmarks were detected
min_landmark_score = min(
[keypoint.score for keypoint in person.keypoints])
should_keep_image = min_landmark_score >= detection_threshold
if not should_keep_image:
self._messages.append('Skipped ' + image_path +
'. No pose was confidently detected.')
continue
valid_image_count += 1
# Draw the prediction result on top of the image for debugging later
output_overlay = draw_prediction_on_image(
image.numpy().astype(np.uint8), person,
close_figure=True, keep_input_size=True)
# Write detection result into an image file
output_frame = cv2.cvtColor(output_overlay, cv2.COLOR_RGB2BGR)
cv2.imwrite(os.path.join(images_out_folder, image_name), output_frame)
# Get the landmarks and scale them to the same size as the input image
pose_landmarks = np.array(
[[keypoint.coordinate.x, keypoint.coordinate.y, keypoint.score]
for keypoint in person.keypoints],
dtype=np.float32)
# Write the landmark coordinates to its per-class CSV file
coordinates = pose_landmarks.flatten().astype(str).tolist()
csv_out_writer.writerow([image_name] + coordinates)
if not valid_image_count:
raise RuntimeError(
'No valid images found for the "{}" class.'
.format(pose_class_name))
# Print the error messages collected during preprocessing.
print('\n'.join(self._messages))
# Combine all per-class CSVs into a single output file
all_landmarks_df = self._all_landmarks_as_dataframe()
all_landmarks_df.to_csv(self._csvs_out_path, index=False)
def class_names(self):
"""List of classes found in the training dataset."""
return self._pose_class_names
def _all_landmarks_as_dataframe(self):
"""Merge all per-class CSVs into a single dataframe."""
total_df = None
for class_index, class_name in enumerate(self._pose_class_names):
csv_out_path = os.path.join(self._csvs_out_folder_per_class,
class_name + '.csv')
per_class_df = pd.read_csv(csv_out_path, header=None)
# Add the labels
per_class_df['class_no'] = [class_index]*len(per_class_df)
per_class_df['class_name'] = [class_name]*len(per_class_df)
# Prepend the class (folder) name to the filename column (first column)
per_class_df[per_class_df.columns[0]] = (os.path.join(class_name, '')
+ per_class_df[per_class_df.columns[0]].astype(str))
if total_df is None:
# For the first class, assign its data to the total dataframe
total_df = per_class_df
else:
# Concatenate each class's data into the total dataframe
total_df = pd.concat([total_df, per_class_df], axis=0)
list_name = [[bodypart.name + '_x', bodypart.name + '_y',
bodypart.name + '_score'] for bodypart in BodyPart]
header_name = []
for columns_name in list_name:
header_name += columns_name
header_name = ['file_name'] + header_name
header_map = {total_df.columns[i]: header_name[i]
for i in range(len(header_name))}
total_df.rename(header_map, axis=1, inplace=True)
return total_df
(Optional) Code snippet to try out the MoveNet pose estimation logic
test_image_url = "https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg"
!wget -O /tmp/image.jpeg {test_image_url}
if len(test_image_url):
image = tf.io.read_file('/tmp/image.jpeg')
image = tf.io.decode_jpeg(image)
person = detect(image)
_ = draw_prediction_on_image(image.numpy(), person, crop_region=None,
close_figure=False, keep_input_size=True)
--2022-10-20 11:11:59-- https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg Resolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.10.234, 104.18.11.234, 2606:4700::6812:bea, ... Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.10.234|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 29024 (28K) [binary/octet-stream] Saving to: ‘/tmp/image.jpeg’ /tmp/image.jpeg 100%[===================>] 28.34K --.-KB/s in 0s 2022-10-20 11:11:59 (84.7 MB/s) - ‘/tmp/image.jpeg’ saved [29024/29024]
Part 1: Preprocess the input images
Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to generate our training dataset by running labeled images through MoveNet and then capturing all the landmark data and ground truth labels into a CSV file.
The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses. The directory is already split into a train dataset and a test dataset.
So in this section, we'll download the yoga dataset and run it through MoveNet so we can capture all the landmarks into a CSV file. However, it takes about 15 minutes to feed our yoga dataset to MoveNet and generate this CSV file. So as an alternative, you can download a pre-existing CSV file for the yoga dataset by setting the is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that would be created in this preprocessing step.
On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 as False), following the instructions below to upload your own pose dataset.
is_skip_step_1 = False
(Optional) Upload your own pose dataset
use_custom_dataset = False
dataset_is_split = False
If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps:
1. Set the above use_custom_dataset option to True.
2. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your images dataset. The folder must include sorted images of your poses as follows.
If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this:
yoga_poses/
|__ train/
|__ downdog/
|______ 00000128.jpg
|______ ...
|__ test/
|__ downdog/
|______ 00000181.jpg
|______ ...
Or, if your dataset is NOT split yet, then set dataset_is_split to False and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this:
yoga_poses/
|__ downdog/
|______ 00000128.jpg
|______ 00000181.jpg
|______ ...
|__ goddess/
|______ 00000243.jpg
|______ 00000306.jpg
|______ ...
3. Click the Files tab on the left (folder icon) and then click Upload to session storage (file icon).
4. Select your archive file and wait until it finishes uploading before you proceed.
5. Edit the following code block to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll need to also modify that part if your archive is in another format.)
6. Now run the rest of the notebook.
import os
import random
import shutil
def split_into_train_test(images_origin, images_dest, test_split):
"""Splits a directory of sorted images into training and test sets.
Args:
images_origin: Path to the directory with your images. This directory
must include subdirectories for each of your labeled classes. For example:
yoga_poses/
|__ downdog/
|______ 00000128.jpg
|______ 00000181.jpg
|______ ...
|__ goddess/
|______ 00000243.jpg
|______ 00000306.jpg
|______ ...
...
images_dest: Path to a directory where you want the split dataset to be
saved. The result looks like this:
split_yoga_poses/
|__ train/
|__ downdog/
|______ 00000128.jpg
|______ ...
|__ test/
|__ downdog/
|______ 00000181.jpg
|______ ...
test_split: Fraction of data to reserve for test (float between 0 and 1).
"""
_, dirs, _ = next(os.walk(images_origin))
TRAIN_DIR = os.path.join(images_dest, 'train')
TEST_DIR = os.path.join(images_dest, 'test')
os.makedirs(TRAIN_DIR, exist_ok=True)
os.makedirs(TEST_DIR, exist_ok=True)
for dir in dirs:
# Get all filenames for this dir, filtered by filetype
filenames = os.listdir(os.path.join(images_origin, dir))
filenames = [os.path.join(images_origin, dir, f) for f in filenames if (
f.endswith('.png') or f.endswith('.jpg') or f.endswith('.jpeg') or f.endswith('.bmp'))]
# Shuffle the files, deterministically
filenames.sort()
random.seed(42)
random.shuffle(filenames)
# Divide them into train/test dirs
os.makedirs(os.path.join(TEST_DIR, dir), exist_ok=True)
os.makedirs(os.path.join(TRAIN_DIR, dir), exist_ok=True)
test_count = int(len(filenames) * test_split)
for i, file in enumerate(filenames):
if i < test_count:
destination = os.path.join(TEST_DIR, dir, os.path.split(file)[1])
else:
destination = os.path.join(TRAIN_DIR, dir, os.path.split(file)[1])
shutil.copyfile(file, destination)
print(f'Copied {test_count} of {len(filenames)} images from class "{dir}" into test.')
print(f'Your split dataset is in "{images_dest}"')
if use_custom_dataset:
# ATTENTION:
# You must edit these two lines to match your archive and images folder name:
# !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar
!unzip -q YOUR_DATASET_ARCHIVE_NAME.zip
dataset_in = 'YOUR_DATASET_DIR_NAME'
# You can leave the rest alone:
if not os.path.isdir(dataset_in):
raise Exception("dataset_in is not a valid directory")
if dataset_is_split:
IMAGES_ROOT = dataset_in
else:
dataset_out = 'split_' + dataset_in
split_into_train_test(dataset_in, dataset_out, test_split=0.2)
IMAGES_ROOT = dataset_out
Download the yoga dataset
if not is_skip_step_1 and not use_custom_dataset:
!wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
!unzip -q yoga_poses.zip -d yoga_cg
IMAGES_ROOT = "yoga_cg"
--2022-10-20 11:12:03-- http://download.tensorflow.org/data/pose_classification/yoga_poses.zip Resolving download.tensorflow.org (download.tensorflow.org)... 74.125.142.128, 2607:f8b0:400e:c08::80 Connecting to download.tensorflow.org (download.tensorflow.org)|74.125.142.128|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 102517581 (98M) [application/zip] Saving to: ‘yoga_poses.zip’ yoga_poses.zip 100%[===================>] 97.77M 95.0MB/s in 1.0s 2022-10-20 11:12:04 (95.0 MB/s) - ‘yoga_poses.zip’ saved [102517581/102517581]
Preprocess the TRAIN dataset
if not is_skip_step_1:
images_in_train_folder = os.path.join(IMAGES_ROOT, 'train')
images_out_train_folder = 'poses_images_out_train'
csvs_out_train_path = 'train_data.csv'
preprocessor = MoveNetPreprocessor(
images_in_folder=images_in_train_folder,
images_out_folder=images_out_train_folder,
csvs_out_path=csvs_out_train_path,
)
preprocessor.process(per_pose_class_limit=None)
Preprocessing chair 0%| | 0/200 [00:00<?, ?it/s]/tmpfs/tmp/ipykernel_8630/1291585794.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations coordinates = pose_landmarks.flatten().astype(np.str).tolist() 100%|██████████| 200/200 [00:15<00:00, 12.81it/s] Preprocessing cobra 100%|██████████| 200/200 [00:14<00:00, 13.52it/s] Preprocessing dog 100%|██████████| 200/200 [00:15<00:00, 13.23it/s] Preprocessing tree 100%|██████████| 200/200 [00:15<00:00, 12.67it/s] Preprocessing warrior 100%|██████████| 200/200 [00:13<00:00, 14.51it/s] Skipped yoga_cg/train/chair/girl3_chair091.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair093.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair096.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair097.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair099.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair100.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair104.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair106.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair110.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair114.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair115.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair118.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair122.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair123.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair124.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair125.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair131.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair132.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair133.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair134.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair136.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair138.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair139.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/girl3_chair142.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair089.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair136.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair140.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair143.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair144.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair145.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/chair/guy2_chair146.jpg. 
No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra026.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra029.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra030.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra038.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra040.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra041.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra048.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra050.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra051.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra055.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra059.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra061.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra068.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra070.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra081.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra087.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra089.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra090.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra091.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra093.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra096.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra099.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra102.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra110.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra112.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra115.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra119.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra122.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra128.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra129.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra136.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl1_cobra140.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra029.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra046.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra050.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra053.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra108.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra117.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra129.jpg. No pose was confidentlly detected. 
Skipped yoga_cg/train/cobra/girl2_cobra133.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra136.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl2_cobra140.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra028.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra030.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra032.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra039.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra040.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra051.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra052.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra058.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra062.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra068.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra072.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra076.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra078.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra079.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra082.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra097.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra099.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra107.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra129.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra130.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra132.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra134.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/girl3_cobra138.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra034.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra042.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra043.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra047.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra053.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra065.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra077.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra078.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra080.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra081.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra084.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra089.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra102.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra105.jpg. No pose was confidentlly detected. 
Skipped yoga_cg/train/cobra/guy2_cobra108.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/cobra/guy2_cobra139.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl1_dog027.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl1_dog028.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl1_dog030.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl1_dog032.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog075.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog080.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog083.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog085.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog087.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog090.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog091.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog093.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog095.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog099.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog100.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog101.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog103.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog104.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog105.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog107.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl2_dog111.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog025.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog026.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog027.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog028.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog031.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog033.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog035.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog037.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog040.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog041.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog047.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog052.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog062.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog072.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog074.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog075.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog077.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog081.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog082.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog086.jpg. 
No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog090.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog095.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog096.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog100.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog102.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog103.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog104.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog106.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog107.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/girl3_dog111.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/guy1_dog070.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/guy1_dog076.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/guy2_dog070.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/guy2_dog071.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/dog/guy2_dog082.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/girl2_tree119.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/girl2_tree122.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/girl2_tree161.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/girl2_tree163.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy1_tree139.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy1_tree140.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy1_tree141.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy1_tree143.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree085.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree086.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree087.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree090.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree145.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/tree/guy2_tree147.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior049.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior053.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior064.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior066.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior067.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior072.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior075.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior077.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior080.jpg. No pose was confidentlly detected. 
Skipped yoga_cg/train/warrior/girl1_warrior083.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior084.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior087.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior089.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior093.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior095.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior098.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior099.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior100.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior103.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior108.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior109.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior111.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior112.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior113.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior114.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior116.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl1_warrior117.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior047.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior049.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior050.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior052.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior057.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior058.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior063.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior068.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior079.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior083.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior085.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior096.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior097.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior102.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior106.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl2_warrior108.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior042.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior043.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior047.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior049.jpg. 
No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior051.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior054.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior056.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior057.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior061.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior066.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior067.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior073.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior074.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior075.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior079.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior087.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior089.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior090.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior091.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior095.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior096.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior100.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior103.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior107.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior115.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior117.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior134.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior140.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/girl3_warrior143.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior043.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior048.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior051.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior052.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior055.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior057.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior062.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior068.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior069.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior073.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior076.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior077.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior080.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior081.jpg. 
No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior082.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior088.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior091.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior092.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior093.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior094.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior097.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior118.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior120.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior121.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior124.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior125.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior126.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior131.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior134.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior135.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior138.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior143.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior145.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy1_warrior148.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior051.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior086.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior111.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior118.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior122.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior129.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior131.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior135.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior137.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior139.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior145.jpg. No pose was confidentlly detected. Skipped yoga_cg/train/warrior/guy2_warrior148.jpg. No pose was confidentlly detected.
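(Optional) As a quick check that isn't part of the original notebook, you can inspect the CSV that was just written. Each row contains the image's relative file name, the 17 landmarks as <BODY_PART>_x, <BODY_PART>_y, and <BODY_PART>_score columns (named after the BodyPart enum), plus the class_no and class_name labels:
if not is_skip_step_1:
  # Peek at the generated training CSV: 1 file_name column + 17*3 landmark
  # columns + class_no + class_name.
  train_df_preview = pd.read_csv(csvs_out_train_path)
  print(train_df_preview.shape)
  print(train_df_preview.columns.tolist()[:7])
  # Number of valid (kept) images per pose class.
  print(train_df_preview['class_name'].value_counts())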
Preprocess the TEST dataset
if not is_skip_step_1:
images_in_test_folder = os.path.join(IMAGES_ROOT, 'test')
images_out_test_folder = 'poses_images_out_test'
csvs_out_test_path = 'test_data.csv'
preprocessor = MoveNetPreprocessor(
images_in_folder=images_in_test_folder,
images_out_folder=images_out_test_folder,
csvs_out_path=csvs_out_test_path,
)
preprocessor.process(per_pose_class_limit=None)
Preprocessing chair 0%| | 0/84 [00:00<?, ?it/s]/tmpfs/tmp/ipykernel_8630/1291585794.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations coordinates = pose_landmarks.flatten().astype(np.str).tolist() 100%|██████████| 84/84 [00:07<00:00, 10.98it/s] Preprocessing cobra 100%|██████████| 116/116 [00:08<00:00, 13.25it/s] Preprocessing dog 100%|██████████| 90/90 [00:07<00:00, 12.61it/s] Preprocessing tree 100%|██████████| 96/96 [00:07<00:00, 12.21it/s] Preprocessing warrior 100%|██████████| 109/109 [00:08<00:00, 13.11it/s] Skipped yoga_cg/test/cobra/guy3_cobra048.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra050.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra051.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra052.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra053.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra054.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra055.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra056.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra057.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra058.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra059.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra060.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra062.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra069.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra075.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra077.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra081.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra124.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra131.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra132.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra134.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra135.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/cobra/guy3_cobra136.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog025.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog026.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog036.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog042.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog106.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/dog/guy3_dog108.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior042.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior043.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior044.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior045.jpg. No pose was confidentlly detected. 
Skipped yoga_cg/test/warrior/guy3_warrior046.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior047.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior048.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior050.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior051.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior052.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior053.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior054.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior055.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior056.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior059.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior060.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior062.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior063.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior065.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior066.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior068.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior070.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior071.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior072.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior073.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior074.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior075.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior076.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior077.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior079.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior080.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior081.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior082.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior083.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior084.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior085.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior086.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior087.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior088.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior089.jpg. No pose was confidentlly detected. Skipped yoga_cg/test/warrior/guy3_warrior137.jpg. No pose was confidentlly detected.
Part 2: Train a pose classification model that takes the landmark coordinates as input, and outputs the predicted labels.
You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels:
- Submodel 1 calculates a pose embedding (a.k.a. feature vector) from the detected landmark coordinates.
- Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.
You'll then train the model on the dataset that was preprocessed in Part 1.
(Optional) Download the preprocessed dataset if you didn't run Part 1
# Download the preprocessed CSV files which are the same as the output of step 1
if is_skip_step_1:
!wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
!wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv
csvs_out_train_path = 'train_data.csv'
csvs_out_test_path = 'test_data.csv'
is_skipped_step_1 = True
Load the preprocessed CSVs into TRAIN and TEST datasets.
def load_pose_landmarks(csv_path):
"""Loads a CSV created by MoveNetPreprocessor.
Returns:
X: Detected landmark coordinates and scores of shape (N, 17 * 3)
y: Ground truth labels of shape (N, label_count)
classes: The list of all class names found in the dataset
dataframe: The entire CSV loaded as a Pandas dataframe, containing the
file names, features (X), and ground truth labels (y), kept for
reference when analyzing the pose classification model later.
"""
# Load the CSV file
dataframe = pd.read_csv(csv_path)
df_to_process = dataframe.copy()
# Drop the file_name column as you don't need it during training.
df_to_process.drop(columns=['file_name'], inplace=True)
# Extract the list of class names
classes = df_to_process.pop('class_name').unique()
# Extract the labels
y = df_to_process.pop('class_no')
# Convert the input features and labels into the correct format for training.
X = df_to_process.astype('float64')
y = keras.utils.to_categorical(y)
return X, y, classes, dataframe
Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).
# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)
# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)
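As a quick sanity check (not part of the original notebook), you can print the shapes of the splits: each example has 51 features (17 landmarks × 3 values), and the labels are one-hot encoded with one column per pose class (5 classes for the default yoga dataset).
# Sanity-check the splits: 51 features = 17 landmarks * (x, y, score).
print('X_train:', X_train.shape, 'y_train:', y_train.shape)
print('X_val:', X_val.shape, 'y_val:', y_val.shape)
print('X_test:', X_test.shape, 'y_test:', y_test.shape)
print('Classes:', class_names)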
Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification
Next, convert the landmark coordinates to a feature vector by:
- Moving the pose center to the origin.
- Scaling the pose so that the pose size becomes 1.
- Flattening these coordinates into a feature vector.
Then use this feature vector to train a neural-network based pose classifier.
def get_center_point(landmarks, left_bodypart, right_bodypart):
"""Calculates the center point of the two given landmarks."""
left = tf.gather(landmarks, left_bodypart.value, axis=1)
right = tf.gather(landmarks, right_bodypart.value, axis=1)
center = left * 0.5 + right * 0.5
return center
def get_pose_size(landmarks, torso_size_multiplier=2.5):
"""Calculates pose size.
It is the maximum of two values:
* Torso size multiplied by `torso_size_multiplier`
* Maximum distance from pose center to any pose landmark
"""
# Hips center
hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP,
BodyPart.RIGHT_HIP)
# Shoulders center
shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER,
BodyPart.RIGHT_SHOULDER)
# Torso size as the minimum body size
torso_size = tf.linalg.norm(shoulders_center - hips_center)
# Pose center
pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP,
BodyPart.RIGHT_HIP)
pose_center_new = tf.expand_dims(pose_center_new, axis=1)
# Broadcast the pose center to the same size as the landmark vector to
# perform subtraction
pose_center_new = tf.broadcast_to(pose_center_new,
[tf.size(landmarks) // (17*2), 17, 2])
# Dist to pose center
d = tf.gather(landmarks - pose_center_new, 0, axis=0,
name="dist_to_pose_center")
# Max dist to pose center
max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))
# Normalize scale
pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)
return pose_size
def normalize_pose_landmarks(landmarks):
"""Normalizes the landmarks translation by moving the pose center to (0,0) and
scaling it to a constant pose size.
"""
# Move landmarks so that the pose center becomes (0,0)
pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP,
BodyPart.RIGHT_HIP)
pose_center = tf.expand_dims(pose_center, axis=1)
# Broadcast the pose center to the same size as the landmark vector to perform
# subtraction
pose_center = tf.broadcast_to(pose_center,
[tf.size(landmarks) // (17*2), 17, 2])
landmarks = landmarks - pose_center
# Scale the landmarks to a constant pose size
pose_size = get_pose_size(landmarks)
landmarks /= pose_size
return landmarks
def landmarks_to_embedding(landmarks_and_scores):
"""Converts the input landmarks into a pose embedding."""
# Reshape the flat input into a matrix with shape=(17, 3)
reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)
# Normalize landmarks 2D
landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])
# Flatten the normalized landmark coordinates into a vector
embedding = keras.layers.Flatten()(landmarks)
return embedding
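As a quick sanity check (not part of the original notebook), you can run these functions eagerly on a dummy 51-value input and confirm that the embedding is a 34-value vector (17 normalized x, y pairs). The input here is random noise rather than a real pose, so only the shape is meaningful:
# Shape check for the embedding pipeline using a dummy (1, 51) input.
dummy_landmarks = tf.random.uniform((1, 51))
dummy_embedding = landmarks_to_embedding(dummy_landmarks)
print(dummy_embedding.shape)  # expected: (1, 34)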
Define a Keras model for pose classification
Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.
# Define the model
inputs = tf.keras.Input(shape=(51,))
embedding = landmarks_to_embedding(inputs)
layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)
model = keras.Model(inputs, outputs)
model.summary()
Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 51)] 0 [] reshape (Reshape) (None, 17, 3) 0 ['input_1[0][0]'] tf.__operators__.getitem (Slic (None, 17, 2) 0 ['reshape[0][0]'] ingOpLambda) tf.compat.v1.gather (TFOpLambd (None, 2) 0 ['tf.__operators__.getitem[0][0]' a) ] tf.compat.v1.gather_1 (TFOpLam (None, 2) 0 ['tf.__operators__.getitem[0][0]' bda) ] tf.math.multiply (TFOpLambda) (None, 2) 0 ['tf.compat.v1.gather[0][0]'] tf.math.multiply_1 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_1[0][0]'] ) tf.__operators__.add (TFOpLamb (None, 2) 0 ['tf.math.multiply[0][0]', da) 'tf.math.multiply_1[0][0]'] tf.compat.v1.size (TFOpLambda) () 0 ['tf.__operators__.getitem[0][0]' ] tf.expand_dims (TFOpLambda) (None, 1, 2) 0 ['tf.__operators__.add[0][0]'] tf.compat.v1.floor_div (TFOpLa () 0 ['tf.compat.v1.size[0][0]'] mbda) tf.broadcast_to (TFOpLambda) (None, 17, 2) 0 ['tf.expand_dims[0][0]', 'tf.compat.v1.floor_div[0][0]'] tf.math.subtract (TFOpLambda) (None, 17, 2) 0 ['tf.__operators__.getitem[0][0]' , 'tf.broadcast_to[0][0]'] tf.compat.v1.gather_6 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.compat.v1.gather_7 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.math.multiply_6 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_6[0][0]'] ) tf.math.multiply_7 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_7[0][0]'] ) tf.__operators__.add_3 (TFOpLa (None, 2) 0 ['tf.math.multiply_6[0][0]', mbda) 'tf.math.multiply_7[0][0]'] tf.compat.v1.size_1 (TFOpLambd () 0 ['tf.math.subtract[0][0]'] a) tf.compat.v1.gather_4 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.compat.v1.gather_5 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.compat.v1.gather_2 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.compat.v1.gather_3 (TFOpLam (None, 2) 0 ['tf.math.subtract[0][0]'] bda) tf.expand_dims_1 (TFOpLambda) (None, 1, 2) 0 ['tf.__operators__.add_3[0][0]'] tf.compat.v1.floor_div_1 (TFOp () 0 ['tf.compat.v1.size_1[0][0]'] Lambda) tf.math.multiply_4 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_4[0][0]'] ) tf.math.multiply_5 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_5[0][0]'] ) tf.math.multiply_2 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_2[0][0]'] ) tf.math.multiply_3 (TFOpLambda (None, 2) 0 ['tf.compat.v1.gather_3[0][0]'] ) tf.broadcast_to_1 (TFOpLambda) (None, 17, 2) 0 ['tf.expand_dims_1[0][0]', 'tf.compat.v1.floor_div_1[0][0]' ] tf.__operators__.add_2 (TFOpLa (None, 2) 0 ['tf.math.multiply_4[0][0]', mbda) 'tf.math.multiply_5[0][0]'] tf.__operators__.add_1 (TFOpLa (None, 2) 0 ['tf.math.multiply_2[0][0]', mbda) 'tf.math.multiply_3[0][0]'] tf.math.subtract_2 (TFOpLambda (None, 17, 2) 0 ['tf.math.subtract[0][0]', ) 'tf.broadcast_to_1[0][0]'] tf.math.subtract_1 (TFOpLambda (None, 2) 0 ['tf.__operators__.add_2[0][0]', ) 'tf.__operators__.add_1[0][0]'] tf.compat.v1.gather_8 (TFOpLam (17, 2) 0 ['tf.math.subtract_2[0][0]'] bda) tf.compat.v1.norm (TFOpLambda) () 0 ['tf.math.subtract_1[0][0]'] tf.compat.v1.norm_1 (TFOpLambd (2,) 0 ['tf.compat.v1.gather_8[0][0]'] a) tf.math.multiply_8 (TFOpLambda () 0 ['tf.compat.v1.norm[0][0]'] ) tf.math.reduce_max (TFOpLambda () 0 ['tf.compat.v1.norm_1[0][0]'] ) tf.math.maximum (TFOpLambda) () 0 ['tf.math.multiply_8[0][0]', 'tf.math.reduce_max[0][0]'] tf.math.truediv (TFOpLambda) (None, 17, 2) 0 ['tf.math.subtract[0][0]', 
'tf.math.maximum[0][0]'] flatten (Flatten) (None, 34) 0 ['tf.math.truediv[0][0]'] dense (Dense) (None, 128) 4480 ['flatten[0][0]'] dropout (Dropout) (None, 128) 0 ['dense[0][0]'] dense_1 (Dense) (None, 64) 8256 ['dropout[0][0]'] dropout_1 (Dropout) (None, 64) 0 ['dense_1[0][0]'] dense_2 (Dense) (None, 5) 325 ['dropout_1[0][0]'] ================================================================================================== Total params: 13,061 Trainable params: 13,061 Non-trainable params: 0 __________________________________________________________________________________________________
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
monitor='val_accuracy',
verbose=1,
save_best_only=True,
mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy',
patience=20)
# Start training
history = model.fit(X_train, y_train,
epochs=200,
batch_size=16,
validation_data=(X_val, y_val),
callbacks=[checkpoint, earlystopping])
Epoch 1/200: 37/37 - loss: 1.5338 - accuracy: 0.3702 - val_loss: 1.3954 - val_accuracy: 0.4608 (val_accuracy improved from -inf to 0.46078, saving model to weights.best.hdf5)
...
Epoch 25/200: 37/37 - loss: 0.1782 - accuracy: 0.9533 - val_loss: 0.1211 - val_accuracy: 0.9706 (val_accuracy improved from 0.95098 to 0.97059, saving model to weights.best.hdf5)
...
Epoch 45/200: 37/37 - loss: 0.1053 - accuracy: 0.9740 - val_loss: 0.0673 - val_accuracy: 0.9706 (val_accuracy did not improve from 0.97059)
(Training stops at epoch 45 because EarlyStopping saw no improvement in val_accuracy for 20 epochs after the best epoch, epoch 25.)
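Note that the model now holds the weights from the last epoch it ran, while the ModelCheckpoint callback saved the epoch with the highest validation accuracy to weights.best.hdf5. If you'd rather continue with that best checkpoint instead, a minimal optional sketch:
# Optionally restore the checkpoint with the highest validation accuracy
# before visualizing and evaluating.
model.load_weights(checkpoint_path)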
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()
# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
14/14 [==============================] - 0s 2ms/step - loss: 0.0424 - accuracy: 0.9906
Draw the confusion matrix to better understand the model's performance
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""Plots the confusion matrix."""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=55)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)
# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]
# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
class_names,
title='Confusion Matrix of Pose Classification Model')
# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
y_pred_label))
14/14 [==============================] - 0s 2ms/step
Confusion matrix, without normalization

Classification Report:
              precision    recall  f1-score   support

       chair       0.98      1.00      0.99        84
       cobra       0.99      1.00      0.99        93
         dog       0.99      1.00      0.99        84
        tree       1.00      1.00      1.00        96
     warrior       1.00      0.94      0.97        68

    accuracy                           0.99       425
   macro avg       0.99      0.99      0.99       425
weighted avg       0.99      0.99      0.99       425
(Optional) Investigate incorrect predictions
You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved.
if is_skip_step_1:
raise RuntimeError('You must have run step 1 to run this cell.')
# (If step 1 was skipped, the preprocessed test images needed below aren't available.)
IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30
# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test)) \
if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]
# Plot the incorrectly predicted images
row_count = len(false_predict) // IMAGE_PER_ROW + 1
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
image_path = os.path.join(images_out_test_folder,
df_test.iloc[id_in_df]['file_name'])
image = cv2.imread(image_path)
plt.title("Predict: %s; Actual: %s"
% (y_pred_label[id_in_df], y_true_label[id_in_df]))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
Part 3: Convert the pose classification model to TensorFlow Lite
You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers, and edge devices. When converting the model, you'll apply dynamic range quantization to reduce the pose classification TensorFlow Lite model to about a quarter of its original size, with an insignificant accuracy loss.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print('Model size: %dKB' % (len(tflite_model) / 1024))
with open('pose_classifier.tflite', 'wb') as f:
f.write(tflite_model)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpxni9vzjh/assets Model size: 26KB 2022-10-20 11:14:16.614189: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format. 2022-10-20 11:14:16.614228: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
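If you'd like to see the effect of dynamic range quantization yourself, one way (a sketch that isn't part of the original workflow) is to also convert the same model without any optimization flags and compare the two file sizes:
# Convert the same Keras model without quantization for a size comparison.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()
print('Float model size: %dKB' % (len(float_tflite_model) / 1024))
print('Quantized model size: %dKB' % (len(tflite_model) / 1024))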
Then you'll write the label file, which contains the mapping from the class indexes to human-readable class names.
with open('pose_labels.txt', 'w') as f:
f.write('\n'.join(class_names))
As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.
def evaluate_model(interpreter, X, y_true):
"""Evaluates the given TFLite model and return its accuracy."""
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on all given poses.
y_pred = []
for i in range(len(y_true)):
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = X[i: i + 1].astype('float32')
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the class with highest
# probability.
output = interpreter.tensor(output_index)
predicted_label = np.argmax(output()[0])
y_pred.append(predicted_label)
# Compare prediction results with ground truth labels to calculate accuracy.
y_pred = keras.utils.to_categorical(y_pred, num_classes=y_true.shape[1])
return accuracy_score(y_true, y_pred)
# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
evaluate_model(classifier_interpreter, X_test, y_test))
Accuracy of TFLite model: 0.9905882352941177
Now you can download the TFLite model (pose_classifier.tflite) and the label file (pose_labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample apps for an end-to-end example of how to use the TFLite pose classification model.
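If you want to try the exported files outside of those apps, the sketch below shows one hypothetical way to classify a single pose locally with the TFLite Interpreter. It assumes you have a (1, 51) float32 landmark vector from MoveNet; here one row of the TEST set is reused purely for illustration:
# Hypothetical local inference with the exported classifier and label file.
with open('pose_labels.txt') as f:
  labels = f.read().splitlines()

pose_interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
pose_interpreter.allocate_tensors()
input_index = pose_interpreter.get_input_details()[0]['index']
output_index = pose_interpreter.get_output_details()[0]['index']

# One (1, 51) landmark vector; in a real app this would come from MoveNet.
sample = X_test.iloc[0:1].values.astype('float32')
pose_interpreter.set_tensor(input_index, sample)
pose_interpreter.invoke()
scores = pose_interpreter.get_tensor(output_index)[0]
print('Predicted pose:', labels[np.argmax(scores)])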
zip pose_classifier.zip pose_labels.txt pose_classifier.tflite
adding: pose_labels.txt (stored 0%) adding: pose_classifier.tflite (deflated 34%)
# Download the zip archive if running on Colab.
try:
from google.colab import files
files.download('pose_classifier.zip')
except:
pass