Human Pose Classification with MoveNet and TensorFlow Lite

This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose.

The procedure in this notebook consists of 3 parts:

  • Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels.
  • Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels.
  • Part 3: Convert the pose classification model to TFLite.

By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses.


Preparation

In this section, you'll import the required libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels.

Nothing observable happens here, but you can expand the hidden code cells to see the implementation of some of the functions we'll be calling later.

If you just want to create the CSV file without knowing all the details, run this section and proceed to Part 1.

!pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm

from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

Code to run pose estimation using MoveNet

Functions to run pose estimation with MoveNet

Cloning into 'examples'...
remote: Enumerating objects: 20141, done.
remote: Counting objects: 100% (1961/1961), done.
remote: Compressing objects: 100% (1055/1055), done.
remote: Total 20141 (delta 909), reused 1584 (delta 595), pack-reused 18180
Receiving objects: 100% (20141/20141), 33.15 MiB | 25.83 MiB/s, done.
Resolving deltas: 100% (11003/11003), done.

Functions to visualize the pose estimation results

Code to load the images, detect pose landmarks, and save them into a CSV file

(Optional) Code snippet to try out the MoveNet pose estimation logic

--2021-12-21 12:07:36--  https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg
Resolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.20.183, 104.18.21.183, 2606:4700::6812:14b7, ...
Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.20.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28665 (28K) [image/jpeg]
Saving to: ‘/tmp/image.jpeg’

/tmp/image.jpeg     100%[===================>]  27.99K  --.-KB/s    in 0s      

2021-12-21 12:07:36 (111 MB/s) - ‘/tmp/image.jpeg’ saved [28665/28665]

[Image output: the downloaded test image with the detected pose landmarks drawn on top]
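The pose estimation code itself lives in the hidden cells above. For orientation, here is a minimal sketch of how MoveNet inference via TensorFlow Hub typically looks, assuming the singlepose/thunder model; the hidden cells wrap similar logic in helper functions whose names may differ:

import tensorflow as tf
import tensorflow_hub as hub

# Sketch only: load MoveNet (singlepose/thunder) from TF Hub and run it
# once on the downloaded test image.
module = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
movenet = module.signatures['serving_default']

image = tf.io.decode_jpeg(tf.io.read_file('/tmp/image.jpeg'))
# Thunder expects a 256x256 int32 image with a leading batch dimension.
inputs = tf.expand_dims(image, axis=0)
inputs = tf.cast(tf.image.resize_with_pad(inputs, 256, 256), tf.int32)

# Output shape (1, 1, 17, 3): 17 keypoints as (y, x, confidence score),
# with coordinates normalized to [0, 1].
keypoints_with_scores = movenet(inputs)['output_0'].numpy()
print(keypoints_with_scores.shape)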

Part 1: Preprocess the input images

Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to generate our training dataset by running the labeled images through MoveNet and then capturing all the landmark data and ground truth labels into a CSV file.
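For reference, each row of the resulting CSV describes one image: its file name, 17 landmark triplets, and the ground truth class. Here's a hedged way to peek at the file once it exists (the exact column layout comes from the hidden MoveNetPreprocessor implementation, so the names below are assumptions):

import pandas as pd

# Assumed layout: a file_name column, 17 landmarks x (x, y, score) = 51
# feature columns, then the class_no index and the class_name label.
df = pd.read_csv('train_data.csv')
print(df.shape)                   # (num_images, 1 + 51 + 2)
print(df['class_name'].unique())  # e.g. ['chair', 'cobra', ...]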

The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models performing 5 different yoga poses. The directory is already split into a train dataset and a test dataset.

So in this section, we'll download the yoga dataset and run it through MoveNet so we can capture all the landmarks into a CSV file... However, it takes about 15 minutes to feed our yoga dataset through MoveNet and generate this CSV file. So as an alternative, you can download a pre-existing CSV file for the yoga dataset by setting the is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that would be created in this preprocessing step.

On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 as False). Follow the instructions below to upload your own pose dataset.
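The is_skip_step_1, use_custom_dataset, and dataset_is_split flags mentioned above are set in an earlier, hidden form cell. A minimal sketch of such a cell, with assumed default values:

#@title Run-configuration flags (sketch; the default values are assumptions)
is_skip_step_1 = False      # True: skip preprocessing, download prebuilt CSVs
use_custom_dataset = False  # True: preprocess your own uploaded images
dataset_is_split = False    # True: dataset already has train/ and test/ dirs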

(Optional) Upload your own pose dataset

If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps:

  1. Set the use_custom_dataset option above to True.

  2. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your image dataset. The folder must contain your pose images, sorted as follows.

    If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this:

    yoga_poses/
    |__ train/
        |__ downdog/
            |______ 00000128.jpg
            |______ ...
    |__ test/
        |__ downdog/
            |______ 00000181.jpg
            |______ ...
    

    Or, if your dataset is NOT split yet, then set dataset_is_split to False and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this:

    yoga_poses/
    |__ downdog/
        |______ 00000128.jpg
        |______ 00000181.jpg
        |______ ...
    |__ goddess/
        |______ 00000243.jpg
        |______ 00000306.jpg
        |______ ...
    
  3. Click the Files tab on the left (the folder icon) and then click Upload to session storage (the file icon).

  4. Select your archive file and wait until it finishes uploading before you proceed.

  5. Edit the following code block to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll also need to modify that part if your archive is in another format.)

  6. Now run the rest of the notebook.

if use_custom_dataset:
  # ATTENTION:
  # You must edit these two lines to match your archive and images folder name:
  # !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar
  !unzip -q YOUR_DATASET_ARCHIVE_NAME.zip
  dataset_in = 'YOUR_DATASET_DIR_NAME'

  # You can leave the rest alone:
  if not os.path.isdir(dataset_in):
    raise Exception("dataset_in is not a valid directory")
  if dataset_is_split:
    IMAGES_ROOT = dataset_in
  else:
    dataset_out = 'split_' + dataset_in
    split_into_train_test(dataset_in, dataset_out, test_split=0.2)
    IMAGES_ROOT = dataset_out
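
The split_into_train_test helper used above is defined in one of the hidden cells. A minimal sketch of a splitter with the same call signature, under the assumption that it copies each class folder into train/ and test/ subfolders of the destination directory:

import os
import random
import shutil

def split_into_train_test_sketch(images_origin, images_dest, test_split):
  """Sketch of the hidden helper: hold out `test_split` of each class."""
  for class_name in sorted(os.listdir(images_origin)):
    class_dir = os.path.join(images_origin, class_name)
    if not os.path.isdir(class_dir):
      continue  # skip stray files at the top level
    files = [f for f in os.listdir(class_dir)
             if f.lower().endswith(('.jpg', '.jpeg', '.png', '.bmp'))]
    random.shuffle(files)
    num_test = int(len(files) * test_split)
    for split, names in (('test', files[:num_test]),
                         ('train', files[num_test:])):
      out_dir = os.path.join(images_dest, split, class_name)
      os.makedirs(out_dir, exist_ok=True)
      for name in names:
        shutil.copyfile(os.path.join(class_dir, name),
                        os.path.join(out_dir, name))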

Download the yoga dataset

if not is_skip_step_1 and not use_custom_dataset:
  !wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
  !unzip -q yoga_poses.zip -d yoga_cg
  IMAGES_ROOT = "yoga_cg"
--2021-12-21 12:07:46--  http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
Resolving download.tensorflow.org (download.tensorflow.org)... 172.217.218.128, 2a00:1450:4013:c08::80
Connecting to download.tensorflow.org (download.tensorflow.org)|172.217.218.128|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102517581 (98M) [application/zip]
Saving to: ‘yoga_poses.zip’

yoga_poses.zip      100%[===================>]  97.77M  76.7MB/s    in 1.3s    

2021-12-21 12:07:48 (76.7 MB/s) - ‘yoga_poses.zip’ saved [102517581/102517581]

Preprocess the TRAIN dataset

if not is_skip_step_1:
  images_in_train_folder = os.path.join(IMAGES_ROOT, 'train')
  images_out_train_folder = 'poses_images_out_train'
  csvs_out_train_path = 'train_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_train_folder,
      images_out_folder=images_out_train_folder,
      csvs_out_path=csvs_out_train_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/200 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 200/200 [00:32<00:00,  6.10it/s]
Preprocessing cobra
100%|██████████| 200/200 [00:31<00:00,  6.27it/s]
Preprocessing dog
100%|██████████| 200/200 [00:32<00:00,  6.15it/s]
Preprocessing tree
100%|██████████| 200/200 [00:33<00:00,  6.00it/s]
Preprocessing warrior
100%|██████████| 200/200 [00:30<00:00,  6.54it/s]
Skipped yoga_cg/train/chair/girl3_chair091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair123.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair142.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair144.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair146.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra038.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra128.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra039.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra130.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra034.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog101.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog031.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog033.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog035.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog037.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree161.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree163.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree141.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree147.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior064.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior098.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior109.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior113.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior116.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior120.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior121.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior126.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior148.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior137.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior148.jpg. No pose was confidentlly detected.

Preprocess the TEST dataset

if not is_skip_step_1:
  images_in_test_folder = os.path.join(IMAGES_ROOT, 'test')
  images_out_test_folder = 'poses_images_out_test'
  csvs_out_test_path = 'test_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_test_folder,
      images_out_folder=images_out_test_folder,
      csvs_out_path=csvs_out_test_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/84 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 84/84 [00:15<00:00,  5.51it/s]
Preprocessing cobra
100%|██████████| 116/116 [00:19<00:00,  6.10it/s]
Preprocessing dog
100%|██████████| 90/90 [00:14<00:00,  6.03it/s]
Preprocessing tree
100%|██████████| 96/96 [00:16<00:00,  5.98it/s]
Preprocessing warrior
100%|██████████| 109/109 [00:17<00:00,  6.38it/s]
Skipped yoga_cg/test/cobra/guy3_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog036.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior044.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior045.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior137.jpg. No pose was confidentlly detected.

Part 2: Train a pose classification model that takes the landmark coordinates as input, and outputs the predicted labels

You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels:

  • Submodel 1 calculates a pose embedding (a.k.a. feature vector) from the detected landmark coordinates.
  • Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.

You'll then train the model based on the dataset that was preprocessed in Part 1.

(Optional) Download the preprocessed dataset if you didn't run Part 1

# Download the preprocessed CSV files which are the same as the output of step 1
if is_skip_step_1:
  !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
  !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv

  csvs_out_train_path = 'train_data.csv'
  csvs_out_test_path = 'test_data.csv'
  is_skipped_step_1 = True

Load the preprocessed CSVs into TRAIN and TEST datasets

def load_pose_landmarks(csv_path):
  """Loads a CSV created by MoveNetPreprocessor.

  Returns:
    X: Detected landmark coordinates and scores of shape (N, 17 * 3)
    y: Ground truth labels of shape (N, label_count)
    classes: The list of all class names found in the dataset
    dataframe: The CSV loaded as a Pandas dataframe features (X) and ground
      truth labels (y) to use later to train a pose classification model.
  """

  # Load the CSV file
  dataframe = pd.read_csv(csv_path)
  df_to_process = dataframe.copy()

  # Drop the file_name column, as you don't need it during training.
  df_to_process.drop(columns=['file_name'], inplace=True)

  # Extract the list of class names
  classes = df_to_process.pop('class_name').unique()

  # Extract the labels
  y = df_to_process.pop('class_no')

  # Convert the input features and labels into the correct format for training.
  X = df_to_process.astype('float64')
  y = keras.utils.to_categorical(y)

  return X, y, classes, dataframe

Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%)

# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)

# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
                                                  test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)
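
As a quick, optional sanity check on the resulting splits (exact row counts will vary with how many images MoveNet skipped during preprocessing):

# Optional: confirm the shapes of the splits before training.
print('Train:     ', X_train.shape, y_train.shape)
print('Validation:', X_val.shape, y_val.shape)
print('Test:      ', X_test.shape, y_test.shape)
print('Classes:   ', list(class_names))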

Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification

Next, convert the landmark coordinates to a feature vector by:

  1. Moving the pose center to the origin.
  2. Scaling the pose so that the pose size becomes 1.
  3. Flattening these coordinates into a feature vector.

Then use this feature vector to train a neural-network-based pose classifier.

def get_center_point(landmarks, left_bodypart, right_bodypart):
  """Calculates the center point of the two given landmarks."""

  left = tf.gather(landmarks, left_bodypart.value, axis=1)
  right = tf.gather(landmarks, right_bodypart.value, axis=1)
  center = left * 0.5 + right * 0.5
  return center


def get_pose_size(landmarks, torso_size_multiplier=2.5):
  """Calculates pose size.

  It is the maximum of two values:
    * Torso size multiplied by `torso_size_multiplier`
    * Maximum distance from pose center to any pose landmark
  """
  # Hips center
  hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)

  # Shoulders center
  shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER,
                                      BodyPart.RIGHT_SHOULDER)

  # Torso size as the minimum body size
  torso_size = tf.linalg.norm(shoulders_center - hips_center)

  # Pose center
  pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                     BodyPart.RIGHT_HIP)
  pose_center_new = tf.expand_dims(pose_center_new, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center_new = tf.broadcast_to(pose_center_new,
                                    [tf.size(landmarks) // (17*2), 17, 2])

  # Dist to pose center
  d = tf.gather(landmarks - pose_center_new, 0, axis=0,
                name="dist_to_pose_center")
  # Max dist to pose center
  max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))

  # Normalize scale
  pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)

  return pose_size


def normalize_pose_landmarks(landmarks):
  """Normalizes the landmarks translation by moving the pose center to (0,0) and
  scaling it to a constant pose size.
  """
  # Move landmarks so that the pose center becomes (0,0)
  pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)
  pose_center = tf.expand_dims(pose_center, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center = tf.broadcast_to(pose_center, 
                                [tf.size(landmarks) // (17*2), 17, 2])
  landmarks = landmarks - pose_center

  # Scale the landmarks to a constant pose size
  pose_size = get_pose_size(landmarks)
  landmarks /= pose_size

  return landmarks


def landmarks_to_embedding(landmarks_and_scores):
  """Converts the input landmarks into a pose embedding."""
  # Reshape the flat input into a matrix with shape=(17, 3)
  reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)

  # Normalize landmarks 2D
  landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])

  # Flatten the normalized landmark coordinates into a vector
  embedding = keras.layers.Flatten()(landmarks)

  return embedding

Define a Keras model for pose classification

Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.

# Define the model
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)

layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)

model = keras.Model(inputs, outputs)
model.summary()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 51)]         0           []                               
                                                                                                  
 reshape (Reshape)              (None, 17, 3)        0           ['input_1[0][0]']                
                                                                                                  
 tf.__operators__.getitem (Slic  (None, 17, 2)       0           ['reshape[0][0]']                
 ingOpLambda)                                                                                     
                                                                                                  
 tf.compat.v1.gather (TFOpLambd  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 a)                                                              ]                                
                                                                                                  
 tf.compat.v1.gather_1 (TFOpLam  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 bda)                                                            ]                                
                                                                                                  
 tf.math.multiply (TFOpLambda)  (None, 2)            0           ['tf.compat.v1.gather[0][0]']    
                                                                                                  
 tf.math.multiply_1 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_1[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add (TFOpLamb  (None, 2)           0           ['tf.math.multiply[0][0]',       
 da)                                                              'tf.math.multiply_1[0][0]']     
                                                                                                  
 tf.compat.v1.size (TFOpLambda)  ()                  0           ['tf.__operators__.getitem[0][0]'
                                                                 ]                                
                                                                                                  
 tf.expand_dims (TFOpLambda)    (None, 1, 2)         0           ['tf.__operators__.add[0][0]']   
                                                                                                  
 tf.compat.v1.floor_div (TFOpLa  ()                  0           ['tf.compat.v1.size[0][0]']      
 mbda)                                                                                            
                                                                                                  
 tf.broadcast_to (TFOpLambda)   (None, 17, 2)        0           ['tf.expand_dims[0][0]',         
                                                                  'tf.compat.v1.floor_div[0][0]'] 
                                                                                                  
 tf.math.subtract (TFOpLambda)  (None, 17, 2)        0           ['tf.__operators__.getitem[0][0]'
                                                                 , 'tf.broadcast_to[0][0]']       
                                                                                                  
 tf.compat.v1.gather_6 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_7 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.math.multiply_6 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_6[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_7 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_7[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add_3 (TFOpLa  (None, 2)           0           ['tf.math.multiply_6[0][0]',     
 mbda)                                                            'tf.math.multiply_7[0][0]']     
                                                                                                  
 tf.compat.v1.size_1 (TFOpLambd  ()                  0           ['tf.math.subtract[0][0]']       
 a)                                                                                               
                                                                                                  
 tf.compat.v1.gather_4 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_5 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_2 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_3 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.expand_dims_1 (TFOpLambda)  (None, 1, 2)         0           ['tf.__operators__.add_3[0][0]'] 
                                                                                                  
 tf.compat.v1.floor_div_1 (TFOp  ()                  0           ['tf.compat.v1.size_1[0][0]']    
 Lambda)                                                                                          
                                                                                                  
 tf.math.multiply_4 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_4[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_5 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_5[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_2 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_2[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_3 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_3[0][0]']  
 )                                                                                                
                                                                                                  
 tf.broadcast_to_1 (TFOpLambda)  (None, 17, 2)       0           ['tf.expand_dims_1[0][0]',       
                                                                  'tf.compat.v1.floor_div_1[0][0]'
                                                                 ]                                
                                                                                                  
 tf.__operators__.add_2 (TFOpLa  (None, 2)           0           ['tf.math.multiply_4[0][0]',     
 mbda)                                                            'tf.math.multiply_5[0][0]']     
                                                                                                  
 tf.__operators__.add_1 (TFOpLa  (None, 2)           0           ['tf.math.multiply_2[0][0]',     
 mbda)                                                            'tf.math.multiply_3[0][0]']     
                                                                                                  
 tf.math.subtract_2 (TFOpLambda  (None, 17, 2)       0           ['tf.math.subtract[0][0]',       
 )                                                                'tf.broadcast_to_1[0][0]']      
                                                                                                  
 tf.math.subtract_1 (TFOpLambda  (None, 2)           0           ['tf.__operators__.add_2[0][0]', 
 )                                                                'tf.__operators__.add_1[0][0]'] 
                                                                                                  
 tf.compat.v1.gather_8 (TFOpLam  (17, 2)             0           ['tf.math.subtract_2[0][0]']     
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.norm (TFOpLambda)  ()                  0           ['tf.math.subtract_1[0][0]']     
                                                                                                  
 tf.compat.v1.norm_1 (TFOpLambd  (2,)                0           ['tf.compat.v1.gather_8[0][0]']  
 a)                                                                                               
                                                                                                  
 tf.math.multiply_8 (TFOpLambda  ()                  0           ['tf.compat.v1.norm[0][0]']      
 )                                                                                                
                                                                                                  
 tf.math.reduce_max (TFOpLambda  ()                  0           ['tf.compat.v1.norm_1[0][0]']    
 )                                                                                                
                                                                                                  
 tf.math.maximum (TFOpLambda)   ()                   0           ['tf.math.multiply_8[0][0]',     
                                                                  'tf.math.reduce_max[0][0]']     
                                                                                                  
 tf.math.truediv (TFOpLambda)   (None, 17, 2)        0           ['tf.math.subtract[0][0]',       
                                                                  'tf.math.maximum[0][0]']        
                                                                                                  
 flatten (Flatten)              (None, 34)           0           ['tf.math.truediv[0][0]']        
                                                                                                  
 dense (Dense)                  (None, 128)          4480        ['flatten[0][0]']                
                                                                                                  
 dropout (Dropout)              (None, 128)          0           ['dense[0][0]']                  
                                                                                                  
 dense_1 (Dense)                (None, 64)           8256        ['dropout[0][0]']                
                                                                                                  
 dropout_1 (Dropout)            (None, 64)           0           ['dense_1[0][0]']                
                                                                                                  
 dense_2 (Dense)                (None, 5)            325         ['dropout_1[0][0]']              
                                                                                                  
==================================================================================================
Total params: 13,061
Trainable params: 13,061
Non-trainable params: 0
__________________________________________________________________________________________________
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
                             monitor='val_accuracy',
                             verbose=1,
                             save_best_only=True,
                             mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy', 
                                              patience=20)

# Start training
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=16,
                    validation_data=(X_val, y_val),
                    callbacks=[checkpoint, earlystopping])
Epoch 1/200
19/37 [==============>...............] - ETA: 0s - loss: 1.5703 - accuracy: 0.3684 
Epoch 00001: val_accuracy improved from -inf to 0.64706, saving model to weights.best.hdf5
37/37 [==============================] - 1s 11ms/step - loss: 1.5090 - accuracy: 0.4602 - val_loss: 1.3352 - val_accuracy: 0.6471
Epoch 2/200
20/37 [===============>..............] - ETA: 0s - loss: 1.3372 - accuracy: 0.4844
Epoch 00002: val_accuracy improved from 0.64706 to 0.67647, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 1.2375 - accuracy: 0.5190 - val_loss: 1.0193 - val_accuracy: 0.6765
Epoch 3/200
20/37 [===============>..............] - ETA: 0s - loss: 1.0596 - accuracy: 0.5469
Epoch 00003: val_accuracy improved from 0.67647 to 0.75490, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 1.0096 - accuracy: 0.5796 - val_loss: 0.8397 - val_accuracy: 0.7549
Epoch 4/200
21/37 [================>.............] - ETA: 0s - loss: 0.8922 - accuracy: 0.6220
Epoch 00004: val_accuracy improved from 0.75490 to 0.81373, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.8798 - accuracy: 0.6349 - val_loss: 0.7103 - val_accuracy: 0.8137
Epoch 5/200
20/37 [===============>..............] - ETA: 0s - loss: 0.7895 - accuracy: 0.6875
Epoch 00005: val_accuracy improved from 0.81373 to 0.82353, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.7810 - accuracy: 0.6903 - val_loss: 0.6120 - val_accuracy: 0.8235
Epoch 6/200
20/37 [===============>..............] - ETA: 0s - loss: 0.7324 - accuracy: 0.7250
Epoch 00006: val_accuracy improved from 0.82353 to 0.92157, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.7263 - accuracy: 0.7093 - val_loss: 0.5297 - val_accuracy: 0.9216
Epoch 7/200
19/37 [==============>...............] - ETA: 0s - loss: 0.6852 - accuracy: 0.7467
Epoch 00007: val_accuracy did not improve from 0.92157
37/37 [==============================] - 0s 4ms/step - loss: 0.6450 - accuracy: 0.7595 - val_loss: 0.4635 - val_accuracy: 0.8922
Epoch 8/200
20/37 [===============>..............] - ETA: 0s - loss: 0.6007 - accuracy: 0.7719
Epoch 00008: val_accuracy did not improve from 0.92157
37/37 [==============================] - 0s 4ms/step - loss: 0.5751 - accuracy: 0.7837 - val_loss: 0.4012 - val_accuracy: 0.9216
Epoch 9/200
20/37 [===============>..............] - ETA: 0s - loss: 0.5358 - accuracy: 0.8125
Epoch 00009: val_accuracy improved from 0.92157 to 0.93137, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.5272 - accuracy: 0.8097 - val_loss: 0.3547 - val_accuracy: 0.9314
Epoch 10/200
20/37 [===============>..............] - ETA: 0s - loss: 0.5200 - accuracy: 0.8094
Epoch 00010: val_accuracy improved from 0.93137 to 0.98039, saving model to weights.best.hdf5
37/37 [==============================] - 0s 5ms/step - loss: 0.5051 - accuracy: 0.8218 - val_loss: 0.3014 - val_accuracy: 0.9804
Epoch 11/200
19/37 [==============>...............] - ETA: 0s - loss: 0.4413 - accuracy: 0.8322
Epoch 00011: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.4509 - accuracy: 0.8374 - val_loss: 0.2786 - val_accuracy: 0.9706
Epoch 12/200
20/37 [===============>..............] - ETA: 0s - loss: 0.4323 - accuracy: 0.8156
Epoch 00012: val_accuracy improved from 0.98039 to 0.99020, saving model to weights.best.hdf5
37/37 [==============================] - 0s 5ms/step - loss: 0.4377 - accuracy: 0.8253 - val_loss: 0.2440 - val_accuracy: 0.9902
Epoch 13/200
20/37 [===============>..............] - ETA: 0s - loss: 0.4037 - accuracy: 0.8719
Epoch 00013: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.4187 - accuracy: 0.8668 - val_loss: 0.2109 - val_accuracy: 0.9804
Epoch 14/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3664 - accuracy: 0.8813
Epoch 00014: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3733 - accuracy: 0.8772 - val_loss: 0.2030 - val_accuracy: 0.9804
Epoch 15/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3708 - accuracy: 0.8781
Epoch 00015: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3684 - accuracy: 0.8754 - val_loss: 0.1765 - val_accuracy: 0.9902
Epoch 16/200
21/37 [================>.............] - ETA: 0s - loss: 0.3238 - accuracy: 0.9137
Epoch 00016: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3213 - accuracy: 0.9100 - val_loss: 0.1662 - val_accuracy: 0.9804
Epoch 17/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2739 - accuracy: 0.9281
Epoch 00017: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3015 - accuracy: 0.9100 - val_loss: 0.1423 - val_accuracy: 0.9804
Epoch 18/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3076 - accuracy: 0.9062
Epoch 00018: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3022 - accuracy: 0.9048 - val_loss: 0.1407 - val_accuracy: 0.9804
Epoch 19/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2719 - accuracy: 0.9250
Epoch 00019: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2697 - accuracy: 0.9291 - val_loss: 0.1191 - val_accuracy: 0.9902
Epoch 20/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2960 - accuracy: 0.9031
Epoch 00020: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2775 - accuracy: 0.9100 - val_loss: 0.1120 - val_accuracy: 0.9902
Epoch 21/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2590 - accuracy: 0.9250
Epoch 00021: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2537 - accuracy: 0.9273 - val_loss: 0.1022 - val_accuracy: 0.9902
Epoch 22/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2504 - accuracy: 0.9344
Epoch 00022: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2661 - accuracy: 0.9204 - val_loss: 0.0976 - val_accuracy: 0.9902
Epoch 23/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2384 - accuracy: 0.9156
Epoch 00023: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2182 - accuracy: 0.9308 - val_loss: 0.0944 - val_accuracy: 0.9902
Epoch 24/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2157 - accuracy: 0.9375
Epoch 00024: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2031 - accuracy: 0.9412 - val_loss: 0.0844 - val_accuracy: 0.9902
Epoch 25/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1944 - accuracy: 0.9469
Epoch 00025: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2080 - accuracy: 0.9343 - val_loss: 0.0811 - val_accuracy: 0.9902
Epoch 26/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2232 - accuracy: 0.9312
Epoch 00026: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2033 - accuracy: 0.9394 - val_loss: 0.0703 - val_accuracy: 0.9902
Epoch 27/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2120 - accuracy: 0.9281
Epoch 00027: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1845 - accuracy: 0.9481 - val_loss: 0.0708 - val_accuracy: 0.9902
Epoch 28/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2696 - accuracy: 0.9156
Epoch 00028: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2355 - accuracy: 0.9273 - val_loss: 0.0679 - val_accuracy: 0.9902
Epoch 29/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1794 - accuracy: 0.9531
Epoch 00029: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1938 - accuracy: 0.9498 - val_loss: 0.0623 - val_accuracy: 0.9902
Epoch 30/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1831 - accuracy: 0.9406
Epoch 00030: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1758 - accuracy: 0.9498 - val_loss: 0.0599 - val_accuracy: 0.9902
Epoch 31/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1967 - accuracy: 0.9375
Epoch 00031: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1724 - accuracy: 0.9516 - val_loss: 0.0565 - val_accuracy: 0.9902
Epoch 32/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1868 - accuracy: 0.9219
Epoch 00032: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1676 - accuracy: 0.9360 - val_loss: 0.0503 - val_accuracy: 0.9902
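
Because the ModelCheckpoint callback saved the weights with the highest validation accuracy to weights.best.hdf5, training can end on a weaker epoch than the best one. As an optional minimal sketch (assuming the checkpoint file above was written), you can reload that checkpoint before evaluating:

# Restore the best checkpoint saved by ModelCheckpoint so evaluation uses
# the highest-validation-accuracy weights instead of the final-epoch weights.
model.load_weights(checkpoint_path)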
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()

(Plot: training vs. validation accuracy per epoch)

# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
14/14 [==============================] - 0s 2ms/step - loss: 0.0612 - accuracy: 0.9976

Plot the confusion matrix to better understand the model's performance.

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
  """Plots the confusion matrix."""
  if normalize:
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print("Normalized confusion matrix")
  else:
    print('Confusion matrix, without normalization')

  plt.imshow(cm, interpolation='nearest', cmap=cmap)
  plt.title(title)
  plt.colorbar()
  tick_marks = np.arange(len(classes))
  plt.xticks(tick_marks, classes, rotation=55)
  plt.yticks(tick_marks, classes)
  fmt = '.2f' if normalize else 'd'
  thresh = cm.max() / 2.
  for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, format(cm[i, j], fmt),
              horizontalalignment="center",
              color="white" if cm[i, j] > thresh else "black")

  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.tight_layout()

# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)

# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]

# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
                      class_names,
                      title ='Confusion Matrix of Pose Classification Model')

# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
                                                          y_pred_label))
Confusion matrix, without normalization

Classification Report:
               precision    recall  f1-score   support

       chair       1.00      1.00      1.00        84
       cobra       0.99      1.00      0.99        93
         dog       1.00      1.00      1.00        84
        tree       1.00      1.00      1.00        96
     warrior       1.00      0.99      0.99        68

    accuracy                           1.00       425
   macro avg       1.00      1.00      1.00       425
weighted avg       1.00      1.00      1.00       425

(Figure: confusion matrix of the pose classification model)

(Optional) Inspect incorrect predictions

You can look at the poses from the TEST dataset that were predicted incorrectly to see whether the model's accuracy can be improved.

# This cell requires the image files downloaded in step 1, so it can't run
# if step 1 was skipped.
if is_skip_step_1:
  raise RuntimeError('You must have run step 1 to run this cell.')

IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30

# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test)) \
                if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
  false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]

# Plot the incorrectly predicted images
row_count = (len(false_predict) + IMAGE_PER_ROW - 1) // IMAGE_PER_ROW
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
  ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
  image_path = os.path.join(images_out_test_folder,
                            df_test.iloc[id_in_df]['file_name'])

  image = cv2.imread(image_path)
  plt.title("Predict: %s; Actual: %s"
            % (y_pred_label[id_in_df], y_true_label[id_in_df]))
  plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()

(Figure: incorrectly predicted test images, showing predicted vs. actual labels)

Part 3: Convert the pose classification model to TensorFlow Lite

You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers, and IoT devices. When converting the model, you'll apply dynamic range quantization, which reduces the size of the TensorFlow Lite pose classification model by roughly a factor of 4 with insignificant accuracy loss.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print('Model size: %dKB' % (len(tflite_model) / 1024))

with open('pose_classifier.tflite', 'wb') as f:
  f.write(tflite_model)
2021-12-21 12:12:00.560331: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/tmpr1ewa_xj/assets
2021-12-21 12:12:02.324896: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:363] Ignored output_format.
2021-12-21 12:12:02.324941: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:366] Ignored drop_control_dependency.
WARNING:absl:Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded
Model size: 26KB
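
To see what dynamic range quantization buys you, here is an optional minimal sketch (reusing the model and tflite_model variables above) that converts the same model without optimizations and compares the sizes; the exact numbers will vary with your model:

# Convert the same Keras model without dynamic range quantization so the
# file sizes can be compared. (Optional; sizes depend on the model.)
baseline_converter = tf.lite.TFLiteConverter.from_keras_model(model)
baseline_tflite_model = baseline_converter.convert()

print('Unquantized size: %dKB' % (len(baseline_tflite_model) / 1024))
print('Quantized size:   %dKB' % (len(tflite_model) / 1024))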

Next, you'll write the label file, which contains the mapping from class indices to human-readable class names.

with open('pose_labels.txt', 'w') as f:
  f.write('\n'.join(class_names))

Since you used quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.

def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and return its accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on all given poses.
  y_pred = []
  for i in range(len(y_true)):
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = X[i: i + 1].astype('float32')
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the class with highest
    # probability.
    output = interpreter.tensor(output_index)
    predicted_label = np.argmax(output()[0])
    y_pred.append(predicted_label)

  # Compare prediction results with ground truth labels to calculate accuracy.
  # Passing num_classes keeps the one-hot shape aligned with y_true even if
  # some class never appears among the predictions.
  y_pred = keras.utils.to_categorical(y_pred, num_classes=y_true.shape[1])
  return accuracy_score(y_true, y_pred)

# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
      evaluate_model(classifier_interpreter, X_test, y_test))
Accuracy of TFLite model: 1.0

Now you can download the TFLite model ( pose_classifier.tflite ) and the label file ( pose_labels.txt ) to classify custom poses. See the Android and Python/Raspberry Pi sample apps for an end-to-end example of how to use the TFLite pose classification model.
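
As a rough illustration of how such an app consumes these two files, here is a minimal sketch of single-pose inference with the TFLite Interpreter; the landmarks array below is a placeholder, and in a real app it would hold the MoveNet keypoint vector for one image, preprocessed the same way as the training data:

import numpy as np
import tensorflow as tf

# Load the label file (one class name per line).
with open('pose_labels.txt') as f:
  labels = f.read().splitlines()

# Load the TFLite pose classifier.
interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_index = interpreter.get_output_details()[0]['index']

# Placeholder input matching the classifier's expected input shape.
landmarks = np.zeros(input_details['shape'], dtype=np.float32)

interpreter.set_tensor(input_details['index'], landmarks)
interpreter.invoke()
probabilities = interpreter.tensor(output_index)()[0]
print('Predicted pose:', labels[int(np.argmax(probabilities))])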

zip pose_classifier.zip pose_labels.txt pose_classifier.tflite
  adding: pose_labels.txt (stored 0%)
  adding: pose_classifier.tflite (deflated 35%)
# Download the zip archive if running on Colab.
try:
  from google.colab import files
  files.download('pose_classifier.zip')
except ImportError:
  # Not running on Colab, so skip the browser download.
  pass