This guide trains a neural network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving rather than on modeling and training in TensorFlow; for a complete example that focuses on modeling and training, see the Basic Classification example.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, often used as the "Hello, World" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow; just import and load the data.
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
train_images.shape: (60000, 28, 28, 1), of float64
test_images.shape: (10000, 28, 28, 1), of float64
Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, name='Dense')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
2021-12-04 10:29:34.128871: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcusolver.so.10'; dlerror: libcusolver.so.10: cannot open shared object file: No such file or directory
2021-12-04 10:29:34.129907: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Conv1 (Conv2D)               (None, 13, 13, 8)         80
_________________________________________________________________
flatten (Flatten)            (None, 1352)              0
_________________________________________________________________
Dense (Dense)                (None, 10)                13530
=================================================================
Total params: 13,610
Trainable params: 13,610
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.7204 - sparse_categorical_accuracy: 0.7549
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3997 - sparse_categorical_accuracy: 0.8611
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3580 - sparse_categorical_accuracy: 0.8754
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3399 - sparse_categorical_accuracy: 0.8780
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3232 - sparse_categorical_accuracy: 0.8849
313/313 [==============================] - 0s 1ms/step - loss: 0.3586 - sparse_categorical_accuracy: 0.8738
Test accuracy: 0.8737999796867371
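Note that the final Dense layer has no activation, so the model outputs raw logits (which is why we compiled with from_logits=True). As a quick local sanity check before serving, here is a minimal sketch that turns those logits into class probabilities; it only assumes the model and data defined above:

# The model emits logits; softmax converts them to probabilities.
logits = model.predict(test_images[:1])
probs = tf.nn.softmax(logits)
print('Predicted class: {}'.format(class_names[int(np.argmax(probs))]))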
Save your model
To load our trained model into TensorFlow Serving, we first need to save it in SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable", we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
export_path = /tmp/1

2021-12-04 10:29:53.392905: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/1/assets

Saved model:
total 88
drwxr-xr-x 2 kbuilder kbuilder  4096 Dec  4 10:29 assets
-rw-rw-r-- 1 kbuilder kbuilder 78055 Dec  4 10:29 saved_model.pb
drwxr-xr-x 2 kbuilder kbuilder  4096 Dec  4 10:29 variables
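Because each version lives in its own numbered sub-directory, exporting an update only requires bumping the version number. Here is a minimal sketch, left commented out so this guide keeps serving version 1, of how a hypothetical second version would sit alongside /tmp/1:

# Sketch only: saving the model again under version 2 would create /tmp/2,
# and TensorFlow Serving would begin serving it as the latest version.
# export_path_v2 = os.path.join(MODEL_DIR, str(2))
# tf.keras.models.save_model(model, export_path_v2)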
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
!saved_model_cli show --dir {export_path} --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['Conv1_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 28, 28, 1)
        name: serving_default_Conv1_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['Dense'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 10)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't, this would be important information. It doesn't tell us everything, like the fact that this is grayscale image data for example, but it's a great start.
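If you'd rather stay in Python, the same serving signature can be inspected programmatically with the SavedModel API. A minimal sketch, assuming the export_path from above:

# Load the SavedModel back and look at the default serving signature.
loaded = tf.saved_model.load(export_path)
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)
print(infer.structured_outputs)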
Serve your model with TensorFlow Serving
Add the TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
SUDO_IF_NEEDED = 'sudo'
else:
SUDO_IF_NEEDED = ''
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2943  100  2943    0     0  15571      0 --:--:-- --:--:-- --:--:-- 15571
OK
Hit:1 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64 InRelease
Get:5 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64 InRelease [1481 B]
Get:6 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64 InRelease [1474 B]
Ign:7 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Get:8 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3012 B]
Hit:9 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release
Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:11 https://packages.cloud.google.com/apt eip-cloud-bionic InRelease [5419 B]
Get:12 http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease [5483 B]
Hit:13 http://archive.canonical.com/ubuntu bionic InRelease
Err:11 https://packages.cloud.google.com/apt eip-cloud-bionic InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
Get:15 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [339 B]
Err:12 http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
Get:16 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [348 B]
Fetched 106 kB in 1s (103 kB/s)
119 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.cloud.google.com/apt eip-cloud-bionic InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Failed to fetch https://packages.cloud.google.com/apt/dists/eip-cloud-bionic/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-cloud-logging-wheezy/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Some index files failed to download. They have been ignored, or old ones used instead.
Install TensorFlow Serving
This is all you need - one command line!
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
The following packages were automatically installed and are no longer required:
  linux-gcp-5.4-headers-5.4.0-1040 linux-gcp-5.4-headers-5.4.0-1043
  linux-gcp-5.4-headers-5.4.0-1044 linux-gcp-5.4-headers-5.4.0-1049
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  tensorflow-model-server
0 upgraded, 1 newly installed, 0 to remove and 119 not upgraded.
Need to get 335 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.7.0 [335 MB]
Fetched 335 MB in 7s (45.2 MB/s)
Selecting previously unselected package tensorflow-model-server.
(Reading database ... 264341 files and directories currently installed.)
Preparing to unpack .../tensorflow-model-server_2.7.0_all.deb ...
Unpacking tensorflow-model-server (2.7.0) ...
Setting up tensorflow-model-server (2.7.0) ...
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads, we can start making inference requests using REST. There are some important parameters:
- rest_api_port: The port that you'll use for REST requests.
- model_name: You'll use this in the URL of REST requests. It can be anything.
- model_base_path: This is the path to the directory where you've saved your model.
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=fashion_model \
  --model_base_path="${MODEL_DIR}" >server.log 2>&1

!tail server.log
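Before making predict requests, it can help to confirm that the server actually loaded the model. TensorFlow Serving's REST API includes a model status endpoint; a minimal sketch of querying it, assuming the server is up on port 8501 (the requests package is installed in the next section if it's missing):

import requests
# GET /v1/models/<model_name> reports the state of each loaded version.
status = requests.get('http://localhost:8501/v1/models/fashion_model')
print(status.json())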
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
def show(idx, title):
plt.figure()
plt.imshow(test_images[idx].reshape(28,28))
plt.axis('off')
plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
Data: {"signature_name": "serving_default", "instances": ... [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]]]}
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
# docs_infra: no_execute
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
# docs_infra: no_execute
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
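As a final sanity check, the predictions coming back over REST should match what the in-process model computes on the same images, up to JSON round-tripping. A minimal sketch of that comparison, assuming the predictions variable from the request above (both return logits):

# Compare the served predictions with local inference on the same batch.
local_logits = model.predict(test_images[0:3])
print(np.allclose(local_logits, np.array(predictions), atol=1e-4))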