This example is based on the Keras Tuner CIFAR-10 sample and demonstrates how to run hyperparameter (HP) tuning jobs at scale using TensorFlow Cloud and Google Cloud Platform.
Import required modules
import datetime
import uuid
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import sys
import subprocess
from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split
! pip install -q tensorflow-cloud
import tensorflow_cloud as tfc
tf.version.VERSION
'2.6.0'
Project Configurations
Set the project parameters. For Google Cloud-specific parameters, refer to the Google Cloud Project Setup Instructions.
# Set Google Cloud Specific parameters
# TODO: Please set GCP_PROJECT_ID to your own Google Cloud project ID.
GCP_PROJECT_ID = 'YOUR_PROJECT_ID'
# TODO: Change the Service Account Name to your own Service Account
SERVICE_ACCOUNT_NAME = 'YOUR_SERVICE_ACCOUNT_NAME'
SERVICE_ACCOUNT = f'{SERVICE_ACCOUNT_NAME}@{GCP_PROJECT_ID}.iam.gserviceaccount.com'
# TODO: set GCS_BUCKET to your own Google Cloud Storage (GCS) bucket.
GCS_BUCKET = 'YOUR_GCS_BUCKET_NAME'
# DO NOT CHANGE: Currently only the 'us-central1' region is supported.
REGION = 'us-central1'
# Set Tuning Specific parameters
# OPTIONAL: You can change the job name to any string.
JOB_NAME = 'cifar10'
# OPTIONAL: Set Number of concurrent tuning jobs that you would like to run.
NUM_JOBS = 5
# TODO: Set the study number for this run. STUDY_ID can be any unique string.
# Reusing the same STUDY_ID will cause the tuner to continue tuning the
# same study parameters. This can be used to continue a terminated job
# or to load stats from a previous study.
STUDY_NUMBER = '00001'
STUDY_ID = f'{GCP_PROJECT_ID}_{JOB_NAME}_{STUDY_NUMBER}'
# Set the location where training logs and checkpoints will be stored
GCS_BASE_PATH = f'gs://{GCS_BUCKET}/{JOB_NAME}/{STUDY_ID}'
TENSORBOARD_LOGS_DIR = os.path.join(GCS_BASE_PATH,"logs")
Authenticating the notebook to use your Google Cloud Project
For Kaggle notebooks, click on "Add-ons" -> "Google Cloud SDK" before running the cell below.
# Using tfc.remote() to ensure this code only runs locally in the notebook
# and is skipped during remote execution.
if not tfc.remote():

    # Authentication for Kaggle notebooks
    if "kaggle_secrets" in sys.modules:
        from kaggle_secrets import UserSecretsClient
        UserSecretsClient().set_gcloud_credentials(project=GCP_PROJECT_ID)

    # Authentication for Colab notebooks
    if "google.colab" in sys.modules:
        from google.colab import auth
        auth.authenticate_user()
        os.environ["GOOGLE_CLOUD_PROJECT"] = GCP_PROJECT_ID
Load and prepare data
Read the raw data; the CIFAR-10 loader already provides separate train and test sets.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Setting input specific parameters
# The model expects input of dimensions (INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3)
INPUT_IMG_SIZE = 32
NUM_CLASSES = 10
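As a quick optional check, the sketch below confirms that the loaded arrays match the input dimensions the model expects. It only uses the arrays and constants defined above.
# Optional sanity check: CIFAR-10 images should be 32x32 RGB,
# with one label per image.
assert x_train.shape[1:] == (INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3)
assert len(x_train) == len(y_train)
print('Train:', x_train.shape, 'Test:', x_test.shape)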
Define the model architecture and hyperparameters
In this section we define our tuning parameters using Keras Tuner HyperParameters and a model-building function. The model-building function takes an hp argument, from which you can sample hyperparameters, such as hp.Int('units', min_value=32, max_value=512, step=32) (an integer from a given range).
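For illustration only, a minimal model-building function that samples a hyperparameter inline could look like the sketch below; the name 'units' and the architecture are hypothetical. The tutorial itself pre-defines the full search space in an HPS object and reads values with hp.get(), as shown in the next cell.
# Illustrative sketch of inline hyperparameter sampling; not used by the tuner below.
def sketch_build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3)),
        tf.keras.layers.Dense(
            hp.Int('units', min_value=32, max_value=512, step=32),
            activation='relu'),
        tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model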
import kerastuner
from tensorflow.keras import layers
# Configure the search space
HPS = kerastuner.engine.hyperparameters.HyperParameters()
HPS.Int('conv_blocks', 3, 5, default=3)
for i in range(5):
    HPS.Int('filters_' + str(i), 32, 256, step=32)
    HPS.Choice('pooling_' + str(i), ['avg', 'max'])
HPS.Int('hidden_size', 30, 100, step=10, default=50)
HPS.Float('dropout', 0, 0.5, step=0.1, default=0.5)
HPS.Float('learning_rate', 1e-4, 1e-2, sampling='log')
def build_model(hp):
    inputs = tf.keras.Input(shape=(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3))
    x = inputs
    for i in range(hp.get('conv_blocks')):
        filters = hp.get('filters_' + str(i))
        for _ in range(2):
            x = layers.Conv2D(
                filters, kernel_size=(3, 3), padding='same')(x)
            x = layers.BatchNormalization()(x)
            x = layers.ReLU()(x)
        if hp.get('pooling_' + str(i)) == 'max':
            x = layers.MaxPool2D()(x)
        else:
            x = layers.AvgPool2D()(x)
    x = layers.GlobalAvgPool2D()(x)
    x = layers.Dense(hp.get('hidden_size'),
                     activation='relu')(x)
    x = layers.Dropout(hp.get('dropout'))(x)
    outputs = layers.Dense(NUM_CLASSES, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.get('learning_rate')),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model
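Before handing build_model to the tuner, you can optionally smoke-test it locally using the default values defined in HPS. This is a minimal sketch that only checks that the model builds and compiles.
# Optional local smoke test: build the model once with the defaults in HPS.
if not tfc.remote():
    smoke_test_model = build_model(HPS)
    smoke_test_model.summary()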
Configure a CloudTuner
In this section we configure the cloud tuner for both remote and local execution. The main difference between the two is the distribution strategy.
from tensorflow_cloud import CloudTuner
distribution_strategy = None
if not tfc.remote():
    # Use MirroredStrategy when running locally in the notebook. No strategy is
    # set here for remote execution, where distribution is configured by
    # tfc.run_cloudtuner(distribution_strategy='auto') below.
    distribution_strategy = tf.distribute.MirroredStrategy()
tuner = CloudTuner(
    build_model,
    project_id=GCP_PROJECT_ID,
    project_name=JOB_NAME,
    region=REGION,
    objective='accuracy',
    hyperparameters=HPS,
    max_trials=100,
    directory=GCS_BASE_PATH,
    study_id=STUDY_ID,
    overwrite=True,
    distribution_strategy=distribution_strategy)

# Configure TensorBoard logs
callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir=TENSORBOARD_LOGS_DIR)]
# Configure the search to run remotely; you can run the tuner locally first to
# validate that it works.
if tfc.remote():
    tuner.search(x=x_train, y=y_train, epochs=30, validation_split=0.2, callbacks=callbacks)

# You can uncomment the code below to run tuner.search() locally to validate
# that everything works before submitting the job to the cloud. Stop the job
# manually after one epoch.
# else:
#     tuner.search(x=x_train, y=y_train, epochs=1, validation_split=0.2, callbacks=callbacks)
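Optionally, you can also inspect the configured search space locally before submitting the remote jobs. This sketch uses the standard Keras Tuner summary method and is not part of the original sample.
# Optional: print the hyperparameter search space defined above.
if not tfc.remote():
    tuner.search_space_summary()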
Start the remote training
This step will prepare the code from this notebook for remote execution and start NUM_JOBS parallel runs remotely to train the model. Once the jobs are submitted, you can go to the next step to monitor the jobs' progress via TensorBoard.
# If you are using a custom image, you can install additional modules via a
# requirements.txt file.
with open('requirements.txt', 'w') as f:
    f.write('pandas==1.1.5\n')
    f.write('numpy==1.18.5\n')
    f.write('tensorflow-cloud\n')
    f.write('keras-tuner\n')
# Optional: Some recommended base images. If you provide none, the system will
# choose one for you.
TF_GPU_IMAGE = "tensorflow/tensorflow:latest-gpu"
TF_CPU_IMAGE = "tensorflow/tensorflow:latest"
tfc.run_cloudtuner(
    distribution_strategy='auto',
    requirements_txt='requirements.txt',
    docker_config=tfc.DockerConfig(
        parent_image=TF_GPU_IMAGE,
        image_build_bucket=GCS_BUCKET
    ),
    chief_config=tfc.COMMON_MACHINE_CONFIGS['K80_4X'],
    job_labels={'job': JOB_NAME},
    service_account=SERVICE_ACCOUNT,
    num_jobs=NUM_JOBS
)
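After the jobs have been submitted, you can optionally list them from the notebook. This is a hedged sketch: it assumes the jobs were submitted as AI Platform Training jobs in your project and that the gcloud CLI is available and authenticated.
# Optional: list recently submitted training jobs for this project.
# Assumes the gcloud CLI is installed and authenticated in this environment.
if not tfc.remote():
    ! gcloud ai-platform jobs list --project $GCP_PROJECT_ID --limit 5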
Training Results
Reconnect your Colab instance
Most remote training jobs are long-running. If you are using Colab, it may time out before the training results are available. In that case, rerun the following sections to reconnect and configure your Colab instance to access the training results. Run the following sections in order:
- Import required modules
- Project Configurations
- Authenticating the notebook to use your Google Cloud Project
Load TensorBoard
While the training is in progress, you can use TensorBoard to view the results. Note that the results will show only after your training has started. This may take a few minutes.
%load_ext tensorboard
%tensorboard --logdir $TENSORBOARD_LOGS_DIR
You can access the training assets as follows. Note that the results will show only after your tuning job has completed at least one trial. This may take a few minutes.
if not tfc.remote():
    tuner.results_summary(1)
    best_model = tuner.get_best_models(1)[0]
    best_hyperparameters = tuner.get_best_hyperparameters(1)[0]

    # References to best trial assets
    best_trial_id = tuner.oracle.get_best_trials(1)[0].trial_id
    best_trial_dir = tuner.get_trial_dir(best_trial_id)
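As a possible follow-up, you can evaluate the best model on the held-out test set and save it to your GCS bucket. This is a minimal sketch; the 'best_model' export path is a hypothetical choice, and it assumes best_model and the test split loaded earlier are available in the local session.
# Optional: evaluate the best model on the CIFAR-10 test set and save it to GCS.
# The export path below is hypothetical, not part of the original sample.
if not tfc.remote():
    test_loss, test_accuracy = best_model.evaluate(x_test, y_test, verbose=2)
    print(f'Test accuracy: {test_accuracy:.4f}')
    best_model.save(os.path.join(GCS_BASE_PATH, 'best_model'))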