Distributed training with Keras


Overview

The tf.distribute.Strategy API provides an abstraction for distributing your training across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.

This tutorial uses tf.distribute.MirroredStrategy, which does in-graph replication with synchronous training on many GPUs on one machine. Essentially, it copies all of the model's variables to each processor. Then, it uses all-reduce to combine the gradients from all processors, and applies the combined value to all copies of the model.
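
The following is a minimal, purely illustrative sketch of that synchronous all-reduce idea in plain Python. It is not the actual MirroredStrategy implementation, and the gradient values, learning rate, and two-GPU setup are made-up assumptions: each replica computes gradients on its share of the batch, the gradients are summed across replicas, and every replica applies the same combined update, so the variable copies stay in sync.

# Illustrative sketch only -- not the actual MirroredStrategy internals.
grads_gpu0 = [0.1, -0.2]   # gradients computed on replica 0 (assumed values)
grads_gpu1 = [0.3,  0.0]   # gradients computed on replica 1 (assumed values)

# all-reduce: every replica ends up with the same summed gradients
reduced = [g0 + g1 for g0, g1 in zip(grads_gpu0, grads_gpu1)]

# each replica applies the identical update, so the variable copies stay identical
lr = 1e-3
vars_gpu0 = [1.0, 1.0]
vars_gpu1 = [1.0, 1.0]
vars_gpu0 = [v - lr * g for v, g in zip(vars_gpu0, reduced)]
vars_gpu1 = [v - lr * g for v, g in zip(vars_gpu1, reduced)]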

MirroredStrategy is one of several distribution strategies available in TensorFlow. You can read about more strategies in the Distribution strategy guide.

Keras API

This example uses the tf.keras API to build the model and the training loop. For custom training loops, see the tf.distribute.Strategy with training loops tutorial.

Import dependencies

# Import TensorFlow and TensorFlow Datasets

import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()

import os
print(tf.__version__)
2.9.1

Download the dataset

Download the MNIST dataset and load it from TensorFlow Datasets. This returns a dataset in tf.data format.

Setting with_info to True includes the metadata for the entire dataset, which is being saved here to info. Among other things, this metadata object includes the number of train and test examples.

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)

mnist_train, mnist_test = datasets['train'], datasets['test']

Define the distribution strategy

Create a MirroredStrategy object. This will handle distribution, and provides a context manager (tf.distribute.MirroredStrategy.scope) to build your model inside.

strategy = tf.distribute.MirroredStrategy()
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
Number of devices: 4

Set up the input pipeline

When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.

# You can also do info.splits.total_num_examples to get the total
# number of examples in the dataset.

num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

BUFFER_SIZE = 10000

BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
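
As an aside, one common heuristic for tuning the learning rate accordingly is to scale a single-replica base rate linearly with the number of replicas. The sketch below is only an illustration under that assumption (BASE_LEARNING_RATE is a made-up value); this tutorial itself keeps the fixed decay() schedule defined later instead.

# Optional, illustrative only: linear learning-rate scaling with the replica count.
# Whether this helps depends on the model; it is not used elsewhere in this tutorial.
BASE_LEARNING_RATE = 1e-3  # assumed rate tuned for a single replica
scaled_learning_rate = BASE_LEARNING_RATE * strategy.num_replicas_in_sync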

Pixel values, which are 0-255, have to be normalized to the 0-1 range. Define this scale in a function.

def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255

  return image, label

Apply this function to the training and test data, shuffle the training data, and batch it for training. Notice we are also keeping an in-memory cache of the training data to improve performance.

train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)

Create the model

Create and compile the Keras model in the context of strategy.scope.

with strategy.scope():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10)
  ])

  model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                optimizer=tf.keras.optimizers.Adam(),
                metrics=['accuracy'])
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).

Define the callbacks

The callbacks used here are:

  • TensorBoard: This callback writes a log for TensorBoard, which allows you to visualize the graphs.
  • Model Checkpoint: This callback saves the model after every epoch.
  • Learning Rate Scheduler: Using this callback, you can schedule the learning rate to change after every epoch/batch.

For illustrative purposes, add a print callback to display the learning rate in the notebook.

# Define the checkpoint directory to store the checkpoints

checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
  if epoch < 3:
    return 1e-3
  elif epoch >= 3 and epoch < 7:
    return 1e-4
  else:
    return 1e-5
# Callback for printing the learning rate at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
                                                      model.optimizer.lr.numpy()))
callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='./logs'),
    tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(decay),
    PrintLR()
]

Train and evaluate

In this section, train the model in the usual way, by calling fit on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether or not you are distributing the training.

model.fit(train_dataset, epochs=12, callbacks=callbacks)
2022-06-03 20:04:20.188688: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:547] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
Epoch 1/12
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
1/235 [..............................] - ETA: 31:58 - loss: 2.3166 - accuracy: 0.0273WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0078s vs `on_train_batch_end` time: 0.0109s). Check your callbacks.
WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0078s vs `on_train_batch_end` time: 0.0109s). Check your callbacks.
231/235 [============================>.] - ETA: 0s - loss: 0.3427 - accuracy: 0.9071
Learning rate for epoch 1 is 0.0010000000474974513
235/235 [==============================] - 10s 9ms/step - loss: 0.3395 - accuracy: 0.9079 - lr: 0.0010
Epoch 2/12
230/235 [============================>.] - ETA: 0s - loss: 0.1035 - accuracy: 0.9701
Learning rate for epoch 2 is 0.0010000000474974513
235/235 [==============================] - 2s 7ms/step - loss: 0.1029 - accuracy: 0.9703 - lr: 0.0010
Epoch 3/12
230/235 [============================>.] - ETA: 0s - loss: 0.0687 - accuracy: 0.9806
Learning rate for epoch 3 is 0.0010000000474974513
235/235 [==============================] - 2s 7ms/step - loss: 0.0684 - accuracy: 0.9806 - lr: 0.0010
Epoch 4/12
232/235 [============================>.] - ETA: 0s - loss: 0.0486 - accuracy: 0.9865
Learning rate for epoch 4 is 9.999999747378752e-05
235/235 [==============================] - 2s 7ms/step - loss: 0.0484 - accuracy: 0.9866 - lr: 1.0000e-04
Epoch 5/12
232/235 [============================>.] - ETA: 0s - loss: 0.0456 - accuracy: 0.9876
Learning rate for epoch 5 is 9.999999747378752e-05
235/235 [==============================] - 2s 7ms/step - loss: 0.0456 - accuracy: 0.9876 - lr: 1.0000e-04
Epoch 6/12
232/235 [============================>.] - ETA: 0s - loss: 0.0441 - accuracy: 0.9879
Learning rate for epoch 6 is 9.999999747378752e-05
235/235 [==============================] - 2s 7ms/step - loss: 0.0440 - accuracy: 0.9879 - lr: 1.0000e-04
Epoch 7/12
232/235 [============================>.] - ETA: 0s - loss: 0.0428 - accuracy: 0.9882
Learning rate for epoch 7 is 9.999999747378752e-05
235/235 [==============================] - 2s 7ms/step - loss: 0.0428 - accuracy: 0.9883 - lr: 1.0000e-04
Epoch 8/12
230/235 [============================>.] - ETA: 0s - loss: 0.0406 - accuracy: 0.9892
Learning rate for epoch 8 is 9.999999747378752e-06
235/235 [==============================] - 2s 7ms/step - loss: 0.0405 - accuracy: 0.9893 - lr: 1.0000e-05
Epoch 9/12
232/235 [============================>.] - ETA: 0s - loss: 0.0403 - accuracy: 0.9895
Learning rate for epoch 9 is 9.999999747378752e-06
235/235 [==============================] - 2s 7ms/step - loss: 0.0402 - accuracy: 0.9895 - lr: 1.0000e-05
Epoch 10/12
230/235 [============================>.] - ETA: 0s - loss: 0.0403 - accuracy: 0.9894
Learning rate for epoch 10 is 9.999999747378752e-06
235/235 [==============================] - 2s 7ms/step - loss: 0.0400 - accuracy: 0.9895 - lr: 1.0000e-05
Epoch 11/12
232/235 [============================>.] - ETA: 0s - loss: 0.0401 - accuracy: 0.9896
Learning rate for epoch 11 is 9.999999747378752e-06
235/235 [==============================] - 2s 7ms/step - loss: 0.0399 - accuracy: 0.9896 - lr: 1.0000e-05
Epoch 12/12
231/235 [============================>.] - ETA: 0s - loss: 0.0397 - accuracy: 0.9895
Learning rate for epoch 12 is 9.999999747378752e-06
235/235 [==============================] - 2s 7ms/step - loss: 0.0397 - accuracy: 0.9895 - lr: 1.0000e-05
<keras.callbacks.History at 0x7f644fd333d0>

As you can see below, the checkpoints are being saved.

# Check the checkpoint directory
ls {checkpoint_dir}
checkpoint           ckpt_4.data-00000-of-00001
ckpt_1.data-00000-of-00001   ckpt_4.index
ckpt_1.index             ckpt_5.data-00000-of-00001
ckpt_10.data-00000-of-00001  ckpt_5.index
ckpt_10.index            ckpt_6.data-00000-of-00001
ckpt_11.data-00000-of-00001  ckpt_6.index
ckpt_11.index            ckpt_7.data-00000-of-00001
ckpt_12.data-00000-of-00001  ckpt_7.index
ckpt_12.index            ckpt_8.data-00000-of-00001
ckpt_2.data-00000-of-00001   ckpt_8.index
ckpt_2.index             ckpt_9.data-00000-of-00001
ckpt_3.data-00000-of-00001   ckpt_9.index
ckpt_3.index

To see how the model performs, load the latest checkpoint and call evaluate on the test data.

Call evaluate as before, using the appropriate dataset.

model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))

eval_loss, eval_acc = model.evaluate(eval_dataset)

print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
2022-06-03 20:04:52.304110: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:547] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
40/40 [==============================] - 3s 6ms/step - loss: 0.0530 - accuracy: 0.9831
Eval loss: 0.05301585793495178, Eval Accuracy: 0.9830999970436096

To see the output, you can download and view the TensorBoard logs at the terminal.

$ tensorboard --logdir=path/to/log-directory
ls -sh ./logs
total 4.0K
4.0K train

Export to SavedModel

Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without strategy.scope.

path = 'saved_model/'
model.save(path, save_format='tf')
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op while saving (showing 1 of 1). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: saved_model/assets
INFO:tensorflow:Assets written to: saved_model/assets

Load the model without strategy.scope.

unreplicated_model = tf.keras.models.load_model(path)

unreplicated_model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy'])

eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)

print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
40/40 [==============================] - 0s 3ms/step - loss: 0.0530 - accuracy: 0.9831
Eval loss: 0.053015850484371185, Eval Accuracy: 0.9830999970436096

Load the model with strategy.scope.

with strategy.scope():
  replicated_model = tf.keras.models.load_model(path)
  replicated_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                           optimizer=tf.keras.optimizers.Adam(),
                           metrics=['accuracy'])

  eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
  print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
2022-06-03 20:04:57.343255: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:547] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
40/40 [==============================] - 4s 5ms/step - loss: 0.0530 - accuracy: 0.9831
Eval loss: 0.05301585793495178, Eval Accuracy: 0.9830999970436096

Examples and tutorials

Here are some examples that use a distribution strategy with Keras fit/compile:

  1. Transformer example trained using tf.distribute.MirroredStrategy.
  2. NCF example trained using tf.distribute.MirroredStrategy.

More examples are listed in the Distribution strategy guide.

Next steps

Note: tf.distribute.Strategy is actively under development, and more examples and tutorials will be added in the near future. Please give it a try; feedback via issues on GitHub is welcome.