This guide demonstrates how to migrate single-worker multiple-GPU workflows from TensorFlow 1 to TensorFlow 2.

To perform synchronous training across multiple GPUs on one machine:
- In TensorFlow 1, you use the tf.estimator.Estimator APIs with tf.distribute.MirroredStrategy.
- In TensorFlow 2, you can use Keras Model.fit or a custom training loop with tf.distribute.MirroredStrategy.

Learn more in the Distributed training with TensorFlow guide. A quick sketch of constructing the strategy follows this list.
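As a quick illustration, here is a minimal sketch (not from the original guide) of constructing the strategy. By default, tf.distribute.MirroredStrategy uses every GPU visible to the process; the devices argument, shown here for a hypothetical two-GPU machine, pins an explicit device list:

import tensorflow as tf

# By default, all visible GPUs become replicas for synchronous training.
strategy = tf.distribute.MirroredStrategy()
print('Number of replicas in sync:', strategy.num_replicas_in_sync)

# Optionally pin an explicit device list (hypothetical two-GPU machine).
strategy = tf.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])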
Setup

Start with imports and a simple dataset for demonstration purposes:
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# A small toy regression dataset: training and evaluation splits.
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [[0.8], [0.9], [1.]]
TensorFlow 1: Single-worker distributed training with tf.estimator.Estimator
This example demonstrates the TensorFlow 1 canonical workflow of single-worker multiple-GPU training. You need to set the distribution strategy (tf.distribute.MirroredStrategy) through the config parameter of tf.estimator.Estimator:
def _input_fn():
  return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)

def _eval_input_fn():
  return tf1.data.Dataset.from_tensor_slices(
      (eval_features, eval_labels)).batch(1)

def _model_fn(features, labels, mode):
  # A single dense layer trained with mean squared error.
  logits = tf1.layers.Dense(1)(features)
  loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
  optimizer = tf1.train.AdagradOptimizer(0.05)
  train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
  return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Pass the strategy for both training and evaluation through RunConfig.
strategy = tf1.distribute.MirroredStrategy()
config = tf1.estimator.RunConfig(
    train_distribute=strategy, eval_distribute=strategy)
estimator = tf1.estimator.Estimator(model_fn=_model_fn, config=config)

train_spec = tf1.estimator.TrainSpec(input_fn=_input_fn)
eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)
tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp5g_f_ufk
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp5g_f_ufk', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.python.distribute.mirrored_strategy.MirroredStrategyV1 object at 0x7f6853562450>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.python.distribute.mirrored_strategy.MirroredStrategyV1 object at 0x7f6853562450>, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:374: UserWarning: To make it possible to preserve tf.data options across serialization boundaries, their implementation has moved to be part of the TensorFlow graph. As a consequence, the options value is in general no longer known at graph construction time. Invoking this method in graph mode retains the legacy behavior of the original implementation, but note that the returned value might not reflect the actual value of the options.
  warnings.warn("To make it possible to preserve tf.data options across "
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/training/adagrad.py:77: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/util.py:95: DistributedIteratorV1.initialize (from tensorflow.python.distribute.input_lib) is deprecated and will be removed in a future version.
Instructions for updating:
Use the iterator's `initializer` property instead.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp5g_f_ufk/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
2021-09-22 20:15:43.228503: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorFromStringHandle' OpKernel for GPU devices compatible with node {{node MultiDeviceIteratorFromStringHandle}}. Registered: device='CPU'
2021-09-22 20:15:43.229960: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorGetNextFromShard' OpKernel for GPU devices compatible with node {{node MultiDeviceIteratorGetNextFromShard}}. Registered: device='CPU'
INFO:tensorflow:loss = 0.14477473, step = 0
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3...
INFO:tensorflow:Saving checkpoints for 3 into /tmp/tmp5g_f_ufk/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Starting evaluation at 2021-09-22T20:15:43
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmp5g_f_ufk/model.ckpt-3
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.17626s
INFO:tensorflow:Finished evaluation at 2021-09-22-20:15:44
INFO:tensorflow:Saving dict for global step 3: global_step = 3, loss = 1.1251448
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 3: /tmp/tmp5g_f_ufk/model.ckpt-3
INFO:tensorflow:Loss for final step: 0.45722964.
2021-09-22 20:15:44.095116: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorFromStringHandle' OpKernel for GPU devices compatible with node {{node MultiDeviceIteratorFromStringHandle}}. Registered: device='CPU'
2021-09-22 20:15:44.096454: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorGetNextFromShard' OpKernel for GPU devices compatible with node {{node MultiDeviceIteratorGetNextFromShard}}. Registered: device='CPU'
({'loss': 1.1251448, 'global_step': 3}, [])
TensorFlow 2: Single-worker training with Keras
When migrating to TensorFlow 2, you can use the Keras APIs with tf.distribute.MirroredStrategy.

If you use the tf.keras APIs for model building and Keras Model.fit for training, the main difference is instantiating the Keras model, an optimizer, and metrics in the context of Strategy.scope, instead of defining a config for tf.estimator.Estimator.

If you need to use a custom training loop, check out the Using tf.distribute.Strategy with custom training loops guide; a minimal sketch follows the Keras example below.
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
    (eval_features, eval_labels)).batch(1)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  # Create the model and optimizer, and compile, inside the strategy scope.
  model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
  model.compile(optimizer=optimizer, loss='mse')

model.fit(dataset)
model.evaluate(eval_dataset, return_dict=True)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
2021-09-22 20:15:44.265351: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2" op: "TensorSliceDataset" input: "Placeholder/_0" input: "Placeholder/_1" attr { key: "Toutput_types" value { list { type: DT_FLOAT type: DT_FLOAT } } } attr { key: "output_shapes" value { list { shape { dim { size: 2 } } shape { dim { size: 1 } } } } }
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
3/3 [==============================] - 2s 3ms/step - loss: 0.2363
2021-09-22 20:15:46.836745: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2" op: "TensorSliceDataset" input: "Placeholder/_0" input: "Placeholder/_1" attr { key: "Toutput_types" value { list { type: DT_FLOAT type: DT_FLOAT } } } attr { key: "output_shapes" value { list { shape { dim { size: 2 } } shape { dim { size: 1 } } } } }
3/3 [==============================] - 1s 3ms/step - loss: 0.0079
{'loss': 0.007883546873927116}
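If you prefer a custom training loop over Model.fit, the following is a minimal sketch under the assumptions of this toy example (it is an illustration, not code from the original guide): create the variables under Strategy.scope, distribute the dataset with Strategy.experimental_distribute_dataset, run the step on each replica with Strategy.run, and combine the per-replica losses with Strategy.reduce. The global batch size is 1 here, matching the .batch(1) call above.

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  # Variables must be created inside the strategy scope.
  model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)

dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(inputs):
  x, y = inputs
  with tf.GradientTape() as tape:
    predictions = model(x, training=True)
    # Average the per-example loss over the global batch size (1 here).
    loss = tf.nn.compute_average_loss(
        tf.keras.losses.mse(y, predictions), global_batch_size=1)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  return loss

@tf.function
def distributed_train_step(dist_inputs):
  # Run the step on every replica and sum the per-replica losses.
  per_replica_losses = strategy.run(train_step, args=(dist_inputs,))
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
  print('loss:', distributed_train_step(batch).numpy())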
Next steps

To learn more about distributed training with tf.distribute.MirroredStrategy in TensorFlow 2, check out the following documentation: