With the deprecation of global graphs, TF no longer tracks variables in
collections. In other words, there are no global variables in TF2. Thus, the
global step functions have been removed (`get_or_create_global_step`,
`create_global_step`, `get_global_step`). You have two options for migrating:

1. Create a Keras optimizer, which generates an `iterations` variable. This
   variable is automatically incremented when calling `apply_gradients`.
2. Manually create and increment a [`tf.Variable`](../../../../tf/Variable)
   (see the sketch after this list).
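
For the second option, here is a minimal sketch that is not part of the
original docs: the loss, learning rate, and the `step` variable name are
illustrative assumptions. It manually maintains a step counter in place of the
removed global step; the Keras-optimizer migration example from the original
docs follows below.

    import tensorflow as tf

    # Hypothetical manually managed counter standing in for the old global step.
    step = tf.Variable(0, dtype=tf.int64, trainable=False)

    v = tf.Variable(3.0)
    for x in [1.0, 2.0, 3.0]:
      with tf.GradientTape() as tape:
        loss = x * 5 - x * v
      grad = tape.gradient(loss, v)
      v.assign_sub(0.1 * grad)
      # Increment once per training step, mirroring how the TF1 global step
      # was incremented by `Optimizer.minimize`.
      step.assign_add(1)

    print("steps run:", step.numpy())  # steps run: 3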
[null,null,["Last updated 2024-04-26 UTC."],[],[],null,["# tf.compat.v1.train.create_global_step\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/training/training_util.py#L164-L255) |\n\nCreate global step tensor in graph. \n\n tf.compat.v1.train.create_global_step(\n graph=None\n )\n\n\u003cbr /\u003e\n\nMigrate to TF2\n--------------\n\n\u003cbr /\u003e\n\n| **Caution:** This API was designed for TensorFlow v1. Continue reading for details on how to migrate from this API to a native TensorFlow v2 equivalent. See the [TensorFlow v1 to TensorFlow v2 migration guide](https://www.tensorflow.org/guide/migrate) for instructions on how to migrate the rest of your code.\n\nWith the deprecation of global graphs, TF no longer tracks variables in\ncollections. In other words, there are no global variables in TF2. Thus, the\nglobal step functions have been removed (`get_or_create_global_step`,\n`create_global_step`, `get_global_step`) . You have two options for migrating:\n\n1. Create a Keras optimizer, which generates an `iterations` variable. This variable is automatically incremented when calling `apply_gradients`.\n2. Manually create and increment a [`tf.Variable`](../../../../tf/Variable).\n\nBelow is an example of migrating away from using a global step to using a\nKeras optimizer:\n\nDefine a dummy model and loss: \n\n def compute_loss(x):\n v = tf.Variable(3.0)\n y = x * v\n loss = x * 5 - x * v\n return loss, [v]\n\nBefore migrating: \n\n g = tf.Graph()\n with g.as_default():\n x = tf.compat.v1.placeholder(tf.float32, [])\n loss, var_list = compute_loss(x)\n global_step = tf.compat.v1.train.create_global_step()\n global_init = tf.compat.v1.global_variables_initializer()\n optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)\n train_op = optimizer.minimize(loss, global_step, var_list)\n sess = tf.compat.v1.Session(graph=g)\n sess.run(global_init)\n print(\"before training:\", sess.run(global_step))\n before training: 0\n sess.run(train_op, feed_dict={x: 3})\n print(\"after training:\", sess.run(global_step))\n after training: 1\n\nMigrating to a Keras optimizer: \n\n optimizer = tf.keras.optimizers.SGD(.01)\n print(\"before training:\", optimizer.iterations.numpy())\n before training: 0\n with tf.GradientTape() as tape:\n loss, var_list = compute_loss(3)\n grads = tape.gradient(loss, var_list)\n optimizer.apply_gradients(zip(grads, var_list))\n print(\"after training:\", optimizer.iterations.numpy())\n after training: 1\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nDescription\n-----------\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------|-------------------------------------------------------------------------------------|\n| `graph` | The graph in which to create the global step tensor. If missing, use default graph. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| Global step tensor. ||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|-------------------------------------------|\n| `ValueError` | if global step tensor is already defined. |\n\n\u003cbr /\u003e"]]