tf.tpu.experimental.StochasticGradientDescentParameters
Optimization parameters for stochastic gradient descent for TPU embeddings.
tf.tpu.experimental.StochasticGradientDescentParameters(
learning_rate, clip_weight_min=None, clip_weight_max=None
)
Pass this to tf.estimator.tpu.experimental.EmbeddingConfigSpec via the optimization_parameters argument to set the optimizer and its parameters. See the documentation for tf.estimator.tpu.experimental.EmbeddingConfigSpec for more details.
estimator = tf.estimator.tpu.TPUEstimator(
    ...
    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
        ...
        optimization_parameters=(
            tf.tpu.experimental.StochasticGradientDescentParameters(0.1))))
Args:
learning_rate: A floating point value. The learning rate.
clip_weight_min: The minimum value to clip by; None means -infinity.
clip_weight_max: The maximum value to clip by; None means +infinity.
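The clip_weight_min and clip_weight_max arguments can be illustrated with a minimal pure-Python sketch of the clamping behavior described above (the clip_weight helper below is hypothetical, not part of the TensorFlow API): after an update, each embedding weight is kept within [clip_weight_min, clip_weight_max], and a bound of None is treated as unbounded in that direction.

```python
def clip_weight(weight, clip_weight_min=None, clip_weight_max=None):
    """Clamp a weight to [clip_weight_min, clip_weight_max].

    None for a bound means -infinity (min) or +infinity (max),
    i.e. no clipping on that side.
    """
    lo = float("-inf") if clip_weight_min is None else clip_weight_min
    hi = float("inf") if clip_weight_max is None else clip_weight_max
    return min(max(weight, lo), hi)

print(clip_weight(1.5, clip_weight_max=1.0))    # clipped down to 1.0
print(clip_weight(-2.0, clip_weight_min=-1.0))  # clipped up to -1.0
print(clip_weight(0.3))                         # no bounds: unchanged
```

For example, StochasticGradientDescentParameters(0.1, clip_weight_min=-1.0, clip_weight_max=1.0) would keep every embedding weight in [-1.0, 1.0] after each gradient step.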
Last updated 2020-10-01 UTC.