tf.estimator.tpu.experimental.EmbeddingConfigSpec
Class to keep track of the specification for TPU embeddings.
tf.estimator.tpu.experimental.EmbeddingConfigSpec(
feature_columns=None, optimization_parameters=None, clipping_limit=None,
pipeline_execution_with_tensor_core=False,
experimental_gradient_multiplier_fn=None, feature_to_config_dict=None,
table_to_config_dict=None, partition_strategy='div'
)
Pass this class to tf.estimator.tpu.TPUEstimator
via the
embedding_config_spec
parameter. At minimum you need to specify
feature_columns
and optimization_parameters
. The feature columns passed
should be created with some combination of
tf.tpu.experimental.embedding_column
and
tf.tpu.experimental.shared_embedding_columns
.
TPU embeddings do not support arbitrary TensorFlow optimizers; the main
optimizer you use for your model is ignored for the embedding table
variables. Instead, TPU embeddings support a fixed set of predefined
optimizers that you can select from and configure. These include Adagrad,
Adam, and stochastic gradient descent. Each supported optimizer has a
`Parameters`
class in the tf.tpu.experimental
namespace.
column_a = tf.feature_column.categorical_column_with_identity(...)
column_b = tf.feature_column.categorical_column_with_identity(...)
column_c = tf.feature_column.categorical_column_with_identity(...)
tpu_shared_columns = tf.tpu.experimental.shared_embedding_columns(
    [column_a, column_b], 10)
tpu_non_shared_column = tf.tpu.experimental.embedding_column(
    column_c, 10)
tpu_columns = [tpu_non_shared_column] + tpu_shared_columns
...
def model_fn(features):
  dense_features = tf.keras.layers.DenseFeatures(tpu_columns)
  embedded_feature = dense_features(features)
  ...

estimator = tf.estimator.tpu.TPUEstimator(
    model_fn=model_fn,
    ...
    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
        feature_columns=tpu_columns,
        optimization_parameters=(
            tf.estimator.tpu.experimental.AdagradParameters(0.1))))
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
<tr>
<td>
`feature_columns`
</td>
<td>
All embedding `FeatureColumn`s used by the model.
</td>
</tr><tr>
<td>
`optimization_parameters`
</td>
<td>
An instance of `AdagradParameters`,
`AdamParameters` or `StochasticGradientDescentParameters`. This
optimizer will be applied to all embedding variables specified by
`feature_columns`.
</td>
</tr><tr>
<td>
`clipping_limit`
</td>
<td>
(Optional) Clipping limit (absolute value).
</td>
</tr><tr>
<td>
`pipeline_execution_with_tensor_core`
</td>
<td>
Setting this to `True` makes training
faster, but the trained model will differ if step N and step N+1
involve the same set of embedding IDs. See
`tpu_embedding_configuration.proto` for details.
</td>
</tr><tr>
<td>
`experimental_gradient_multiplier_fn`
</td>
<td>
(Optional) A function taking the global step as
input and returning the current multiplier for all embedding gradients.
</td>
</tr><tr>
<td>
`feature_to_config_dict`
</td>
<td>
A dictionary mapping feature names to instances
of the class `FeatureConfig`. Either `feature_columns` or the pair of
`feature_to_config_dict` and `table_to_config_dict` must be specified.
</td>
</tr><tr>
<td>
`table_to_config_dict`
</td>
<td>
A dictionary mapping table names to instances of
the class `TableConfig`. Either `feature_columns` or the pair of
`feature_to_config_dict` and `table_to_config_dict` must be specified.
</td>
</tr><tr>
<td>
`partition_strategy`
</td>
<td>
A string determining how tensors are sharded across the
TPU hosts. See <a href="../../../../tf/nn/safe_embedding_lookup_sparse"><code>tf.nn.safe_embedding_lookup_sparse</code></a> for more details.
Allowed values are `"div"` and `"mod"`. If `"mod"` is used, evaluation
and exporting the model to CPU will not work as expected.
</td>
</tr>
</table>
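The difference between the two partition strategies can be sketched in plain Python. This is a simplified illustration, not the estimator's actual sharding code, of how embedding IDs map to shards (hosts) under `"div"` versus `"mod"` partitioning, assuming the semantics documented for embedding-lookup partition strategies (`"div"`: contiguous ID blocks per shard, with the first `vocab_size % num_shards` shards holding one extra ID; `"mod"`: round-robin striping):

```python
def shard_for_id(embedding_id, vocab_size, num_shards, strategy):
    """Illustrative mapping of an embedding ID to a shard index.

    "div": contiguous blocks of IDs per shard; the first
           vocab_size % num_shards shards hold one extra ID.
    "mod": IDs are striped round-robin across shards.
    """
    if strategy == "mod":
        return embedding_id % num_shards
    # "div" strategy
    ids_per_shard = vocab_size // num_shards
    extras = vocab_size % num_shards
    # The first `extras` shards each hold (ids_per_shard + 1) IDs.
    threshold = extras * (ids_per_shard + 1)
    if embedding_id < threshold:
        return embedding_id // (ids_per_shard + 1)
    return extras + (embedding_id - threshold) // ids_per_shard

# With a vocabulary of 10 IDs spread over 4 shards:
div_map = [shard_for_id(i, 10, 4, "div") for i in range(10)]
mod_map = [shard_for_id(i, 10, 4, "mod") for i in range(10)]
# div_map == [0, 0, 0, 1, 1, 1, 2, 2, 3, 3]
# mod_map == [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

Because `"mod"` scatters consecutive IDs across shards, a checkpoint sharded this way does not reassemble into the contiguous layout a CPU export expects, which is why evaluation and CPU export misbehave with `"mod"`.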
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Raises</h2></th></tr>
<tr>
<td>
`ValueError`
</td>
<td>
If `feature_columns` is not specified.
</td>
</tr><tr>
<td>
`TypeError`
</td>
<td>
If the feature columns are not of the correct type (one of
_SUPPORTED_FEATURE_COLUMNS, _TPU_EMBEDDING_COLUMN_CLASSES or
_EMBEDDING_COLUMN_CLASSES).
</td>
</tr><tr>
<td>
`ValueError`
</td>
<td>
If `optimization_parameters` is not one of the required types.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>
<tr>
<td>
`feature_columns`
</td>
<td>
</td>
</tr><tr>
<td>
`optimization_parameters`
</td>
<td>
</td>
</tr><tr>
<td>
`clipping_limit`
</td>
<td>
</td>
</tr><tr>
<td>
`pipeline_execution_with_tensor_core`
</td>
<td>
</td>
</tr><tr>
<td>
`experimental_gradient_multiplier_fn`
</td>
<td>
</td>
</tr><tr>
<td>
`feature_to_config_dict`
</td>
<td>
</td>
</tr><tr>
<td>
`table_to_config_dict`
</td>
<td>
</td>
</tr><tr>
<td>
`partition_strategy`
</td>
<td>
</td>
</tr>
</table>
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.