# tf.raw_ops.TPUEmbeddingActivations
An op enabling differentiation of TPU Embeddings.
    tf.raw_ops.TPUEmbeddingActivations(
        embedding_variable, sliced_activations, table_id, lookup_id, name=None
    )
This op simply returns its first input, which is assumed to have been sliced
from the Tensors returned by TPUEmbeddingDequeueActivations. The presence of
this op, and its first argument being a trainable Variable, enables automatic
differentiation of graphs containing embeddings via the TPU Embedding Python
libraries.
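The raw op itself requires TPU hardware, but the pattern it enables can be sketched with standard TensorFlow ops: an activation tensor tied to a trainable variable lets `tf.GradientTape` propagate gradients back to the embedding table. The sketch below uses `tf.gather` as a stand-in for slicing dequeued activations; it is an illustration of the gradient-flow idea, not a call to the TPU op.

```python
import tensorflow as tf

# Stand-in sketch (no TPU required): a trainable embedding table whose
# sliced activations participate in a loss, so gradients flow back to
# the table -- the same effect TPUEmbeddingActivations makes possible
# for activations produced on a TPU.
embedding_variable = tf.Variable(tf.random.normal([8, 4]), trainable=True)

with tf.GradientTape() as tape:
    # Analogous to activations sliced from the Tensors returned by
    # TPUEmbeddingDequeueActivations.
    sliced_activations = tf.gather(embedding_variable, [0, 2, 5])
    loss = tf.reduce_sum(tf.square(sliced_activations))

# Gradients reach the embedding table through the sliced activations.
grad = tape.gradient(loss, embedding_variable)
```

Because the variable appears as an input, optimizers can locate it and apply the computed gradients, which is exactly what the op's `embedding_variable` argument exists for.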
| Args | |
|------|---|
| `embedding_variable` | A `Tensor` of type `float32`. A trainable variable, enabling optimizers to find this op. |
| `sliced_activations` | A `Tensor` of type `float32`. The embedding activations Tensor to return. |
| `table_id` | An `int` that is `>= 0`. The id of the table in the embedding layer configuration from which these activations were computed. |
| `lookup_id` | An `int` that is `>= 0`. Identifier of the set of embedding indices which produced these activations. |
| `name` | A name for the operation (optional). |

| Returns | |
|---------|---|
| A `Tensor` of type `float32`. | |
Last updated 2023-10-06 UTC.