tfrs.experimental.layers.embedding.PartialTPUEmbedding
Partial TPU Embedding layer.
```python
tfrs.experimental.layers.embedding.PartialTPUEmbedding(
    feature_config,
    optimizer: tf.keras.optimizers.Optimizer,
    pipeline_execution_with_tensor_core: bool = False,
    batch_size: Optional[int] = None,
    size_threshold: Optional[int] = 10000
) -> None
```
This layer is composed of `tfrs.layers.embedding.TPUEmbedding` and
`tf.keras.layers.Embedding` embedding layers. When training on TPUs, it is
preferable to use TPU embedding layers for large tables (as they are sharded
across TPU cores) and Keras embedding layers for small tables. For tables with
vocabulary sizes below `size_threshold`, a Keras embedding layer is used;
above that threshold, a TPU embedding layer is used.

This layer is applied to a dictionary of feature_name, categorical_tensor
pairs and returns a string-to-tensor dictionary of feature_name,
embedded_value pairs.
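To make the routing rule concrete, here is a plain-Python sketch (not the real tfrs implementation) of how tables could be partitioned by `size_threshold`; the `partition_tables` helper and its argument shapes are hypothetical names used only for illustration:

```python
# Illustrative sketch: vocabularies below size_threshold get Keras
# embedding layers, the rest go to the TPU embedding layer.
from typing import Dict, Optional, Tuple


def partition_tables(
    vocab_sizes: Dict[str, int],
    size_threshold: Optional[int] = 10000,
) -> Tuple[Dict[str, int], Dict[str, int]]:
    """Split {table_name: vocab_size} into (keras_tables, tpu_tables)."""
    if size_threshold is None:
        # size_threshold=None: only Keras embeddings are used.
        return dict(vocab_sizes), {}
    keras, tpu = {}, {}
    for name, size in vocab_sizes.items():
        # size_threshold=0 routes every table to the TPU embedding layer.
        (keras if size < size_threshold else tpu)[name] = size
    return keras, tpu
```

With the default threshold of 10000, a million-row `user_id` table lands on the TPU embedding layer while a 30-row `device` table stays in a Keras embedding layer.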
| Args ||
|---|---|
| `feature_config` | A nested structure of `tf.tpu.experimental.embedding.FeatureConfig` configs. |
| `optimizer` | An optimizer used for TPU embeddings. |
| `pipeline_execution_with_tensor_core` | If True, the TPU embedding computations will overlap with the TensorCore computations (and hence will be one step old, with potential correctness drawbacks). Set to True for improved performance. |
| `batch_size` | If set, this is used as the global batch size and overrides autodetection of the batch size from the layer's input. This is necessary if all inputs to the layer's call are SparseTensors. |
| `size_threshold` | A threshold for table sizes below which a Keras embedding layer is used, and above which a TPU embedding layer is used. Set `size_threshold=0` to use TPU embeddings for all tables and `size_threshold=None` to use only Keras embeddings. |
| Attributes ||
|---|---|
| `keras_embedding_layers` | Returns a dictionary mapping feature names to Keras embedding layers. |
| `tpu_embedding` | Returns the `TPUEmbedding` layer, or `None` if only Keras embeddings are used. |
Methods
call
```python
call(
    inputs: Dict[str, Tensor]
) -> Dict[str, tf.Tensor]
```
Computes the output of the embedding layer.

It expects a string-to-tensor (or SparseTensor/RaggedTensor) dictionary as
input, and outputs a string-to-tensor dictionary of feature_name,
embedded_value pairs. Note that SparseTensor/RaggedTensor inputs are only
supported by the TPU embedding layer, not by Keras embedding layers.
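The call contract above can be illustrated with a toy, TPU-free stand-in: a `{feature_name: ids}` dictionary goes in and a `{feature_name: embedded_values}` dictionary comes out, each feature looked up in its own table. The `lookup` helper and its list-of-rows table format are purely illustrative assumptions, not the real API:

```python
# Toy illustration of the dict-in, dict-out embedding contract.
from typing import Dict, List, Sequence


def lookup(
    inputs: Dict[str, Sequence[int]],
    tables: Dict[str, List[List[float]]],
) -> Dict[str, List[List[float]]]:
    """Embed each feature's ids by row lookup in its own table."""
    return {
        feature: [tables[feature][i] for i in ids]
        for feature, ids in inputs.items()
    }
```

In the real layer, each value in the output dictionary is a dense `tf.Tensor` of embeddings rather than a list of rows.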
| Args ||
|---|---|
| `inputs` | A string-to-tensor (or SparseTensor/RaggedTensor) dictionary. |

| Returns ||
|---|---|
| `output` | A string-to-tensor dictionary of feature_name, embedded_value pairs. |

| Raises ||
|---|---|
| `ValueError` | If an input that is not a `tf.Tensor` is passed to a Keras embedding layer. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.