tfrs.experimental.models.Ranking
A configurable ranking model.
Inherits From: Model
tfrs.experimental.models.Ranking(
embedding_layer: tf.keras.layers.Layer,
bottom_stack: Optional[tf.keras.layers.Layer] = None,
feature_interaction: Optional[tf.keras.layers.Layer] = None,
top_stack: Optional[tf.keras.layers.Layer] = None,
task: Optional[tfrs.tasks.Task] = None
) -> None
This class represents a sensible and reasonably flexible configuration for a
ranking model that can be used for tasks such as CTR prediction.
It can be customized as needed, and its constituent blocks can be changed by
passing user-defined alternatives.
For example:

- Pass feature_interaction = tfrs.layers.feature_interaction.DotInteraction()
  to train a DLRM model, or pass

  feature_interaction = tf.keras.Sequential([
      tf.keras.layers.Concatenate(),
      tfrs.layers.feature_interaction.Cross()
  ])

  to train a DCN model.
- Pass task = tfrs.tasks.Ranking(loss=tf.keras.losses.BinaryCrossentropy())
  to train a CTR prediction model, and
  tfrs.tasks.Ranking(loss=tf.keras.losses.MeanSquaredError()) to train
  a rating prediction model.
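As intuition for what a DLRM-style dot interaction computes: it takes the per-feature embedding vectors (all of the same dimension) and emits the dot product of every pair. A toy, TensorFlow-free sketch of that operation (the function name and data are made up for illustration; the real layer is tfrs.layers.feature_interaction.DotInteraction):

```python
from itertools import combinations

def dot_interaction(vectors):
    """Toy DLRM-style dot interaction: pairwise dot products of
    equal-length embedding vectors, returned as a flat list."""
    return [
        sum(a * b for a, b in zip(u, v))
        for u, v in combinations(vectors, 2)
    ]

# Three 2-d "embeddings"
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(dot_interaction(feats))  # -> [0.0, 1.0, 1.0]
```

For n feature vectors this produces n*(n-1)/2 interaction terms, which is why the interaction output grows quadratically in the number of sparse features.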
Changing these should cover a broad range of models, but this class is not
intended to cover all possible use cases. For full flexibility, inherit
from tfrs.models.Model and provide your own implementations of the
compute_loss and call methods.
Args

embedding_layer
The embedding layer, applied to categorical features. It expects a dict
mapping feature names to tensors (or SparseTensors/RaggedTensors) as input,
and outputs a dict mapping each feature name to its embedded value:
{feature_name_i: tensor_i} -> {feature_name_i: emb(tensor_i)}.

bottom_stack
The bottom_stack layer, applied to dense features before feature
interaction. If None, an MLP with layer sizes [256, 64, 16] is used. For a
DLRM model, the output of bottom_stack should have shape
(batch_size, embedding dimension).

feature_interaction
The feature interaction layer, applied to the bottom_stack output and the
sparse feature embeddings. If None, a DotInteraction layer is used.

top_stack
The top_stack layer, applied to the feature_interaction output. Its output
should be in the range [0, 1]. If None, an MLP with layer sizes
[512, 256, 1] is used.

task
The task the model should optimize for. Defaults to a tfrs.tasks.Ranking
task with a binary cross-entropy loss, suitable for tasks like click
prediction.
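The arguments above describe a data flow: sparse features are embedded, dense features pass through bottom_stack, the two are combined by feature_interaction, and top_stack squashes the result to a score in [0, 1]. A toy, framework-free sketch of that flow (the function, the lookup table, and the stand-in "layers" are all hypothetical simplifications, not the library's implementation):

```python
def toy_ranking_forward(dense, sparse, emb_table, dim=2):
    """Hypothetical sketch of the Ranking data flow; the real blocks
    are Keras layers, these are toy stand-ins."""
    # embedding_layer: {feature_name: id} -> {feature_name: embedding}
    embedded = {name: emb_table[name][idx] for name, idx in sparse.items()}
    # bottom_stack: dense features -> a vector of the embedding dimension
    bottom = [sum(dense)] * dim  # stand-in for an MLP
    # feature_interaction: combine bottom output with sparse embeddings
    vectors = [bottom] + list(embedded.values())
    interacted = [sum(a * b for a, b in zip(u, v))
                  for i, u in enumerate(vectors)
                  for v in vectors[i + 1:]]
    # top_stack: squash the interaction output to a prediction in [0, 1]
    score = sum(interacted)
    return max(0.0, min(1.0, score))  # stand-in for a sigmoid-capped MLP

score = toy_ranking_forward(
    dense=[0.5, 0.5],
    sparse={"user_id": 1},
    emb_table={"user_id": [[0.1, 0.2], [0.3, 0.4]]},
)
print(score)  # a value in [0, 1]
```

Note the constraint the docstrings imply: for the dot interaction to be well defined, the bottom_stack output must have the same dimensionality as the sparse-feature embeddings.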
Attributes

dense_trainable_variables
Returns all trainable variables that are not embeddings.

embedding_trainable_variables
Returns the trainable variables from the embedding tables.
When training a recommendation model with embedding tables, it is sometimes
preferable to use separate optimizers/learning rates for the embedding
variables and the dense variables.
tfrs.experimental.optimizers.CompositeOptimizer can be used to apply
different optimizers to the embedding variables and the remaining variables.
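As a toy illustration of why this variable split is useful, here is a framework-free sketch of applying different SGD learning rates to two variable groups (the names and values are made up; the real mechanism composes Keras optimizers via CompositeOptimizer):

```python
def sgd_step(params, grads, lr):
    """One plain SGD update: p <- p - lr * g, elementwise."""
    return [p - lr * g for p, g in zip(params, grads)]

# Hypothetical split mirroring embedding_trainable_variables and
# dense_trainable_variables, each updated with its own learning rate.
embedding_vars, dense_vars = [1.0, 2.0], [3.0]
embedding_grads, dense_grads = [0.5, 0.5], [1.0]

embedding_vars = sgd_step(embedding_vars, embedding_grads, lr=0.1)
dense_vars = sgd_step(dense_vars, dense_grads, lr=0.01)
print(embedding_vars, dense_vars)
```

Embedding tables are typically sparse and huge, so they often benefit from a larger learning rate (or an adaptive optimizer) than the dense layers; the two-attribute split makes that routing straightforward.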
Methods
call
View source
call(
inputs: Dict[str, tf.Tensor]
) -> tf.Tensor
Executes forward and backward pass, returns loss.
Args

inputs
Model function inputs (features and labels).

Returns

loss
Scalar tensor.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.