tf.nn.embedding_lookup_sparse
Computes embeddings for the given ids and weights.
tf.nn.embedding_lookup_sparse(
params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner=None,
max_norm=None
)
This op assumes that there is at least one id for each row in the dense tensor
represented by sp_ids (i.e. there are no rows with empty features), and that
all the indices of sp_ids are in canonical row-major order.
It also assumes that all id values lie in the range [0, p0), where p0
is the sum of the size of params along dimension 0.
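When params is a list of sharded tensors, the id-to-shard mapping depends on partition_strategy. As a hedged illustration (plain Python, not TensorFlow code), the two supported strategies assign an id to one of len(params) shards roughly as follows:

```python
def shard_for_id(idx, num_ids, num_shards, strategy="mod"):
    """Return the shard index that would hold id `idx`.

    `num_ids` is p0, the total vocabulary size summed over all shards.
    This is a sketch of the documented strategies, not TF's implementation.
    """
    if strategy == "mod":
        # Shard i holds ids i, i + num_shards, i + 2*num_shards, ...
        return idx % num_shards
    elif strategy == "div":
        # Ids are split into contiguous blocks; the first
        # (num_ids % num_shards) shards each get one extra id.
        ids_per_shard, extras = divmod(num_ids, num_shards)
        threshold = extras * (ids_per_shard + 1)
        if idx < threshold:
            return idx // (ids_per_shard + 1)
        return extras + (idx - threshold) // ids_per_shard
    raise ValueError("strategy must be 'mod' or 'div'")

# With 10 ids over 3 shards:
# "mod": ids 0,3,6,9 -> shard 0; 1,4,7 -> shard 1; 2,5,8 -> shard 2
# "div": ids 0-3 -> shard 0; 4-6 -> shard 1; 7-9 -> shard 2
```

This is why each element of params must be sized consistently with the chosen strategy: the op expects id idx to live at a specific offset within a specific shard.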
Args

params: A single tensor representing the complete embedding tensor, or a list of P tensors all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a PartitionedVariable created by partitioning along dimension 0. Each element must be appropriately sized for the given partition_strategy.

sp_ids: N x M SparseTensor of int64 ids, where N is typically the batch size and M is arbitrary.

sp_weights: Either a SparseTensor of float / double weights, or None to indicate that all weights should be taken to be 1. If specified, sp_weights must have exactly the same shape and indices as sp_ids.

partition_strategy: A string specifying the partitioning strategy, relevant if len(params) > 1. Currently "div" and "mod" are supported; the default is "mod". See tf.nn.embedding_lookup for more details.

name: Optional name for the op.

combiner: A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row; "mean" is the weighted sum divided by the total weight; "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.

max_norm: If not None, each embedding is clipped if its l2-norm is larger than this value, before combining.
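The three combiner reductions can be sketched in NumPy for a single row of sp_ids (a hedged illustration of the documented math, not the TensorFlow implementation):

```python
import numpy as np

def combine(embeddings, weights, combiner):
    """Reduce the looked-up embeddings for one row of sp_ids.

    embeddings: [num_ids, dim] array of looked-up embedding vectors.
    weights:    [num_ids] array of the corresponding sp_weights.
    """
    # Weighted sum of the embedding vectors for this row.
    weighted_sum = (embeddings * weights[:, None]).sum(axis=0)
    if combiner == "sum":
        return weighted_sum
    if combiner == "mean":
        # Divide by the total weight.
        return weighted_sum / weights.sum()
    if combiner == "sqrtn":
        # Divide by the l2-norm of the weight vector.
        return weighted_sum / np.sqrt((weights ** 2).sum())
    raise ValueError('combiner must be one of "mean", "sqrtn", "sum"')
```

For example, with two embeddings [1, 2] and [3, 4] and weights 2.0 and 0.5, "sum" yields [3.5, 6.0] and "mean" divides that by 2.5.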
Returns

A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.

In other words, if

  shape(combined params) = [p0, p1, ..., pm]

and

  shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]

then

  shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].

For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are

  [0, 0]: id 1, weight 2.0
  [0, 1]: id 3, weight 0.5
  [1, 0]: id 0, weight 1.0
  [2, 3]: id 1, weight 3.0

with combiner="mean", then the output will be a 3x20 matrix where

  output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
  output[1, :] = (params[0, :] * 1.0) / 1.0
  output[2, :] = (params[1, :] * 3.0) / 3.0
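The worked example above can be checked with a small NumPy sketch that mimics the documented math (hedged: this reproduces the "mean" combiner semantics by hand rather than calling TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.standard_normal((10, 20))  # the 10x20 embedding matrix

# Sparse entries as (row, id, weight), matching the example above.
entries = [(0, 1, 2.0), (0, 3, 0.5), (1, 0, 1.0), (2, 1, 3.0)]

output = np.zeros((3, 20))
total_weight = np.zeros(3)
for row, idx, w in entries:
    output[row] += params[idx] * w      # weighted sum per row
    total_weight[row] += w
output /= total_weight[:, None]         # combiner="mean": divide by total weight

# output[0] equals (params[1]*2.0 + params[3]*0.5) / 2.5, and so on.
```

Note that row 2 of sp_ids has a single id with weight 3.0, so the mean combiner cancels the weight and output[2] is simply params[1].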
Raises

TypeError: If sp_ids is not a SparseTensor, or if sp_weights is neither None nor a SparseTensor.

ValueError: If combiner is not one of {"mean", "sqrtn", "sum"}.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.