DynamicEnqueueTPUEmbeddingArbitraryTensorBatch

public final class DynamicEnqueueTPUEmbeddingArbitraryTensorBatch

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

embedding_indices[i] and aggregation_weights[i] correspond to the i-th feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.
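For example (a hypothetical, host-side view of the data for a single feature with three lookups spread across two samples, not produced by this class), the index and weight tensors must have the same length, while the row splits describe how the lookups are grouped into samples:

    // Hypothetical data for one feature/table.
    long[]  embeddingIndices   = {3L, 7L, 7L};        // rank 1: one id per lookup
    float[] aggregationWeights = {1.0f, 0.5f, 0.5f};  // rank 1: one weight per lookup
    int[]   rowSplits          = {0, 1, 3};           // sample 0 -> {3}, sample 1 -> {7, 7}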

Nested Classes

class DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.Options Optional attributes for DynamicEnqueueTPUEmbeddingArbitraryTensorBatch  

Public Methods

static DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.Options
combiners(List<String> combiners)
static <T extends Number, U extends Number, V extends Number> DynamicEnqueueTPUEmbeddingArbitraryTensorBatch
create(Scope scope, Iterable<Operand<T>> sampleIndicesOrRowSplits, Iterable<Operand<U>> embeddingIndices, Iterable<Operand<V>> aggregationWeights, Operand<String> modeOverride, Operand<Integer> deviceOrdinal, Options... options)
Factory method to create a class wrapping a new DynamicEnqueueTPUEmbeddingArbitraryTensorBatch operation.


Public Methods

public static DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.Options combiners (List<String> combiners)

Parameters
combiners A list of string scalars, one for each embedding table, specifying how to normalize the embedding activations after weighted summation. Supported combiners are 'mean', 'sum', or 'sqrtn'. It is invalid for the sum of the weights to be 0 for 'mean' or for the sum of the squared weights to be 0 for 'sqrtn'. If combiners isn't passed, the default is to use 'sum' for all tables.
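For instance, a model with two embedding tables could request 'mean' normalization for the first table and the default weighted 'sum' for the second (a minimal sketch; assumes java.util.Arrays is imported):

    DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.Options opts =
        DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.combiners(
            Arrays.asList("mean", "sum"));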

public static DynamicEnqueueTPUEmbeddingArbitraryTensorBatch create (Scope scope, Iterable<Operand<T>> sampleIndicesOrRowSplits, Iterable<Operand<U>> embeddingIndices, Iterable<Operand<V>> aggregationWeights, Operand<String> modeOverride, Operand<Integer> deviceOrdinal, Options... options)

Factory method to create a class wrapping a new DynamicEnqueueTPUEmbeddingArbitraryTensorBatch operation.

Parameters
scope current scope
sampleIndicesOrRowSplits A list of rank 2 Tensors specifying the training example to which the corresponding embedding_indices and aggregation_weights values belong. If the size of its first dimension is 0, we assume each embedding_indices belongs to a different sample. Both int32 and int64 are allowed and will be converted to int32 internally.

Or a list of rank 1 Tensors specifying the row splits used to split embedding_indices and aggregation_weights into rows. It corresponds to ids.row_splits in embedding_lookup() when ids is a RaggedTensor. When enqueuing an N-D ragged tensor, only the last dimension is allowed to be ragged, and the row splits must be a 1-D dense tensor. When this list is empty, a dense tensor is assumed to be passed to the op. Both int32 and int64 are allowed and will be converted to int32 internally.

embeddingIndices A list of rank 1 Tensors, indices into the embedding tables. Both int32 and int64 are allowed and will be converted to int32 internally.
aggregationWeights A list of rank 1 Tensors containing per-training-example aggregation weights. Both float32 and float64 are allowed and will be converted to float32 internally.
modeOverride A string input that overrides the mode specified in the TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference', 'training', 'backward_pass_only'}. When set to 'unspecified', the mode set in TPUEmbeddingConfiguration is used, otherwise mode_override is used.
deviceOrdinal The TPU device to use. Should be >= 0 and less than the number of TPU cores in the task on which the node is placed.
options carries optional attribute values
Returns
  • a new instance of DynamicEnqueueTPUEmbeddingArbitraryTensorBatch
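A rough usage sketch follows; scope, rowSplits, ids, weights, modeOverride and deviceOrdinal are assumed to be operands built elsewhere in the same graph (they are not part of this API), and java.util.Arrays is assumed to be imported:

    // Sketch only: one entry per feature in each of the three lists.
    DynamicEnqueueTPUEmbeddingArbitraryTensorBatch enqueue =
        DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.create(
            scope,
            Arrays.asList(rowSplits),   // Iterable<Operand<Integer>>, row splits or sample indices
            Arrays.asList(ids),         // Iterable<Operand<Long>>, indices into the embedding table
            Arrays.asList(weights),     // Iterable<Operand<Float>>, per-lookup aggregation weights
            modeOverride,               // Operand<String>, e.g. a constant "unspecified"
            deviceOrdinal,              // Operand<Integer>, the target TPU core
            DynamicEnqueueTPUEmbeddingArbitraryTensorBatch.combiners(
                Arrays.asList("sum"))); // optional; defaults to 'sum' for all tables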