Load FTRL embedding parameters with debug support.
```python
tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug(
    parameters, accumulators, linears, gradient_accumulators, num_shards,
    shard_id, table_id=-1, table_name='', config='', name=None
)
```
An op that loads optimization parameters into HBM for embedding. Must be preceded by a `ConfigureTPUEmbeddingHost` op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
| Args | |
|---|---|
| `parameters` | A `Tensor` of type `float32`. Value of parameters used in the FTRL optimization algorithm. |
| `accumulators` | A `Tensor` of type `float32`. Value of accumulators used in the FTRL optimization algorithm. |
| `linears` | A `Tensor` of type `float32`. Value of linears used in the FTRL optimization algorithm. |
| `gradient_accumulators` | A `Tensor` of type `float32`. Value of gradient_accumulators used in the FTRL optimization algorithm. |
| `num_shards` | An `int`. |
| `shard_id` | An `int`. |
| `table_id` | An optional `int`. Defaults to `-1`. |
| `table_name` | An optional `string`. Defaults to `""`. |
| `config` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns |
|---|
| The created `Operation`. |
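
The sketch below shows roughly how a call to this raw op might look when restoring FTRL table state from checkpointed values before a training loop. The table shape, shard layout, and `table_name` used here are hypothetical placeholders, and the op only executes on a TPU system whose embedding tables have already been configured (for example by the configuration step mentioned above); it is not runnable on CPU or GPU.

```python
import tensorflow as tf

# Hypothetical embedding table shape: (vocabulary size, embedding dimension).
vocab_size, dim = 1024, 64

# Placeholder values for the FTRL slot variables; in practice these would
# come from a restored checkpoint.
parameters = tf.zeros([vocab_size, dim], dtype=tf.float32)
accumulators = tf.zeros([vocab_size, dim], dtype=tf.float32)
linears = tf.zeros([vocab_size, dim], dtype=tf.float32)
gradient_accumulators = tf.zeros([vocab_size, dim], dtype=tf.float32)

# Push the values into TPU HBM for the configured embedding table.
load_op = tf.raw_ops.LoadTPUEmbeddingFTRLParametersGradAccumDebug(
    parameters=parameters,
    accumulators=accumulators,
    linears=linears,
    gradient_accumulators=gradient_accumulators,
    num_shards=1,   # total number of shards the table is split across
    shard_id=0,     # shard handled by this host
    table_name="ftrl_embedding_table",  # hypothetical table name
)
```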