Load Adadelta parameters with debug support.
tf.raw_ops.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug(
    parameters, accumulators, updates, gradient_accumulators, num_shards, shard_id,
    table_id=-1, table_name='', config='', name=None
)
An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
| Args | |
|---|---|
| `parameters` | A `Tensor` of type `float32`. Value of parameters used in the Adadelta optimization algorithm. |
| `accumulators` | A `Tensor` of type `float32`. Value of accumulators used in the Adadelta optimization algorithm. |
| `updates` | A `Tensor` of type `float32`. Value of updates used in the Adadelta optimization algorithm. |
| `gradient_accumulators` | A `Tensor` of type `float32`. Value of gradient_accumulators used in the Adadelta optimization algorithm. |
| `num_shards` | An `int`. |
| `shard_id` | An `int`. |
| `table_id` | An optional `int`. Defaults to `-1`. |
| `table_name` | An optional `string`. Defaults to `""`. |
| `config` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| The created `Operation`. | |
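For context on what the four tensors hold: `accumulators` and `updates` correspond to Adadelta's two slot variables (running averages of squared gradients and of squared updates), and `gradient_accumulators` carries the accumulated gradients used by this debug variant. Below is a minimal pure-Python sketch of one Adadelta step for a single weight; the `rho`, `epsilon`, and `lr` values are illustrative assumptions, not defaults of the TPU embedding engine.

```python
import math

def adadelta_step(param, accum, update_accum, grad,
                  rho=0.95, epsilon=1e-6, lr=1.0):
    """One Adadelta update for a single scalar weight.

    accum        -> the `accumulators` slot (running average of grad**2)
    update_accum -> the `updates` slot (running average of update**2)
    rho/epsilon/lr are illustrative hyperparameters, not engine defaults.
    """
    # Accumulate the squared gradient.
    accum = rho * accum + (1.0 - rho) * grad * grad
    # Scale the gradient by the ratio of RMS(update) to RMS(grad).
    delta = math.sqrt(update_accum + epsilon) / math.sqrt(accum + epsilon) * grad
    # Accumulate the squared update.
    update_accum = rho * update_accum + (1.0 - rho) * delta * delta
    # Apply the update.
    param -= lr * delta
    return param, accum, update_accum

param, accum, update_accum = adadelta_step(1.0, 0.0, 0.0, grad=0.5)
```

When restoring from a checkpoint, these per-weight slot values are what the op installs into HBM in bulk, sharded across `num_shards` hosts.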