Update '*var' according to the PowerSign update.
tf.raw_ops.ApplyPowerSign(
var, m, lr, logbase, sign_decay, beta, grad, use_locking=False, name=None
)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
variable <- variable - lr_t * update
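For reference, a minimal NumPy sketch of this update rule (hyperparameter values are illustrative; unlike the op itself, this returns new arrays rather than mutating `var` and `m` in place):

```python
import numpy as np

def power_sign_update(var, m, grad, lr=0.1, logbase=np.log(2.0),
                      sign_decay=1.0, beta=0.9):
    """NumPy sketch of the PowerSign update computed by this op."""
    # m_t <- beta1 * m_{t-1} + (1 - beta1) * g
    m = beta * m + (1.0 - beta) * grad
    # update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
    update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
    # variable <- variable - lr_t * update
    var = var - lr * update
    return var, m

var, m = np.array([1.0, 2.0]), np.zeros(2)
var, m = power_sign_update(var, m, grad=np.array([0.5, -0.3]))
```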
| Args | |
|---|---|
| `var` | A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Should be from a `Variable()`. |
| `m` | A mutable `Tensor`. Must have the same type as `var`. Should be from a `Variable()`. |
| `lr` | A `Tensor`. Must have the same type as `var`. Scaling factor. Must be a scalar. |
| `logbase` | A `Tensor`. Must have the same type as `var`. Must be a scalar. |
| `sign_decay` | A `Tensor`. Must have the same type as `var`. Must be a scalar. |
| `beta` | A `Tensor`. Must have the same type as `var`. Must be a scalar. |
| `grad` | A `Tensor`. Must have the same type as `var`. The gradient. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, updating of the `var` and `m` tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| A mutable `Tensor`. Has the same type as `var`. |
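As a usage sketch: this raw op expects reference-style variables, so under TF 2.x it is typically exercised through `tf.compat.v1` graph mode (the variable shapes and hyperparameter values below are illustrative):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Ref-style variables (use_resource=False), since the op mutates them in place.
var = tf.compat.v1.Variable([1.0, 2.0], use_resource=False)
m = tf.compat.v1.Variable([0.0, 0.0], use_resource=False)

apply_op = tf.raw_ops.ApplyPowerSign(
    var=var, m=m,
    lr=0.1, logbase=tf.math.log(2.0),  # illustrative hyperparameters
    sign_decay=1.0, beta=0.9,
    grad=tf.constant([0.5, -0.3]),
)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(apply_op))  # the updated value of var
```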