Update `*var` according to the Adam algorithm with the AMSGrad variant.

$$\text{lr}_t := \text{learning\_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$
$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$
$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$
$$\hat{v}_t := \max(\hat{v}_{t-1}, v_t)$$
$$\text{variable} := \text{variable} - \text{lr}_t * m_t / (\sqrt{\hat{v}_t} + \epsilon)$$
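The update equations above can be sketched in plain Java on `double` arrays. The class and method names here are illustrative only, not part of the TensorFlow API; `beta1Power` and `beta2Power` stand for \(\beta_1^t\) and \(\beta_2^t\), as in the op's inputs:

```java
// Illustrative sketch of one AMSGrad update step, mirroring the equations
// above. Not the TensorFlow implementation; names are hypothetical.
public class AmsgradSketch {
    public static void step(double[] var, double[] m, double[] v, double[] vhat,
                            double beta1Power, double beta2Power,
                            double lr, double beta1, double beta2, double epsilon,
                            double[] grad) {
        // lr_t := learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
        double lrT = lr * Math.sqrt(1 - beta2Power) / (1 - beta1Power);
        for (int i = 0; i < var.length; i++) {
            // m_t := beta1 * m_{t-1} + (1 - beta1) * g
            m[i] = beta1 * m[i] + (1 - beta1) * grad[i];
            // v_t := beta2 * v_{t-1} + (1 - beta2) * g * g
            v[i] = beta2 * v[i] + (1 - beta2) * grad[i] * grad[i];
            // vhat_t := max(vhat_{t-1}, v_t)  -- the AMSGrad modification
            vhat[i] = Math.max(vhat[i], v[i]);
            // variable := variable - lr_t * m_t / (sqrt(vhat_t) + epsilon)
            var[i] -= lrT * m[i] / (Math.sqrt(vhat[i]) + epsilon);
        }
    }
}
```

Because `vhat` is the running maximum of `v`, the effective per-coordinate step size is non-increasing across iterations, which is the distinguishing property of AMSGrad over plain Adam.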

### Summary

| Return type | Method |
| --- | --- |
| `static ResourceApplyAdamWithAmsgrad` | `create(Scope scope, Operand<?> var, Operand<?> m, Operand<?> v, Operand<?> vhat, Operand<T> beta1Power, Operand<T> beta2Power, Operand<T> lr, Operand<T> beta1, Operand<T> beta2, Operand<T> epsilon, Operand<T> grad, Options... options)` — Factory method to create a class wrapping a new ResourceApplyAdamWithAmsgrad operation. |
| `static ResourceApplyAdamWithAmsgrad.Options` | `useLocking(Boolean useLocking)` |

## Public Methods

#### public static <T> ResourceApplyAdamWithAmsgrad create (Scope scope, Operand<?> var, Operand<?> m, Operand<?> v, Operand<?> vhat, Operand<T> beta1Power, Operand<T> beta2Power, Operand<T> lr, Operand<T> beta1, Operand<T> beta2, Operand<T> epsilon, Operand<T> grad, Options... options)

Factory method to create a class wrapping a new ResourceApplyAdamWithAmsgrad operation.

##### Parameters
| Parameter | Description |
| --- | --- |
| scope | current scope |
| var | Should be from a Variable(). |
| m | Should be from a Variable(). |
| v | Should be from a Variable(). |
| vhat | Should be from a Variable(). |
| beta1Power | Must be a scalar. |
| beta2Power | Must be a scalar. |
| lr | Scaling factor. Must be a scalar. |
| beta1 | Momentum factor. Must be a scalar. |
| beta2 | Momentum factor. Must be a scalar. |
| epsilon | Ridge term. Must be a scalar. |
| grad | The gradient. |
| options | carries optional attributes values |

#### public static ResourceApplyAdamWithAmsgrad.Options useLocking (Boolean useLocking)

##### Parameters
| Parameter | Description |
| --- | --- |
| useLocking | If `True`, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |