tensorflow::ops::ResourceApplyAdamWithAmsgrad
#include <training_ops.h>
Update '*var' according to the Adam algorithm with the AMSGrad extension.
Summary
$$\text{lr}_t := \mathrm{learning\_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$
$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$
$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$
$$\hat{v}_t := \max(\hat{v}_{t-1}, v_t)$$
$$\text{variable} := \text{variable} - \text{lr}_t * m_t / (\sqrt{\hat{v}_t} + \epsilon)$$
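Read element-wise, one step of this update can be sketched in standalone C++. This is an illustrative restatement of the formulas above, not the kernel's actual implementation; all names are hypothetical:

```cpp
#include <algorithm>
#include <cmath>

// Per-element optimizer state: first moment, second moment, and the
// running maximum of the second moment that AMSGrad adds to Adam.
struct AmsgradState {
  float m = 0.0f, v = 0.0f, vhat = 0.0f;
};

// One AMSGrad step for a single scalar parameter, mirroring the
// update rules in the summary above.
float AmsgradStep(float variable, float g, AmsgradState& s,
                  float lr, float beta1, float beta2, float epsilon,
                  float beta1_power, float beta2_power) {
  // lr_t := lr * sqrt(1 - beta2^t) / (1 - beta1^t)
  const float lr_t = lr * std::sqrt(1.0f - beta2_power) / (1.0f - beta1_power);
  s.m = beta1 * s.m + (1.0f - beta1) * g;        // biased first-moment estimate
  s.v = beta2 * s.v + (1.0f - beta2) * g * g;    // biased second-moment estimate
  s.vhat = std::max(s.vhat, s.v);                // AMSGrad max correction
  return variable - lr_t * s.m / (std::sqrt(s.vhat) + epsilon);
}
```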
Args:
- scope: A Scope object
- var: Should be from a Variable().
- m: Should be from a Variable().
- v: Should be from a Variable().
- vhat: Should be from a Variable().
- beta1_power: Must be a scalar.
- beta2_power: Must be a scalar.
- lr: Scaling factor. Must be a scalar.
- beta1: Momentum factor. Must be a scalar.
- beta2: Momentum factor. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If `True`, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- the created Operation
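As a construction sketch, the op can be wired into a graph roughly as follows. This is a minimal, hedged example: the shapes, initial values, and hyperparameter constants are illustrative assumptions, and variable setup is simplified.

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/resource_variable_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

using namespace tensorflow;

int main() {
  Scope scope = Scope::NewRootScope();

  // Resource handles for the variable and the optimizer's slot buffers.
  auto var  = ops::VarHandleOp(scope.WithOpName("var"),  DT_FLOAT, TensorShape({2}));
  auto m    = ops::VarHandleOp(scope.WithOpName("m"),    DT_FLOAT, TensorShape({2}));
  auto v    = ops::VarHandleOp(scope.WithOpName("v"),    DT_FLOAT, TensorShape({2}));
  auto vhat = ops::VarHandleOp(scope.WithOpName("vhat"), DT_FLOAT, TensorShape({2}));

  // All four buffers must be initialized before the update runs.
  auto init_var  = ops::AssignVariableOp(scope, var,  ops::Const(scope, {1.0f, 2.0f}));
  auto init_m    = ops::AssignVariableOp(scope, m,    ops::Const(scope, {0.0f, 0.0f}));
  auto init_v    = ops::AssignVariableOp(scope, v,    ops::Const(scope, {0.0f, 0.0f}));
  auto init_vhat = ops::AssignVariableOp(scope, vhat, ops::Const(scope, {0.0f, 0.0f}));

  auto grad = ops::Const(scope, {0.1f, -0.2f});
  auto apply = ops::ResourceApplyAdamWithAmsgrad(
      scope, var, m, v, vhat,
      ops::Const(scope, 0.9f),    // beta1_power (beta1^t), illustrative
      ops::Const(scope, 0.999f),  // beta2_power (beta2^t), illustrative
      ops::Const(scope, 0.001f),  // lr
      ops::Const(scope, 0.9f),    // beta1
      ops::Const(scope, 0.999f),  // beta2
      ops::Const(scope, 1e-8f),   // epsilon
      grad);

  ClientSession session(scope);
  TF_CHECK_OK(session.Run({}, {}, {init_var, init_m, init_v, init_vhat}, nullptr));
  TF_CHECK_OK(session.Run({}, {}, {apply}, nullptr));
  return 0;
}
```

Because the op returns no outputs, it is run for its side effect: the implicit conversion to Operation lets it be passed as a run target.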
Constructors and Destructors
- ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad)
- ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ResourceApplyAdamWithAmsgrad::Attrs & attrs)
Public attributes
- operation: Operation

Structs
- tensorflow::ops::ResourceApplyAdamWithAmsgrad::Attrs: Optional attribute setters for ResourceApplyAdamWithAmsgrad.
Public functions
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad
)
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad,
const ResourceApplyAdamWithAmsgrad::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
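A hedged fragment showing how the attribute is passed through the Attrs-taking constructor; `scope`, the resource handles, and the scalar inputs are assumed to be defined as in the construction sketch above:

```cpp
// Enable locked updates via the optional use_locking attribute.
auto apply_locked = ops::ResourceApplyAdamWithAmsgrad(
    scope, var, m, v, vhat, beta1_power, beta2_power,
    lr, beta1, beta2, epsilon, grad,
    ops::ResourceApplyAdamWithAmsgrad::UseLocking(true));
```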