tensorflow::ops::ResourceApplyAdamWithAmsgrad
#include <training_ops.h>
Update '*var' according to the Adam algorithm.
Summary
$$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$ $$m_t := beta_1 * m_{t-1} + (1 - beta_1) * g$$ $$v_t := beta_2 * v_{t-1} + (1 - beta_2) * g * g$$ $$vhat_t := \max(vhat_{t-1}, v_t)$$ $$variable := variable - lr_t * m_t / (\sqrt{vhat_t} + \epsilon)$$
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- m: Should be from a Variable().
- v: Should be from a Variable().
- vhat: Should be from a Variable().
- beta1_power: Must be a scalar.
- beta2_power: Must be a scalar.
- lr: Scaling factor. Must be a scalar.
- beta1: Momentum factor. Must be a scalar.
- beta2: Momentum factor. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If `True`, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- the created Operation
Constructors and Destructors |
---|
ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad) |
ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ResourceApplyAdamWithAmsgrad::Attrs & attrs) |
Public attributes
operation
Operation operation
Public functions
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad
)
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad,
const ResourceApplyAdamWithAmsgrad::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-26 (UTC).