tensorflow::ops::ResourceApplyAdamWithAmsgrad
#include <training_ops.h>
Update '*var' according to the Adam algorithm.
Summary
$$lr_t := \text{learning\_rate} \cdot \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$ $$m_t := \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g$$ $$v_t := \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g \cdot g$$ $$vhat_t := \max(vhat_{t-1}, v_t)$$ $$\text{variable} := \text{variable} - lr_t \cdot m_t / (\sqrt{vhat_t} + \epsilon)$$
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- m: Should be from a Variable().
- v: Should be from a Variable().
- vhat: Should be from a Variable().
- beta1_power: Must be a scalar.
- beta2_power: Must be a scalar.
- lr: Scaling factor. Must be a scalar.
- beta1: Momentum factor. Must be a scalar.
- beta2: Momentum factor. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If `True`, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- the created Operation
| Constructors and Destructors |
|---|
| ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad) |
| ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ResourceApplyAdamWithAmsgrad::Attrs & attrs) |
Public attributes
- operation: Operation
Public functions
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad
)
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad,
const ResourceApplyAdamWithAmsgrad::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-26 UTC.