tensorflow::ops::ResourceApplyAdamWithAmsgrad Class Reference
#include <training_ops.h>
Update '*var' according to the Adam algorithm.
Summary
$$lr_t := \text{learning\_rate} \cdot \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$
$$m_t := \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g$$
$$v_t := \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g \cdot g$$
$$\hat{v}_t := \max(\hat{v}_{t-1}, v_t)$$
$$\text{variable} := \text{variable} - lr_t \cdot m_t / (\sqrt{\hat{v}_t} + \epsilon)$$
Arguments:
- scope: a Scope object
- var: should be from a Variable().
- m: should be from a Variable().
- v: should be from a Variable().
- vhat: should be from a Variable().
- beta1_power: must be a scalar.
- beta2_power: must be a scalar.
- lr: scaling factor. Must be a scalar.
- beta1: momentum factor. Must be a scalar.
- beta2: momentum factor. Must be a scalar.
- epsilon: ridge term. Must be a scalar.
- grad: the gradient.
Optional attributes (see Attrs):
- use_locking: if `True`, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- the created Operation
| Constructors and Destructors |
|---|
| ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad) |
| ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ResourceApplyAdamWithAmsgrad::Attrs & attrs) |
Public attributes
- operation: Operation
Public functions
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad
)
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input vhat,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad,
const ResourceApplyAdamWithAmsgrad::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-26 UTC.