tensorflow::ops::ApplyAdam Class Reference
#include <training_ops.h>
Update '*var' according to the Adam algorithm.
Summary
$$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$
$$m_t := beta_1 * m_{t-1} + (1 - beta_1) * g$$
$$v_t := beta_2 * v_{t-1} + (1 - beta_2) * g * g$$
$$variable := variable - lr_t * m_t / (\sqrt{v_t} + epsilon)$$
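For concreteness, the following is a minimal standalone C++ sketch (independent of this op) that applies the update equations above to a single scalar parameter. The quadratic loss, hyperparameter values, and three-step loop are illustrative assumptions only.

```cpp
#include <cmath>
#include <cstdio>

int main() {
  double var = 1.0;        // parameter being optimized
  double m = 0.0, v = 0.0; // first and second moment accumulators
  const double learning_rate = 0.001, beta1 = 0.9, beta2 = 0.999,
               epsilon = 1e-8;

  for (int t = 1; t <= 3; ++t) {
    double g = 2.0 * var;  // example gradient: d/dvar of var^2

    // lr_t := learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
    double lr_t = learning_rate * std::sqrt(1.0 - std::pow(beta2, t)) /
                  (1.0 - std::pow(beta1, t));

    // m_t := beta1 * m_{t-1} + (1 - beta1) * g
    m = beta1 * m + (1.0 - beta1) * g;
    // v_t := beta2 * v_{t-1} + (1 - beta2) * g * g
    v = beta2 * v + (1.0 - beta2) * g * g;

    // variable := variable - lr_t * m_t / (sqrt(v_t) + epsilon)
    var -= lr_t * m / (std::sqrt(v) + epsilon);

    std::printf("step %d: var = %.6f\n", t, var);
  }
  return 0;
}
```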
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- m: Should be from a Variable().
- v: Should be from a Variable().
- beta1_power: Must be a scalar.
- beta2_power: Must be a scalar.
- lr: Scaling factor. Must be a scalar.
- beta1: Momentum factor. Must be a scalar.
- beta2: Momentum factor. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
- use_nesterov: If True, uses the Nesterov update.
Returns:
- Output: Same as "var".
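As a rough usage sketch, the snippet below wires this op into a small graph with the tensorflow/cc client API: it creates the variable and its two moment accumulators, initializes them, and runs one Adam step for a fixed gradient. The header paths, shapes, gradient, and hyperparameter values are illustrative assumptions and should be checked against your TensorFlow build.

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // Parameter and the two moment accumulators; all must be Variables.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto m   = Variable(root, {2}, DT_FLOAT);
  auto v   = Variable(root, {2}, DT_FLOAT);

  auto init_var = Assign(root, var, Const(root, {1.0f, 2.0f}));
  auto init_m   = Assign(root, m,   Const(root, {0.0f, 0.0f}));
  auto init_v   = Assign(root, v,   Const(root, {0.0f, 0.0f}));

  // Scalar hyperparameters; beta1_power / beta2_power are beta^t at step t=1.
  auto apply = ApplyAdam(root, var, m, v,
                         Const(root, 0.9f),    // beta1_power
                         Const(root, 0.999f),  // beta2_power
                         Const(root, 0.001f),  // lr
                         Const(root, 0.9f),    // beta1
                         Const(root, 0.999f),  // beta2
                         Const(root, 1e-8f),   // epsilon
                         Const(root, {0.5f, -0.5f}));  // grad

  ClientSession session(root);

  std::vector<Tensor> ignored;
  TF_CHECK_OK(session.Run({init_var, init_m, init_v}, &ignored));

  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({apply}, &outputs));
  // outputs[0] now holds var after one Adam step (same as "var").
  return 0;
}
```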
| Constructors and Destructors |
|---|
| ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad) |
| ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ApplyAdam::Attrs & attrs) |
Public attributes
operation
Operation operation
out
::tensorflow::Output out
Public functions
ApplyAdam
ApplyAdam(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad
)
ApplyAdam
ApplyAdam(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input m,
::tensorflow::Input v,
::tensorflow::Input beta1_power,
::tensorflow::Input beta2_power,
::tensorflow::Input lr,
::tensorflow::Input beta1,
::tensorflow::Input beta2,
::tensorflow::Input epsilon,
::tensorflow::Input grad,
const ApplyAdam::Attrs & attrs
)
node
::tensorflow::Node * node() const
operator::tensorflow::Input
operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
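As a small fragment continuing the graph sketch from the Summary section (it reuses root and apply, plus ops::Multiply, which is assumed to be available from standard_ops.h), this shows what the conversion operators and node() give you: the op can feed other ops directly and exposes its underlying graph node.

```cpp
// apply converts implicitly to an Input/Output, so it can feed other ops
// or be fetched from a session; node() exposes the underlying graph node.
auto doubled = Multiply(root, apply, Const(root, 2.0f));
const std::string op_name = apply.node()->name();
```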
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
UseNesterov
Attrs UseNesterov(
bool x
)
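Continuing the same graph sketch, the fragment below shows one way to pass the optional attributes through the second constructor. It reuses root, var, m, and v from that sketch and assumes the generated Attrs struct exposes chainable setters matching the static functions listed above.

```cpp
// Build an Attrs value with both optional attributes set, then pass it as
// the final constructor argument.
ApplyAdam::Attrs attrs = ApplyAdam::UseLocking(true).UseNesterov(true);

auto apply_with_attrs = ApplyAdam(root, var, m, v,
                                  Const(root, 0.9f),    // beta1_power
                                  Const(root, 0.999f),  // beta2_power
                                  Const(root, 0.001f),  // lr
                                  Const(root, 0.9f),    // beta1
                                  Const(root, 0.999f),  // beta2
                                  Const(root, 1e-8f),   // epsilon
                                  Const(root, {0.5f, -0.5f}),  // grad
                                  attrs);
```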