tensorflow::ops::ApplyAdagradDA
#include <training_ops.h>
Update '*var' according to the proximal adagrad scheme.
Summary
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- lr: Scaling factor. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- Output: Same as "var".
Constructors and Destructors
- ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)
- ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ApplyAdagradDA::Attrs & attrs)
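As a hedged sketch of how the two constructor overloads differ (this helper is not part of the TensorFlow API; its name, parameter list, and include paths are assumptions made for illustration):

```cpp
#include "tensorflow/cc/framework/ops.h"
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/training_ops.h"

// Illustrative helper only: chooses between the two constructor overloads
// listed above, returning the op's output via its Output conversion operator.
tensorflow::Output BuildAdagradDAUpdate(
    const tensorflow::Scope& scope, tensorflow::Input var,
    tensorflow::Input gradient_accumulator,
    tensorflow::Input gradient_squared_accumulator, tensorflow::Input grad,
    tensorflow::Input lr, tensorflow::Input l1, tensorflow::Input l2,
    tensorflow::Input global_step, bool use_locking) {
  using tensorflow::ops::ApplyAdagradDA;
  if (use_locking) {
    // Second overload: pass an Attrs value built with the UseLocking setter.
    return ApplyAdagradDA(scope, var, gradient_accumulator,
                          gradient_squared_accumulator, grad, lr, l1, l2,
                          global_step, ApplyAdagradDA::UseLocking(true));
  }
  // First overload: default attributes (use_locking defaults to false).
  return ApplyAdagradDA(scope, var, gradient_accumulator,
                        gradient_squared_accumulator, grad, lr, l1, l2,
                        global_step);
}
```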
Public attributes
- Operation operation
- ::tensorflow::Output out
Public functions
- ::tensorflow::Node * node() const
- operator::tensorflow::Input() const
- operator::tensorflow::Output() const
Public static functions
- Attrs UseLocking(bool x)
Structs
- tensorflow::ops::ApplyAdagradDA::Attrs: Optional attribute setters for ApplyAdagradDA.
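For completeness, a minimal end-to-end sketch of wiring this op into a graph and running it with a ClientSession. The shape {2}, the hyperparameter values, and the hard-coded gradient are assumptions chosen purely for the example, not values suggested by the reference:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;       // Scope, Tensor, DT_FLOAT, ...
  using namespace tensorflow::ops;  // Variable, Const, Assign, ApplyAdagradDA

  Scope root = Scope::NewRootScope();

  // Trainable variable and the two accumulators, all of shape {2}.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto grad_acc = Variable(root, {2}, DT_FLOAT);
  auto grad_sq_acc = Variable(root, {2}, DT_FLOAT);

  // Scalar hyperparameters; global_step is an int64 scalar.
  auto lr = Const(root, 0.01f);
  auto l1 = Const(root, 0.001f);
  auto l2 = Const(root, 0.001f);
  auto global_step = Const(root, static_cast<int64>(1));

  // A gradient with the same shape as var, hard-coded for illustration.
  auto grad = Const(root, {0.5f, -0.25f});

  // Build the update node, enabling the optional use_locking attribute.
  auto update = ApplyAdagradDA(root, var, grad_acc, grad_sq_acc, grad, lr, l1,
                               l2, global_step,
                               ApplyAdagradDA::UseLocking(true));

  // Initialize the variable and zero the accumulators before updating.
  auto init_var = Assign(root, var, Const(root, {1.0f, 2.0f}));
  auto init_ga = Assign(root, grad_acc, Const(root, {0.0f, 0.0f}));
  auto init_gsa = Assign(root, grad_sq_acc, Const(root, {0.0f, 0.0f}));

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({init_var, init_ga, init_gsa}, &outputs));
  TF_CHECK_OK(session.Run({update.out}, &outputs));
  // outputs[0] holds the op's output, which per the Returns section above
  // is the same as "var" after the update.
  return 0;
}
```

The fetch uses the op's public `out` attribute; the object's `operator ::tensorflow::Output()` conversion would work equally well.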