tensorflow::ops::ResourceSparseApplyAdagradDA
#include <training_ops.h>
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.
Summary
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
- lr: Learning rate. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- the created Operation
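A minimal construction sketch (not part of the original reference) follows. It assumes float resource-variable handles created elsewhere, e.g. with VarHandleOp, and uses illustrative shapes and hyperparameter values; the helper name BuildAdagradDAUpdate is hypothetical.

```cpp
// Hedged sketch: the constant values and the helper name BuildAdagradDAUpdate
// are illustrative assumptions, not part of this reference.
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

// Builds a sparse AdagradDA update for an existing float resource variable and
// its two accumulators (all three handles are assumed to come from VarHandleOp
// with compatible shapes).
void BuildAdagradDAUpdate(const Scope& scope, Input var, Input grad_accum,
                          Input grad_sq_accum) {
  // Each row of 'grad' updates the row of 'var' selected by the matching
  // entry of 'indices'.
  auto grad = Const(scope, {{0.1f, 0.2f}, {0.3f, 0.4f}});
  auto indices = Const(scope, {0, 2});

  // lr, l1, l2 and global_step must all be scalars; global_step is int64.
  auto lr = Const(scope, 0.01f);   // learning rate
  auto l1 = Const(scope, 0.0f);    // L1 regularization
  auto l2 = Const(scope, 0.0f);    // L2 regularization
  auto global_step = Const(scope, static_cast<int64>(1));

  auto update = ResourceSparseApplyAdagradDA(
      scope, var, grad_accum, grad_sq_accum, grad, indices, lr, l1, l2,
      global_step);

  // The op produces no output tensors; update.operation is the created
  // Operation and can be passed to ClientSession::Run as a run target.
  (void)update;
}
```

The variable and both accumulators would need to be initialized (for example via AssignVariableOp) before the update is run.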
Constructors and Destructors

ResourceSparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)

ResourceSparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ResourceSparseApplyAdagradDA::Attrs & attrs)
Public attributes

operation

Operation operation
Public functions

ResourceSparseApplyAdagradDA

ResourceSparseApplyAdagradDA(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input gradient_accumulator,
::tensorflow::Input gradient_squared_accumulator,
::tensorflow::Input grad,
::tensorflow::Input indices,
::tensorflow::Input lr,
::tensorflow::Input l1,
::tensorflow::Input l2,
::tensorflow::Input global_step
)

ResourceSparseApplyAdagradDA
ResourceSparseApplyAdagradDA(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input gradient_accumulator,
::tensorflow::Input gradient_squared_accumulator,
::tensorflow::Input grad,
::tensorflow::Input indices,
::tensorflow::Input lr,
::tensorflow::Input l1,
::tensorflow::Input l2,
::tensorflow::Input global_step,
const ResourceSparseApplyAdagradDA::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
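As a hedged illustration, reusing the hypothetical scope and update object from the construction sketch above, the conversion operator lets the op be passed wherever an Operation is expected, e.g. as a ClientSession run target:

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

// Hedged fragment: 'scope' and 'update' are assumed to come from the
// construction sketch earlier on this page.
void RunUpdate(const tensorflow::Scope& scope,
               const tensorflow::ops::ResourceSparseApplyAdagradDA& update) {
  // The conversion operator yields the underlying Operation.
  tensorflow::Operation op = update;

  // Run the op purely for its side effect on the resource variables; it
  // produces no output tensors to fetch.
  tensorflow::ClientSession session(scope);
  if (!session.Run({}, {}, {op}, nullptr).ok()) {
    // Handle the error as appropriate for the surrounding program.
  }
}
```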
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
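A short hedged example of passing the optional attribute via the Attrs overload; var, grad_accum, grad_sq_accum, grad, indices, lr, l1, l2 and global_step are the assumed inputs from the construction sketch above:

```cpp
// Hedged: all inputs below are the assumed values from the earlier sketch.
auto attrs = tensorflow::ops::ResourceSparseApplyAdagradDA::UseLocking(true);
auto locked_update = tensorflow::ops::ResourceSparseApplyAdagradDA(
    scope, var, grad_accum, grad_sq_accum, grad, indices, lr, l1, l2,
    global_step, attrs);
```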