tensorflow::ops::ResourceApplyAdagradDA
========================================
#include <training_ops.h>
Update '*var' according to the proximal adagrad scheme.
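This reference does not expand the update rule. As a hedged sketch, assuming the standard AdaGrad dual-averaging (AdaGradDA) formulation, with gradient accumulator `ḡ`, squared-gradient accumulator `s̄`, and step count `T` = global_step, the op computes approximately:

```latex
% Hedged sketch of the AdaGradDA update; the exact kernel may differ in
% details (e.g. the branch taken when l1 == 0).
\bar{g} \leftarrow \bar{g} + grad, \qquad
\bar{s} \leftarrow \bar{s} + grad^{2}, \qquad
var \leftarrow
  \frac{-\,lr \cdot \operatorname{sign}(\bar{g}) \cdot
        \max\bigl(|\bar{g}| - l1 \cdot T,\ 0\bigr)}
       {l2 \cdot T \cdot lr + \sqrt{\bar{s}}}
```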
Summary
-------

Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- lr: Scaling factor. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.
Optional attributes (see Attrs):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:

- the created Operation
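For orientation, here is a minimal, hedged C++ usage sketch. The variable setup, shapes, initial values, and include paths are illustrative assumptions; only the `ResourceApplyAdagradDA` call itself is taken from this reference.

```cpp
// Hedged usage sketch: wiring ResourceApplyAdagradDA into a small graph.
// Shapes, initial values, and include paths are assumptions, not part of
// this reference page.
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

using namespace tensorflow;

int main() {
  Scope scope = Scope::NewRootScope();

  // Resource variables: the parameter plus the two AdaGradDA accumulators.
  auto var    = ops::VarHandleOp(scope, DT_FLOAT, TensorShape({2}));
  auto g_acc  = ops::VarHandleOp(scope, DT_FLOAT, TensorShape({2}));
  auto gg_acc = ops::VarHandleOp(scope, DT_FLOAT, TensorShape({2}));

  auto init_var = ops::AssignVariableOp(scope, var,
                                        ops::Const(scope, {1.0f, 2.0f}));
  auto init_g   = ops::AssignVariableOp(scope, g_acc,
                                        ops::Const(scope, {0.0f, 0.0f}));
  auto init_gg  = ops::AssignVariableOp(scope, gg_acc,
                                        ops::Const(scope, {0.1f, 0.1f}));

  // The gradient plus scalar hyperparameters; global_step is an int64 scalar.
  auto grad        = ops::Const(scope, {0.5f, -0.5f});
  auto lr          = ops::Const(scope, 0.01f);
  auto l1          = ops::Const(scope, 0.001f);
  auto l2          = ops::Const(scope, 0.001f);
  auto global_step = ops::Const(scope, static_cast<int64>(1));

  // The op documented on this page: applies one AdaGradDA update to var.
  auto apply = ops::ResourceApplyAdagradDA(scope, var, g_acc, gg_acc, grad,
                                           lr, l1, l2, global_step);

  ClientSession session(scope);
  // Initialize the variables, then run the update once.
  TF_CHECK_OK(session.Run({}, {}, {init_var, init_g, init_gg}, nullptr));
  TF_CHECK_OK(session.Run({}, {}, {apply}, nullptr));
  return 0;
}
```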
| ### Constructors and Destructors ||
|---|---|
| `ResourceApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)` ||
| `ResourceApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ResourceApplyAdagradDA::Attrs & attrs)` ||
| ### Public attributes ||
|---|---|
| operation | Operation |
| ### Public functions ||
|---|---|
| `operator::tensorflow::Operation() const` | |
| ### Public static functions ||
|---|---|
| `UseLocking(bool x)` | Attrs |

| ### Structs ||
|---|---|
| tensorflow::ops::ResourceApplyAdagradDA::Attrs | Optional attribute setters for ResourceApplyAdagradDA. |
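The Attrs struct follows the C++ ops' usual optional-attribute pattern. A short hedged continuation of the sketch above (same assumed inputs):

```cpp
// Hedged continuation of the sketch above: the same update, but with
// lock-protected variable access enabled via the static UseLocking setter.
auto apply_locked = ops::ResourceApplyAdagradDA(
    scope, var, g_acc, gg_acc, grad, lr, l1, l2, global_step,
    ops::ResourceApplyAdagradDA::UseLocking(true));
```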