Training Ops

Summary

Classes

tensorflow::ops::ApplyAdadelta

Update '*var' according to the adadelta scheme.
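Following the op's detailed description, the Adadelta update can be sketched as (accum and accum_update are the accumulator slots; rho and epsilon are op inputs, and the kernel scales the final step by lr):

$$
\begin{aligned}
\text{accum} &\leftarrow \rho\,\text{accum} + (1-\rho)\,\text{grad}^2 \\
\text{update} &\leftarrow \frac{\sqrt{\text{accum\_update} + \epsilon}}{\sqrt{\text{accum} + \epsilon}}\,\text{grad} \\
\text{accum\_update} &\leftarrow \rho\,\text{accum\_update} + (1-\rho)\,\text{update}^2 \\
\text{var} &\leftarrow \text{var} - \text{lr}\cdot\text{update}
\end{aligned}
$$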

tensorflow::ops::ApplyAdagrad

Update '*var' according to the adagrad scheme.
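The adagrad rule this op applies, in the notation of its inputs:

$$
\begin{aligned}
\text{accum} &\leftarrow \text{accum} + \text{grad}^2 \\
\text{var} &\leftarrow \text{var} - \text{lr}\cdot\frac{\text{grad}}{\sqrt{\text{accum}}}
\end{aligned}
$$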

tensorflow::ops::ApplyAdagradDA

Update '*var' according to the proximal adagrad scheme.

tensorflow::ops::ApplyAdam

Update '*var' according to the Adam algorithm.
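The Adam step, as given in the op's detailed description ($t$ is the timestep implied by the beta1_power and beta2_power inputs):

$$
\begin{aligned}
\text{lr}_t &= \text{lr}\cdot\frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\
m_t &= \beta_1\,m_{t-1} + (1-\beta_1)\,g \\
v_t &= \beta_2\,v_{t-1} + (1-\beta_2)\,g^2 \\
\text{var} &\leftarrow \text{var} - \text{lr}_t\cdot\frac{m_t}{\sqrt{v_t}+\epsilon}
\end{aligned}
$$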

tensorflow::ops::ApplyAddSign

Update '*var' according to the AddSign update.
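The AddSign update (m is the moving-average slot; alpha and sign_decay are op inputs):

$$
\begin{aligned}
m_t &= \beta_1\,m_{t-1} + (1-\beta_1)\,g \\
\text{update} &= \bigl(\alpha + \text{sign\_decay}\cdot\operatorname{sign}(g)\cdot\operatorname{sign}(m_t)\bigr)\,g \\
\text{var} &\leftarrow \text{var} - \text{lr}\cdot\text{update}
\end{aligned}
$$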

tensorflow::ops::ApplyCenteredRMSProp

Update '*var' according to the centered RMSProp algorithm.
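Centered RMSProp additionally tracks the first moment mg and normalizes by an estimate of the variance rather than the raw second moment:

$$
\begin{aligned}
mg &\leftarrow \rho\,mg + (1-\rho)\,g \\
ms &\leftarrow \rho\,ms + (1-\rho)\,g^2 \\
\text{mom} &\leftarrow \text{momentum}\cdot\text{mom} + \text{lr}\cdot\frac{g}{\sqrt{ms - mg^2 + \epsilon}} \\
\text{var} &\leftarrow \text{var} - \text{mom}
\end{aligned}
$$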

tensorflow::ops::ApplyFtrl

Update '*var' according to the Ftrl-proximal scheme.
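A sketch of the Ftrl-proximal step in the notation of the op's inputs (accum and linear are the slots; l1, l2, and lr_power are scalar inputs, written $p = \text{lr\_power}$ below):

$$
\begin{aligned}
\text{accum}_{\text{new}} &= \text{accum} + g^2 \\
\text{linear} &\leftarrow \text{linear} + g - \frac{\text{accum}_{\text{new}}^{-p} - \text{accum}^{-p}}{\text{lr}}\cdot\text{var} \\
\text{quadratic} &= \frac{1}{\text{accum}_{\text{new}}^{\,p}\cdot\text{lr}} + 2\,l_2 \\
\text{var} &= \begin{cases}\dfrac{\operatorname{sign}(\text{linear})\,l_1 - \text{linear}}{\text{quadratic}} & \text{if } |\text{linear}| > l_1 \\[4pt] 0 & \text{otherwise}\end{cases} \\
\text{accum} &= \text{accum}_{\text{new}}
\end{aligned}
$$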

tensorflow::ops::ApplyFtrlV2

Update '*var' according to the Ftrl-proximal scheme with L2 shrinkage.

tensorflow::ops::ApplyGradientDescent

Update '*var' by subtracting 'alpha' * 'delta' from it.
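Since this is the simplest op in the family, a minimal end-to-end sketch of calling it from the C++ API follows; the shapes and constant values are illustrative, and the include paths assume the standard cc/ops build targets:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // var is a stateful ref-variable; initialize it to [1, 2, 3].
  auto var = Variable(root, {3}, DT_FLOAT);
  auto init = Assign(root, var, Const(root, {1.f, 2.f, 3.f}));

  // One step of var <- var - alpha * delta.
  auto alpha = Const(root, 0.1f);
  auto delta = Const(root, {0.5f, 0.5f, 0.5f});
  auto step = ApplyGradientDescent(root, var, alpha, delta);

  ClientSession session(root);
  std::vector<Tensor> out;
  TF_CHECK_OK(session.Run({init}, &out));  // run the initializer
  TF_CHECK_OK(session.Run({step}, &out));  // apply the update
  // out[0] now holds the updated var: [0.95, 1.95, 2.95].
  return 0;
}
```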

tensorflow::ops::ApplyMomentum

Update '*var' according to the momentum scheme.
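The plain momentum update is shown below; with the use_nesterov attribute set, the op instead applies var <- var - lr * grad - lr * momentum * accum:

$$
\begin{aligned}
\text{accum} &\leftarrow \text{momentum}\cdot\text{accum} + g \\
\text{var} &\leftarrow \text{var} - \text{lr}\cdot\text{accum}
\end{aligned}
$$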

tensorflow::ops::ApplyPowerSign

Update '*var' according to the PowerSign update.
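PowerSign differs from AddSign only in how the sign agreement is folded in, exponentially rather than additively (logbase and sign_decay are op inputs):

$$
\begin{aligned}
m_t &= \beta_1\,m_{t-1} + (1-\beta_1)\,g \\
\text{update} &= \exp\bigl(\text{logbase}\cdot\text{sign\_decay}\cdot\operatorname{sign}(g)\cdot\operatorname{sign}(m_t)\bigr)\,g \\
\text{var} &\leftarrow \text{var} - \text{lr}\cdot\text{update}
\end{aligned}
$$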

tensorflow::ops::ApplyProximalAdagrad

Update '*var' and '*accum' according to FOBOS with the Adagrad learning rate.
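The FOBOS step with the Adagrad learning rate, per the op description:

$$
\begin{aligned}
\text{accum} &\leftarrow \text{accum} + g^2 \\
\text{prox}_v &= \text{var} - \text{lr}\cdot\frac{g}{\sqrt{\text{accum}}} \\
\text{var} &= \frac{\operatorname{sign}(\text{prox}_v)}{1 + \text{lr}\,l_2}\cdot\max\{|\text{prox}_v| - \text{lr}\,l_1,\ 0\}
\end{aligned}
$$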

tensorflow::ops::ApplyProximalGradientDescent

Update '*var' using the FOBOS algorithm with a fixed learning rate.
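Here alpha is the fixed scaling factor and delta the gradient; the proximal step is the same shrinkage as above:

$$
\begin{aligned}
\text{prox}_v &= \text{var} - \alpha\,\delta \\
\text{var} &= \frac{\operatorname{sign}(\text{prox}_v)}{1 + \alpha\,l_2}\cdot\max\{|\text{prox}_v| - \alpha\,l_1,\ 0\}
\end{aligned}
$$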

tensorflow::ops::ApplyRMSProp

Update '*var' according to the RMSProp algorithm.
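The RMSProp update, tracked by the ms (mean square) and mom slots:

$$
\begin{aligned}
ms &\leftarrow \rho\,ms + (1-\rho)\,g^2 \\
\text{mom} &\leftarrow \text{momentum}\cdot\text{mom} + \text{lr}\cdot\frac{g}{\sqrt{ms + \epsilon}} \\
\text{var} &\leftarrow \text{var} - \text{mom}
\end{aligned}
$$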

tensorflow::ops::ResourceApplyAdadelta

Update '*var' according to the adadelta scheme.

tensorflow::ops::ResourceApplyAdagrad

Update '*var' according to the adagrad scheme.

tensorflow::ops::ResourceApplyAdagradDA

Update '*var' according to the proximal adagrad scheme.

tensorflow::ops::ResourceApplyAdam

Update '*var' according to the Adam algorithm.

tensorflow::ops::ResourceApplyAdamWithAmsgrad

Update '*var' according to the Adam algorithm with the AMSGrad extension.

tensorflow::ops::ResourceApplyAddSign

Update '*var' according to the AddSign update.

tensorflow::ops::ResourceApplyCenteredRMSProp

Update '*var' according to the centered RMSProp algorithm.

tensorflow::ops::ResourceApplyFtrl

Update '*var' according to the Ftrl-proximal scheme.

tensorflow::ops::ResourceApplyFtrlV2

Update '*var' according to the Ftrl-proximal scheme with L2 shrinkage.

tensorflow::ops::ResourceApplyGradientDescent

Update '*var' by subtracting 'alpha' * 'delta' from it.

tensorflow::ops::ResourceApplyKerasMomentum

Update '*var' according to the momentum scheme.

tensorflow::ops::ResourceApplyMomentum

Update '*var' according to the momentum scheme.

tensorflow::ops::ResourceApplyPowerSign

Update '*var' according to the PowerSign update.

tensorflow::ops::ResourceApplyProximalAdagrad

Update '*var' and '*accum' according to FOBOS with the Adagrad learning rate.

tensorflow::ops::ResourceApplyProximalGradientDescent

Update '*var' using the FOBOS algorithm with a fixed learning rate.

tensorflow::ops::ResourceApplyRMSProp

Update '*var' according to the RMSProp algorithm.

tensorflow::ops::ResourceSparseApplyAdadelta

Update relevant entries in '*var' according to the adadelta scheme.

tensorflow::ops::ResourceSparseApplyAdagrad

Update relevant entries in '*var' and '*accum' according to the adagrad scheme.
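The Sparse* and ResourceSparse* variants apply the same rules as their dense counterparts, but only to the rows selected by the indices input. For the adagrad case, that is:

$$
\begin{aligned}
\text{accum}_i &\leftarrow \text{accum}_i + g_i^2 \\
\text{var}_i &\leftarrow \text{var}_i - \text{lr}\cdot\frac{g_i}{\sqrt{\text{accum}_i}}
\end{aligned}
\qquad \text{for each row } i \text{ in indices}
$$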

tensorflow::ops::ResourceSparseApplyAdagradDA

Update entries in '*var' and '*accum' according to the proximal adagrad scheme.

tensorflow::ops::ResourceSparseApplyCenteredRMSProp

Update '*var' according to the centered RMSProp algorithm.

tensorflow::ops::ResourceSparseApplyFtrl

Update relevant entries in '*var' according to the Ftrl-proximal scheme.

tensorflow::ops::ResourceSparseApplyFtrlV2

Update relevant entries in '*var' according to the Ftrl-proximal scheme with L2 shrinkage.

tensorflow::ops::ResourceSparseApplyKerasMomentum

Update relevant entries in '*var' and '*accum' according to the momentum scheme.

tensorflow::ops::ResourceSparseApplyMomentum

Update relevant entries in '*var' and '*accum' according to the momentum scheme.

tensorflow::ops::ResourceSparseApplyProximalAdagrad

Sparse update of entries in '*var' and '*accum' according to the FOBOS algorithm.

tensorflow::ops::ResourceSparseApplyProximalGradientDescent

Sparse update of '*var' using the FOBOS algorithm with a fixed learning rate.

tensorflow::ops::ResourceSparseApplyRMSProp

Update '*var' according to the RMSProp algorithm.

tensorflow::ops::SparseApplyAdadelta

Update relevant entries in '*var' according to the adadelta scheme.

tensorflow::ops::SparseApplyAdagrad

Update relevant entries in '*var' and '*accum' according to the adagrad scheme.

tensorflow::ops::SparseApplyAdagradDA

Update entries in '*var' and '*accum' according to the proximal adagrad scheme.

tensorflow::ops::SparseApplyCenteredRMSProp

Update '*var' according to the centered RMSProp algorithm.

tensorflow::ops::SparseApplyFtrl

Update relevant entries in '*var' according to the Ftrl-proximal scheme.

tensorflow::ops::SparseApplyFtrlV2

Update relevant entries in '*var' according to the Ftrl-proximal scheme with L2 shrinkage.

tensorflow::ops::SparseApplyMomentum

Update relevant entries in '*var' and '*accum' according to the momentum scheme.

tensorflow::ops::SparseApplyProximalAdagrad

Sparse update of entries in '*var' and '*accum' according to the FOBOS algorithm.

tensorflow::ops::SparseApplyProximalGradientDescent

Sparse update of '*var' using the FOBOS algorithm with a fixed learning rate.

tensorflow::ops::SparseApplyRMSProp

Update '*var' according to the RMSProp algorithm.