Module: tf.compat.v1.raw_ops

Public API for tf.raw_ops namespace.
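
Ops in this namespace correspond one-to-one to the underlying TensorFlow operations and can only be called with keyword arguments. As a minimal sketch of the call convention (the tensor values are illustrative):

    import tensorflow as tf

    x = tf.constant([1.0, -2.0, 3.0])

    # Raw ops accept keyword arguments only.
    y = tf.raw_ops.Abs(x=x)          # element-wise absolute value
    z = tf.raw_ops.AddV2(x=x, y=y)   # element-wise x + y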

Functions

Abort(...): Raise an exception to abort the process when called.

Abs(...): Computes the absolute value of a tensor.

AccumulateNV2(...): Returns the element-wise sum of a list of tensors.

AccumulatorApplyGradient(...): Applies a gradient to a given accumulator.

AccumulatorNumAccumulated(...): Returns the number of gradients aggregated in the given accumulators.

AccumulatorSetGlobalStep(...): Updates the accumulator with a new value for global_step.

AccumulatorTakeGradient(...): Extracts the average gradient in the given ConditionalAccumulator.

Acos(...): Computes acos of x element-wise.

Acosh(...): Computes inverse hyperbolic cosine of x element-wise.

Add(...): Returns x + y element-wise.

AddManySparseToTensorsMap(...): Add an N-minibatch SparseTensor to a SparseTensorsMap, return N handles.

AddN(...): Add all input tensors element-wise.

AddSparseToTensorsMap(...): Add a SparseTensor to a SparseTensorsMap, return its handle.

AddV2(...): Returns x + y element-wise.

AdjustContrast(...): Deprecated. Disallowed in GraphDef version >= 2.

AdjustContrastv2(...): Adjust the contrast of one or more images.

AdjustHue(...): Adjust the hue of one or more images.

AdjustSaturation(...): Adjust the saturation of one or more images.

All(...): Computes the "logical and" of elements across dimensions of a tensor.

AllCandidateSampler(...): Generates labels for candidate sampling with a learned unigram distribution.

AllToAll(...): An Op to exchange data across TPU replicas.

Angle(...): Returns the argument of a complex number.

AnonymousIterator(...): A container for an iterator resource.

AnonymousIteratorV2(...): A container for an iterator resource.

AnonymousMemoryCache(...)

AnonymousMultiDeviceIterator(...): A container for a multi-device iterator resource.

AnonymousRandomSeedGenerator(...)

AnonymousSeedGenerator(...)

Any(...): Computes the "logical or" of elements across dimensions of a tensor.

ApplyAdaMax(...): Update '*var' according to the AdaMax algorithm.

ApplyAdadelta(...): Update '*var' according to the adadelta scheme.

ApplyAdagrad(...): Update '*var' according to the adagrad scheme.

ApplyAdagradDA(...): Update '*var' according to the proximal adagrad scheme.

ApplyAdagradV2(...): Update '*var' according to the adagrad scheme.

ApplyAdam(...): Update '*var' according to the Adam algorithm.

ApplyAddSign(...): Update '*var' according to the AddSign update.

ApplyCenteredRMSProp(...): Update '*var' according to the centered RMSProp algorithm.

ApplyFtrl(...): Update '*var' according to the Ftrl-proximal scheme.

ApplyFtrlV2(...): Update '*var' according to the Ftrl-proximal scheme.

ApplyGradientDescent(...): Update '*var' by subtracting 'alpha' * 'delta' from it (a usage sketch follows below).
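
ApplyGradientDescent performs the in-place update var <- var - alpha * delta on a TF1-style reference variable, so it does not run under eager execution. The sketch below instead uses the resource variant, ResourceApplyGradientDescent (part of the same namespace, later in the full alphabetical listing), which applies the same update to a TF2 resource variable; the variable and values are illustrative, not from this page.

    import tensorflow as tf

    var = tf.Variable([1.0, 2.0])

    # In-place update: var <- var - alpha * delta
    tf.raw_ops.ResourceApplyGradientDescent(
        var=var.handle,                 # resource handle of the variable
        alpha=tf.constant(0.1),         # scalar learning rate
        delta=tf.constant([0.5, 0.5]))  # gradient to apply

    print(var.numpy())  # [0.95 1.95]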