Performs a quantized add of quantized Tensor `lhs` and quantized Tensor `rhs` to produce a quantized `output`.
`UniformQuantizedAdd` follows NumPy broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, each pair of dimensions must either be equal or one of them must be 1.
`lhs` and `rhs` must be quantized Tensors, where the data values are quantized using the formula:
quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val)
If `lhs` and `output` are both per-axis quantized, their quantization axes must match. Likewise, if `rhs` and `output` are both per-axis quantized, their quantization axes must match. Here "match" means the axes refer to the same dimension once broadcasting is taken into account: for each operand (`lhs` or `rhs`) with `operand.quantization_axis >= 0` and `output.quantization_axis >= 0`, `operand.dims - operand.quantization_axis` must equal `output.dims - output.quantization_axis`.
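As a minimal illustration of the formula above (not part of this API; the class and helper names, and the round-to-nearest step, are assumptions), the following sketch shows how a single float value maps to its quantized representation and back:

```java
public final class QuantizationFormulaSketch {
  // quantized_data = clip(original_data / scale + zero_point, q_min, q_max)
  // Rounding to the nearest integer is assumed here; the op's exact rounding
  // behavior is not spelled out on this page.
  static long quantize(float originalData, float scale, int zeroPoint,
                       long quantizationMinVal, long quantizationMaxVal) {
    long q = Math.round(originalData / scale) + zeroPoint;
    return Math.min(Math.max(q, quantizationMinVal), quantizationMaxVal);
  }

  // Approximate inverse: original_data ≈ (quantized_data - zero_point) * scale.
  static float dequantize(long quantizedData, float scale, int zeroPoint) {
    return (quantizedData - zeroPoint) * scale;
  }

  public static void main(String[] args) {
    float scale = 0.5f;
    int zeroPoint = 3;
    long q = quantize(2.0f, scale, zeroPoint, -128L, 127L);              // 2.0 / 0.5 + 3 = 7
    System.out.println(q + " -> " + dequantize(q, scale, zeroPoint));    // 7 -> 2.0
  }
}
```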
Nested Classes

| Modifier | Class | Description |
|---|---|---|
| class | `UniformQuantizedAdd.Options` | Optional attributes for `UniformQuantizedAdd` |
Public Methods

| Return type | Method and description |
|---|---|
| `Output<T>` | `asOutput()`: Returns the symbolic handle of a tensor. |
| `static <T> UniformQuantizedAdd<T>` | `create(Scope scope, Operand<T> lhs, Operand<T> rhs, Operand<Float> lhsScales, Operand<Integer> lhsZeroPoints, Operand<Float> rhsScales, Operand<Integer> rhsZeroPoints, Operand<Float> outputScales, Operand<Integer> outputZeroPoints, Long lhsQuantizationMinVal, Long lhsQuantizationMaxVal, Long rhsQuantizationMinVal, Long rhsQuantizationMaxVal, Long outputQuantizationMinVal, Long outputQuantizationMaxVal, Options... options)`: Factory method to create a class wrapping a new UniformQuantizedAdd operation. |
| `static UniformQuantizedAdd.Options` | `lhsQuantizationAxis(Long lhsQuantizationAxis)` |
| `Output<T>` | `output()`: The output quantized tensor. |
| `static UniformQuantizedAdd.Options` | `outputQuantizationAxis(Long outputQuantizationAxis)` |
| `static UniformQuantizedAdd.Options` | `rhsQuantizationAxis(Long rhsQuantizationAxis)` |
Public Methods
public Output<T> asOutput ()
Returns the symbolic handle of a tensor.
Inputs to TensorFlow operations are outputs of another TensorFlow operation. This method is used to obtain a symbolic handle that represents the computation of the input.
public static UniformQuantizedAdd<T> create (Scope scope, Operand<T> lhs, Operand<T> rhs, Operand<Float> lhsScales, Operand<Integer> lhsZeroPoints, Operand<Float> rhsScales, Operand<Integer> rhsZeroPoints, Operand<Float> outputScales, Operand<Integer> outputZeroPoints, Long lhsQuantizationMinVal, Long lhsQuantizationMaxVal, Long rhsQuantizationMinVal, Long rhsQuantizationMaxVal, Long outputQuantizationMinVal, Long outputQuantizationMaxVal, Options... options)
Factory method to create a class wrapping a new UniformQuantizedAdd operation.
Parameters

| Parameter | Description |
|---|---|
| scope | current scope |
| lhs | Must be a quantized tensor. |
| rhs | Must be a quantized tensor. |
| lhsScales | The float value(s) used as scale factors when quantizing the original data that `lhs` represents. |
| lhsZeroPoints | The int32 value(s) used as zero points when quantizing the original data that `lhs` represents. Must have the same shape as `lhs_scales`. |
| rhsScales | The float value(s) used as scale factors when quantizing the original data that `rhs` represents. |
| rhsZeroPoints | The int32 value(s) used as zero points when quantizing the original data that `rhs` represents. Must have the same shape as `rhs_scales`. |
| outputScales | The float value(s) to use as scale factors when quantizing the original data that `output` represents. |
| outputZeroPoints | The int32 value(s) to use as zero points when quantizing the original data that `output` represents. Must have the same shape as `output_scales`. |
| lhsQuantizationMinVal | The min value of the quantized data stored in `lhs`. For example, if `Tin` is `qint8`, this must be set to -127 if narrow-range quantized or -128 if not. |
| lhsQuantizationMaxVal | The max value of the quantized data stored in `lhs`. For example, if `Tin` is `qint8`, this must be set to 127. |
| rhsQuantizationMinVal | The min value of the quantized data stored in `rhs`. For example, if `Tin` is `qint8`, this must be set to -127 if narrow-range quantized or -128 if not. |
| rhsQuantizationMaxVal | The max value of the quantized data stored in `rhs`. For example, if `Tin` is `qint8`, this must be set to 127. |
| outputQuantizationMinVal | The min value of the quantized data stored in `output`. For example, if `Tout` is `qint8`, this must be set to -127 if narrow-range quantized or -128 if not. |
| outputQuantizationMaxVal | The max value of the quantized data stored in `output`. For example, if `Tout` is `qint8`, this must be set to 127. |
| options | carries optional attribute values |
Returns
- a new instance of UniformQuantizedAdd
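A hedged usage sketch follows (the helper and class names are illustrative assumptions, the import paths are assumed from the usual layout of this Java API, and the quantized operands with their scale/zero-point operands are presumed to have been built elsewhere in the graph). It shows a per-tensor, non-narrow-range qint8 call to the factory method:

```java
import org.tensorflow.Operand;
import org.tensorflow.Output;
import org.tensorflow.op.Scope;
import org.tensorflow.op.core.UniformQuantizedAdd;

public final class UniformQuantizedAddExample {
  // Illustrative helper, not part of the API: wires up a quantized add where
  // lhs, rhs, and output all use per-tensor (axis = -1) qint8 quantization.
  static <T> Output<T> quantizedAdd(
      Scope scope,
      Operand<T> lhs, Operand<T> rhs,
      Operand<Float> lhsScales, Operand<Integer> lhsZeroPoints,
      Operand<Float> rhsScales, Operand<Integer> rhsZeroPoints,
      Operand<Float> outputScales, Operand<Integer> outputZeroPoints) {
    UniformQuantizedAdd<T> add = UniformQuantizedAdd.create(
        scope, lhs, rhs,
        lhsScales, lhsZeroPoints,
        rhsScales, rhsZeroPoints,
        outputScales, outputZeroPoints,
        -128L, 127L,   // lhs quantization min/max (non-narrow-range qint8)
        -128L, 127L,   // rhs quantization min/max
        -128L, 127L,   // output quantization min/max
        UniformQuantizedAdd.lhsQuantizationAxis(-1L),
        UniformQuantizedAdd.rhsQuantizationAxis(-1L),
        UniformQuantizedAdd.outputQuantizationAxis(-1L));
    return add.output();
  }
}
```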
public static UniformQuantizedAdd.Options lhsQuantizationAxis (Long lhsQuantizationAxis)
Parameters

| Parameter | Description |
|---|---|
| lhsQuantizationAxis | Indicates the dimension index of the tensor where per-axis quantization is applied for the slices along that dimension. If set to -1 (default), this indicates per-tensor quantization. For the `lhs`, only per-tensor quantization is supported, so this must be set to -1. Other values will raise an error at OpKernel construction. |
public static UniformQuantizedAdd.Options outputQuantizationAxis (Long outputQuantizationAxis)
Parameters

| Parameter | Description |
|---|---|
| outputQuantizationAxis | Indicates the dimension index of the tensor where per-axis quantization is applied for the slices along that dimension. If set to -1 (default), this indicates per-tensor quantization. For the `output`, only per-tensor quantization or per-channel quantization along `output_feature_dimension` is supported, so this must be set to -1 or `dimension_numbers.output_feature_dimension`. Other values will raise an error at OpKernel construction. |
public static UniformQuantizedAdd.Options rhsQuantizationAxis (Long rhsQuantizationAxis)
Parameters

| Parameter | Description |
|---|---|
| rhsQuantizationAxis | Indicates the dimension index of the tensor where per-axis quantization is applied for the slices along that dimension. If set to -1 (default), this indicates per-tensor quantization. For the `rhs`, only per-tensor quantization or per-channel quantization along `kernel_output_feature_dimension` is supported, so this must be set to -1 or `dimension_numbers.kernel_output_feature_dimension`. Other values will raise an error at OpKernel construction. |
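In the per-axis case, the scale and zero-point inputs carry one value per slice along the chosen axis. The sketch below illustrates only that shape relationship; the class name, shapes, and axis value are assumptions, and whether a particular axis is accepted is governed by the restrictions quoted above.

```java
import org.tensorflow.op.core.UniformQuantizedAdd;

public final class PerAxisOptionsSketch {
  public static void main(String[] args) {
    // Illustrative only: if `rhs` has shape [2, 3] and is quantized per-axis along
    // dimension 1, then `rhsScales` and `rhsZeroPoints` each hold 3 values (shape [3]).
    // The matching-axis rule above then requires the output quantization axis to
    // line up with dimension 1 of the (broadcast) output shape.
    UniformQuantizedAdd.Options rhsAxis = UniformQuantizedAdd.rhsQuantizationAxis(1L);
    UniformQuantizedAdd.Options outputAxis = UniformQuantizedAdd.outputQuantizationAxis(1L);
    // These Options values would be passed as the trailing varargs of create(...).
  }
}
```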