Fake-quantize the 'inputs' tensor, of type float, to an 'outputs' tensor of the same type.
tf.raw_ops.FakeQuantWithMinMaxArgs(
inputs, min=-6, max=6, num_bits=8, narrow_range=False, name=None
)
Attributes
[min; max] define the clamping range for the inputs data. inputs values are quantized into the quantization range ([0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in the [min; max] interval. num_bits is the bitwidth of the quantization; it must be between 2 and 16, inclusive.
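As a rough, illustrative sketch of this clamp / quantize / de-quantize behavior (eager TF 2.x assumed; exact outputs depend on the adjusted range and step size):

import tensorflow as tf

# Example inputs: some values fall outside the default [-6, 6] range.
x = tf.constant([-8.0, -1.05, 0.0, 2.5, 7.3], dtype=tf.float32)

# Clamp to [min, max], quantize to the available levels, then de-quantize.
y = tf.raw_ops.FakeQuantWithMinMaxArgs(
    inputs=x, min=-6.0, max=6.0, num_bits=8, narrow_range=False)

print(y.numpy())  # still float32; -8.0 and 7.3 are clamped to roughly -6 and 6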
Before quantization, min and max values are adjusted with the following
logic.
It is suggested to have min <= 0 <= max. If 0 is not in the range of values,
the behavior can be unexpected:
- If 0 < min < max: min_adj = 0 and max_adj = max - min.
- If min < max < 0: min_adj = min - max and max_adj = 0.
- If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.
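For illustration only, here is a small Python sketch of the three cases above. The adjust_range helper is hypothetical (not part of the TensorFlow API), and its rounding may differ from the op's kernel in edge cases:

def adjust_range(min_val, max_val, num_bits=8):
    """Return (min_adj, max_adj) following the three cases above."""
    if 0 < min_val < max_val:
        # Whole range is positive: shift it so that 0 becomes representable.
        return 0.0, max_val - min_val
    if min_val < max_val < 0:
        # Whole range is negative: shift it so that 0 becomes representable.
        return min_val - max_val, 0.0
    # 0 already lies in [min, max]: snap min to a multiple of the step size.
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    min_adj = scale * round(min_val / scale)
    return min_adj, max_val + min_adj - min_val

print(adjust_range(2.0, 10.0))  # (0.0, 8.0): range shifted to include 0
print(adjust_range(-6.0, 6.0))  # roughly (-6.02, 5.98): min snapped to a step boundary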
Quantization is called fake since the output is still in floating point.
Returns
A Tensor of type float32.