tf.raw_ops.FakeQuantWithMinMaxArgs
Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.
tf.raw_ops.FakeQuantWithMinMaxArgs(
inputs, min=-6, max=6, num_bits=8, narrow_range=False, name=None
)
Attributes

- `[min; max]` define the clamping range for the `inputs` data.
- `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in the `[min; max]` interval, as sketched after this list.
- `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.
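A minimal NumPy sketch of that quantize/de-quantize round trip, assuming `narrow_range=False` and ignoring the `min`/`max` adjustment described below; `fake_quant` here is an illustrative helper, not the library kernel:

```python
import numpy as np

def fake_quant(x, min_val=-6.0, max_val=6.0, num_bits=8):
    # Map [min_val, max_val] onto the integer range [0, 2**num_bits - 1].
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    clamped = np.clip(x, min_val, max_val)         # clamp to [min, max]
    q = np.round((clamped - min_val) / scale)      # quantize to the nearest level
    return q * scale + min_val                     # de-quantize back to float

print(fake_quant(np.array([-10.0, 0.1, 2.7, 8.0])))
```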
Before quantization, `min` and `max` values are adjusted with the following
logic. It is suggested to have `min <= 0 <= max`. If `0` is not in the range of
values, the behavior can be unexpected (see the sketch after this list):

- If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.
- If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.
- If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1)`, `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.
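A minimal Python sketch of this adjustment (not the library kernel), assuming `narrow_range=False` so the denominator is `2^num_bits - 1`; the exact rounding used by the op may differ:

```python
def adjust_range(min_val, max_val, num_bits=8):
    # Range entirely positive: shift so that 0 becomes exactly representable.
    if 0 < min_val < max_val:
        return 0.0, max_val - min_val
    # Range entirely negative: shift so that 0 becomes exactly representable.
    if min_val < max_val < 0:
        return min_val - max_val, 0.0
    # 0 lies inside [min, max]: nudge min onto a multiple of the scale so that
    # 0 lands exactly on a quantization level.
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    min_adj = scale * round(min_val / scale)
    return min_adj, max_val + min_adj - min_val

# With the op's defaults (min=-6, max=6, num_bits=8) the whole window
# shifts slightly so that 0 falls exactly on a quantization level.
print(adjust_range(-6.0, 6.0))
```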
Quantization is called fake since the output is still in floating point.
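For example, a short usage sketch (the input values here are illustrative):

```python
import tensorflow as tf

# Fake-quantize a few values with the default clamping range [-6, 6],
# 8-bit quantization, and narrow_range=False.
x = tf.constant([-10.0, -1.3, 0.0, 2.7, 8.0], dtype=tf.float32)
y = tf.raw_ops.FakeQuantWithMinMaxArgs(
    inputs=x, min=-6.0, max=6.0, num_bits=8, narrow_range=False
)
# Out-of-range values are clamped to [-6, 6]; in-range values snap to the
# nearest quantization level but stay float32.
print(y)
```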
| Args | |
|---|---|
| `inputs` | A `Tensor` of type `float32`. |
| `min` | An optional `float`. Defaults to `-6`. |
| `max` | An optional `float`. Defaults to `6`. |
| `num_bits` | An optional `int`. Defaults to `8`. |
| `narrow_range` | An optional `bool`. Defaults to `False`. |
| `name` | A name for the operation (optional). |

| Returns | |
|---|---|
| A `Tensor` of type `float32`. | |