tf.contrib.quantize.experimental_create_training_graph
Rewrites a training input_graph in place for simulated quantization.
tf.contrib.quantize.experimental_create_training_graph(
input_graph=None, weight_bits=8, activation_bits=8, symmetric=False,
quant_delay=0, freeze_bn_delay=None, scope=None
)
This function must be invoked prior to insertion of gradient ops in a graph
as quantization should be modeled in both forward and backward passes.
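A minimal sketch of the required call order in TF 1.x, assuming a hypothetical model and loss built by your own code (build the forward pass, rewrite the graph, then add the optimizer):

import tensorflow as tf  # TensorFlow 1.x with tf.contrib available

# Build the forward pass and the loss first (hypothetical model code).
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(tf.layers.flatten(inputs), 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Rewrite the (default) graph for simulated quantization *before*
# any gradient ops exist.
tf.contrib.quantize.experimental_create_training_graph(
    input_graph=tf.get_default_graph())

# Only now create the optimizer, which inserts the gradient ops.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)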
Variables added by the rewrite get added to the global variables collection.
This function has additional experimental options not (yet) available to
create_training_graph. The resulting behavior may be undefined.
The graph has fake quantization ops inserted to simulate the error
introduced by quantization. Since the graph is transformed in place,
the expected behavior of previously held references to nodes and tensors may
change.
The default value of quant_delay is suitable for finetuning an already trained
floating point model (recommended).
If one wants to train a quantized model from scratch, quant_delay should be
set to the number of steps it takes the floating point model to converge.
Quantization will be activated at this point and effectively finetune the
model. If quant_delay is not provided when training from scratch, training can
often fail.
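For training from scratch, a sketch with purely illustrative step counts (the actual values are assumptions and depend on how long your float model takes to converge):

# Step counts below are illustrative assumptions, not recommendations.
tf.contrib.quantize.experimental_create_training_graph(
    input_graph=tf.get_default_graph(),
    weight_bits=8,
    activation_bits=8,
    symmetric=True,
    quant_delay=200000,       # roughly when the float model converges
    freeze_bn_delay=250000)   # > quant_delay, near the end of training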
Args

input_graph: The tf.Graph to be transformed; if None, defaults to the default graph.
weight_bits: Number of bits to use for quantizing weights.
activation_bits: Number of bits to use for quantizing activations.
symmetric: If true, use symmetric quantization limits instead of training the minimum and maximum of each quantization range separately.
quant_delay: Number of steps after which weights and activations are quantized during training.
freeze_bn_delay: Number of steps after which moving mean and variance are frozen and used instead of batch statistics during training. freeze_bn_delay should be greater than quant_delay and should correspond to when training has almost converged.
scope: The scope to be transformed. If it's not None, only the ops which are in this scope will be transformed (see the sketch below).
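As an illustration of the scope argument, assuming a model whose quantizable layers live under a hypothetical name scope 'tower_0/mobilenet':

# Only ops under the (hypothetical) scope 'tower_0/mobilenet' are rewritten;
# the rest of the graph is left untouched.
tf.contrib.quantize.experimental_create_training_graph(
    input_graph=tf.get_default_graph(),
    scope='tower_0/mobilenet')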
Raises

ValueError: If elements contains an element that isn't a tf.Tensor or tf.Operation.