Quantize a tf.keras model that has been annotated for quantization.
tfmot.quantization.keras.quantize_apply(
    model,
    scheme=default_8bit_quantize_scheme.Default8BitQuantizeScheme(),
    quantized_layer_name_prefix='quant_'
)
Quantization constructs a model which emulates quantization during training. This allows the model to learn parameters that are robust to quantization loss, and also models the accuracy of the quantized model.
For more information, see https://www.tensorflow.org/model_optimization/guide/quantization/training
This function takes a tf.keras model in which the desired layers for quantization have already been annotated. See quantize_annotate_model and quantize_annotate_layer.
Example:
import tensorflow_model_optimization as tfmot
from tensorflow import keras
from tensorflow.keras import layers

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_apply = tfmot.quantization.keras.quantize_apply

model = keras.Sequential([
    layers.Dense(10, activation='relu', input_shape=(100,)),
    quantize_annotate_layer(layers.Dense(2, activation='sigmoid'))
])

# Only the second Dense layer is quantized.
quantized_model = quantize_apply(model)
Note that this function removes the optimizer from the original model, so the returned model must be compiled again before training. The returned model copies the weights from the original model, so while it preserves the original weights, training it does not modify the weights of the original model.
| Args | |
|---|---|
| model | A tf.keras Sequential or Functional model which has been annotated with quantize_annotate. It can have pre-trained weights. |
| scheme | A QuantizeScheme which specifies the transformer and quantization registry. The default is Default8BitQuantizeScheme(). |
| quantized_layer_name_prefix | A name prefix for quantized layers. The default is 'quant_'. |
| Returns | |
|---|---|
| A new tf.keras model in which the annotated layers have been prepared for quantization. |