tfmot.quantization.keras.quantize_model
Quantize a tf.keras model with the default quantization implementation.
tfmot.quantization.keras.quantize_model(
    to_quantize, quantized_layer_name_prefix='quant_'
)
Used in the notebooks:
- Quantization aware training comprehensive guide (https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide)
- Cluster preserving quantization aware training (CQAT) Keras example (https://www.tensorflow.org/model_optimization/guide/combine/cqat_example)
- Sparsity and cluster preserving quantization aware training (PCQAT) Keras example (https://www.tensorflow.org/model_optimization/guide/combine/pcqat_example)
- Pruning preserving quantization aware training (PQAT) Keras example (https://www.tensorflow.org/model_optimization/guide/combine/pqat_example)
Quantization constructs a model that emulates quantization during training.
This allows the model to learn parameters that are robust to quantization
loss, and also models the accuracy of the eventual quantized model.
For more information, see
https://www.tensorflow.org/model_optimization/guide/quantization/training
Quantize a model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# Quantize a sequential model
model = quantize_model(
    keras.Sequential([
        layers.Dense(10, activation='relu', input_shape=(100,)),
        layers.Dense(2, activation='sigmoid')
    ]))

# Quantize a functional model ('in' is a reserved word, so use 'inputs')
inputs = tf.keras.Input((3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

quantized_model = quantize_model(model)
Note that this function removes the optimizer from the original model.
The returned model copies over weights from the original model. So while
it preserves the original weights, training it will not modify the weights
of the original model.
Args:

to_quantize: tf.keras model to be quantized. It can have pre-trained
weights.

quantized_layer_name_prefix: Name prefix for the quantized layers. The
default is `quant_`.

Returns:

A new tf.keras model prepared for quantization.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-05-26 UTC.