tfmot.quantization.keras.default_8bit.Default8BitQuantizeLayoutTransform
Default model transformations.
Inherits From: QuantizeLayoutTransform
Methods
apply
View source
apply(
model, layer_quantize_map
)
Implement default 8-bit transforms.
Currently this applies the following transforms:
1. Pull activations into the preceding layers and fuse the supported activations.
2. Modify the quantization ranges of layers feeding into a Concat.
3. Fuse Conv2D/DepthwiseConv2D + BatchNormalization into a single layer.
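Transform 3 works because a BatchNormalization layer applies an affine, per-channel rescaling at inference time, which can be folded into the kernel and bias of the preceding convolution. The sketch below illustrates the arithmetic with NumPy; it is a simplified illustration (the helper `fold_bn_into_conv` is hypothetical), not the library's implementation.

```python
import numpy as np

def fold_bn_into_conv(kernel, bias, gamma, beta, mean, var, eps=1e-3):
    """Return (folded_kernel, folded_bias) equivalent to Conv + BN.

    BN computes (y - mean) * gamma / sqrt(var + eps) + beta, which is an
    affine per-output-channel transform that can be absorbed into the conv.
    """
    scale = gamma / np.sqrt(var + eps)       # per-output-channel scale
    folded_kernel = kernel * scale           # broadcasts over the last (out) axis
    folded_bias = (bias - mean) * scale + beta
    return folded_kernel, folded_bias

# Tiny check with a 1x1 conv, 3 input channels, 2 output channels.
rng = np.random.default_rng(0)
kernel = rng.normal(size=(1, 1, 3, 2))      # (h, w, in, out)
bias = rng.normal(size=2)
gamma, beta = rng.normal(size=2), rng.normal(size=2)
mean, var = rng.normal(size=2), rng.uniform(0.5, 1.5, size=2)

x = rng.normal(size=3)                      # one input "pixel"
conv_out = x @ kernel[0, 0] + bias          # Conv2D output
bn_out = (conv_out - mean) * gamma / np.sqrt(var + 1e-3) + beta

fk, fb = fold_bn_into_conv(kernel, bias, gamma, beta, mean, var)
folded_out = x @ fk[0, 0] + fb              # single fused layer
assert np.allclose(bn_out, folded_out)
```

Fusing the pair before quantization matters because it leaves one set of weights and one output range to quantize, matching what the TensorFlow Lite backend executes.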
| Args | |
|---|---|
| `model` | Keras model to be quantized. |
| `layer_quantize_map` | Map with keys as layer names, and values as dicts containing custom `QuantizeConfig`s which may have been passed with layers. |

| Returns | |
|---|---|
| A 2-tuple of (Keras model transformed to better match the TensorFlow Lite backend, updated layer quantize map). |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-05-26 UTC.