tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme
Default 8 bit Cluster Preserve Quantization Scheme.
Inherits From: Default8BitQuantizeScheme, QuantizeScheme
tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme(
preserve_sparsity=True
)
Used in the notebooks:
- Cluster preserving quantization aware training (CQAT) Keras example: https://www.tensorflow.org/model_optimization/guide/combine/cqat_example
- Sparsity and cluster preserving quantization aware training (PCQAT) Keras example: https://www.tensorflow.org/model_optimization/guide/combine/pcqat_example
Args
preserve_sparsity: Boolean flag that enables prune-and-cluster-preserving QAT (PCQAT) when True; when False, only cluster assignments are preserved (CQAT). Defaults to True.
Methods

get_layout_transformer
View source
get_layout_transformer()
Returns the layout transforms for this scheme.
Returns
The QuantizeLayoutTransform for this quantization scheme.
get_quantize_registry
View source
get_quantize_registry()
Returns the quantization registry for this scheme.
Returns
The QuantizeRegistry for this quantization scheme.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-05-26 UTC.