# tfmot.quantization.keras.collab_opts.Default8BitPrunePreserveQuantizeScheme

[View source on GitHub](https://github.com/tensorflow/model-optimization/blob/v0.7.5/tensorflow_model_optimization/python/core/quantization/keras/collab_opts/prune_preserve/default_8bit_prune_preserve_quantize_scheme.py#L23-L29)
Default 8 bit Prune Preserve Quantization Scheme.
Inherits From: [`Default8BitQuantizeScheme`](../../../../tfmot/quantization/keras/default_8bit/Default8BitQuantizeScheme), [`QuantizeScheme`](../../../../tfmot/quantization/keras/QuantizeScheme)

**Main aliases:** [`tfmot.experimental.combine.Default8BitPrunePreserveQuantizeScheme`](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization/keras/collab_opts/Default8BitPrunePreserveQuantizeScheme)
```python
tfmot.quantization.keras.collab_opts.Default8BitPrunePreserveQuantizeScheme(
    disable_per_axis=False
)
```
### Used in the notebooks

Used in the guide:

- [Pruning preserving quantization aware training (PQAT) Keras example](https://www.tensorflow.org/model_optimization/guide/combine/pqat_example)
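This scheme is the collaborative-optimization counterpart of `Default8BitQuantizeScheme`: passing it to `quantize_apply` keeps the zero weights of a previously pruned model intact during quantization-aware training (PQAT). A minimal sketch of the intended usage, where `pruned_model` is a placeholder for a Keras model that has already been pruned and stripped with `tfmot.sparsity.keras.strip_pruning`:

```python
import tensorflow_model_optimization as tfmot

# `pruned_model`: placeholder for a Keras model that was pruned and then
# stripped with tfmot.sparsity.keras.strip_pruning.
annotated_model = tfmot.quantization.keras.quantize_annotate_model(pruned_model)

# Passing this scheme to quantize_apply makes quantization-aware training
# preserve the sparsity that pruning introduced.
pqat_model = tfmot.quantization.keras.quantize_apply(
    annotated_model,
    tfmot.quantization.keras.collab_opts.Default8BitPrunePreserveQuantizeScheme())

pqat_model.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
# Fine-tune as usual, e.g. pqat_model.fit(train_images, train_labels, epochs=1)
```

See the PQAT guide linked above for the end-to-end workflow.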
## Methods
### `get_layout_transformer`

[View source](https://github.com/tensorflow/model-optimization/blob/v0.7.5/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/default_8bit_quantize_scheme.py#L28-L30)

```python
get_layout_transformer()
```
Returns the layout transforms for this scheme.
| Returns |
| --- |
| The `QuantizeLayoutTransform` for this quantization scheme. |
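This method is normally invoked by `quantize_apply` rather than by user code; the returned transform rewrites the model's layer layout before quantization ops are inserted. A minimal sketch of direct inspection:

```python
import tensorflow_model_optimization as tfmot

scheme = tfmot.quantization.keras.collab_opts.Default8BitPrunePreserveQuantizeScheme()

# The scheme inherits its layout transform from Default8BitQuantizeScheme
# (note the source link above points into default_8bit_quantize_scheme.py).
transform = scheme.get_layout_transformer()
print(type(transform).__name__)
```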
### `get_quantize_registry`

[View source](https://github.com/tensorflow/model-optimization/blob/v0.7.5/tensorflow_model_optimization/python/core/quantization/keras/collab_opts/prune_preserve/default_8bit_prune_preserve_quantize_scheme.py#L27-L29)

```python
get_quantize_registry()
```
Returns the quantization registry for this scheme.
| Returns |
| --- |
| The `QuantizeRegistry` for this quantization scheme. |
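The prune-preserving behavior lives in this registry: the `QuantizeConfig` objects it hands out keep pruned (zero) weights sparse during training. A minimal sketch, assuming the returned registry exposes the `supports`/`get_quantize_config` interface of `tfmot.quantization.keras.QuantizeRegistry`:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

scheme = tfmot.quantization.keras.collab_opts.Default8BitPrunePreserveQuantizeScheme()
registry = scheme.get_quantize_registry()

# Ask the registry how it would quantize a prunable layer.
layer = tf.keras.layers.Dense(10)
if registry.supports(layer):
    config = registry.get_quantize_config(layer)
    print(type(config).__name__)
```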