Post-training quantization
Post-training quantization includes general techniques to reduce CPU and hardware accelerator latency, processing time, power usage, and model size, with little degradation in model accuracy. These techniques can be performed on an already-trained float TensorFlow model and applied during TensorFlow Lite conversion. They are enabled as options in the TensorFlow Lite converter.
To jump right into end-to-end examples, see the following tutorials:
- Post-training dynamic range quantization
- Post-training full integer quantization
- Post-training float16 quantization
Quantizing weights
Weights can be converted to types with reduced precision, such as 16-bit floats or 8-bit integers. We generally recommend 16-bit floats for GPU acceleration and 8-bit integers for CPU execution.
For example, here is how to specify 8-bit integer weight quantization:
import tensorflow as tf

# saved_model_dir points at an already-trained float SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
At inference, the most compute-intensive operations are executed with 8 bits instead of floating point. There is some inference-time performance overhead relative to quantizing both weights and activations, described below.
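To sanity-check the converted model, you can run it with the TensorFlow Lite Interpreter. This is a minimal sketch, not part of the conversion itself; the random sample below is a stand-in for real input data:

import numpy as np
import tensorflow as tf

# Load the quantized flatbuffer produced above.
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed one random input matching the model's expected shape and dtype.
sample = np.random.random_sample(input_details['shape']).astype(input_details['dtype'])
interpreter.set_tensor(input_details['index'], sample)
interpreter.invoke()
result = interpreter.get_tensor(output_details['index'])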
For more information, see the TensorFlow Lite post-training quantization guide.
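For the GPU path recommended above, weights can instead be stored as 16-bit floats by also setting the converter's supported types. A minimal sketch, assuming the same saved_model_dir as before:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Store weights as float16 rather than int8.
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()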
Full integer quantization of weights and activations
You can improve latency, processing time, and power usage, and gain access to integer-only hardware accelerators, by making sure both weights and activations are quantized. This requires a small representative dataset.
import tensorflow as tf

def representative_dataset_gen():
  # num_calibration_steps is a placeholder: yield a few hundred samples
  # that are representative of the model's real inputs.
  for _ in range(num_calibration_steps):
    # Get sample input data as a numpy array in a method of your choosing.
    yield [input_data]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
For convenience, the resulting model still takes float input and produces float output.
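If you need integer input and output tensors as well (for example, for integer-only accelerators), the converter can be constrained further. A minimal sketch continuing from the snippet above; choosing int8 I/O here is an assumption matching a typical full-integer setup:

# Require integer-only ops, and int8 input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()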
For more information, see the TensorFlow Lite post-training quantization guide.