Get started with TensorFlow model optimization
1. Choose the best model for the task
Depending on the task, you will need to make a tradeoff between model complexity and size. If your task requires high accuracy, then you may need a large and complex model. For tasks that can tolerate lower accuracy, a smaller model is the better choice: smaller models not only use less disk space and memory, they are also generally faster and more energy efficient.
2. Pre-optimized models
See if any existing TensorFlow Lite pre-optimized models provide the efficiency your application requires.
3. Post-training tooling
If you cannot use a pre-trained model for your application, try using the TensorFlow Lite post-training quantization tools during TensorFlow Lite conversion, which can optimize your already-trained TensorFlow model.
See the post-training quantization tutorial to learn more.
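As a hedged sketch of the step above: post-training dynamic-range quantization is applied by setting an optimization flag on the TensorFlow Lite converter before conversion. The tiny Keras model here is a stand-in for your already-trained model; its shape and layer sizes are illustrative assumptions, not part of the original guide.

```python
import tensorflow as tf

# Stand-in for an already-trained model (hypothetical architecture).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with post-training quantization enabled.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()  # serialized .tflite flatbuffer (bytes)
```

The resulting bytes can be written to a `.tflite` file and deployed with the TensorFlow Lite interpreter; with `Optimize.DEFAULT`, weights are stored as 8-bit integers, typically shrinking the model to roughly a quarter of its float size.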
Next steps: Training-time tooling
If the simple solutions above don't satisfy your needs, you may need to involve training-time optimization techniques. Optimize further with our training-time tools and dig deeper.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2021-08-16 UTC.