Join the SIG TFX-Addons community and help make TFX even better!

TensorFlow Extended (TFX) is an end-to-end platform for deploying production machine learning pipelines.

When you're ready to move your models from research to production, use TFX to create and manage a production pipeline.

Run in Colab

This interactive tutorial walks through each of TFX's built-in components.


The tutorials show you how to use TFX through complete, end-to-end examples.


The guides explain the concepts and components of TFX.


A TFX pipeline is a sequence of components that implement a machine learning pipeline, purpose-built for scalable, high-performance machine learning tasks. The components are built using TFX libraries, which can also be used individually.



Train and serve a TensorFlow model with TensorFlow Serving

This guide trains a neural network model to classify images of clothing (such as sneakers and shirts), saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving rather than on modeling and training in TensorFlow.
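Once a model is saved, TensorFlow Serving exposes it over a REST API that accepts a JSON body of `instances` at `/v1/models/<name>:predict`. The sketch below builds such a request with only the standard library; the server address and the model name `fashion_model` are illustrative assumptions, and the commented-out call requires a running TensorFlow Serving instance.

```python
import json
from urllib import request

# Assumed address and model name for a locally running TF Serving instance.
SERVER = "http://localhost:8501/v1/models/fashion_model:predict"


def make_predict_request(images):
    """Build the JSON body for TensorFlow Serving's predict REST endpoint."""
    return json.dumps({"signature_name": "serving_default",
                       "instances": images})


# One 28x28 grayscale image of zeros, matching the clothing-classifier input.
body = make_predict_request([[[0.0] * 28] * 28])
req = request.Request(SERVER, data=body.encode("utf-8"),
                      headers={"content-type": "application/json"})
# response = request.urlopen(req)          # needs a live TF Serving server
# predictions = json.loads(response.read())["predictions"]
```

The response, when a server is running, carries a `predictions` list with one entry per instance sent.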

Create a TFX pipeline hosted on Google Cloud

This tutorial shows how to create your own machine learning pipeline on Google Cloud using TensorFlow Extended (TFX) and Cloud AI Platform Pipelines. You will follow a typical machine learning development process: start by examining the dataset and end up with a complete, working pipeline.

Use TFX with TensorFlow Lite for efficient on-device inference

Learn how TensorFlow Extended (TFX) can create and evaluate machine learning models that will be deployed on-device. TFX now provides native support for TFLite, making inference on mobile devices more efficient.


Check out our blog and YouTube playlist for more TFX content,
and subscribe to our monthly TensorFlow newsletter.

May 19, 2021  
Check out the TFX 1.0 stable release

Many partners and developers have contributed to the project, and now TFX 1.0 is here! In addition to support for NLP, mobile and web applications, the new release provides stable public APIs and artifacts for TFX OSS users.

May 19, 2021  
Does your app use ML? Make it a product with TFX

Learn how Google creates ML products using TFX. TFX runs just about anywhere, including in Cloud AI Pipelines. Training your model is just the beginning: with TFX you can go from zero to hero with production ML and make your amazing application ready for the world!

May 18, 2021  
Speed-up your sites with web-page prefetching using ML

Improve website user experience by training a custom machine learning model with site navigation data to predict next pages, and use an Angular app to prefetch the content and improve site speed.

May 6, 2021  
Using TFX inference with Dataflow for large scale ML inference patterns

Learn how you can efficiently deploy a model using the TFX RunInference API with Google Cloud Dataflow. This post walks through common scenarios, from standard inference to post-processing and the use of the RunInference API at multiple points in the pipeline.