The BulkInferrer TFX Pipeline Component
The BulkInferrer TFX component performs batch inference on unlabeled data. The generated InferenceResult (tensorflow_serving.apis.prediction_log_pb2.PredictionLog) contains the original features and the prediction results.
BulkInferrer consumes:

- A trained model in SavedModel format.
- Unlabelled tf.Examples that contain features.
- (Optional) Validation result from the Evaluator component.

BulkInferrer emits:

- InferenceResult
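The emitted InferenceResult artifact points at serialized PredictionLog records. As a minimal sketch of inspecting them, assuming the results are written as TFRecord files of PredictionLog protos under the artifact's URI (the directory path here is hypothetical):

import tensorflow as tf
from tensorflow_serving.apis import prediction_log_pb2

# Hypothetical location; in a real pipeline, resolve the InferenceResult
# artifact's URI from the metadata store or the pipeline outputs.
result_dir = '/path/to/BulkInferrer/inference_result/1'

# Assumption: results are stored as TFRecord files of serialized
# PredictionLog protos (pass compression_type='GZIP' if yours are gzipped).
dataset = tf.data.TFRecordDataset(tf.io.gfile.glob(result_dir + '/*'))

for serialized in dataset.take(3):
    log = prediction_log_pb2.PredictionLog()
    log.ParseFromString(serialized.numpy())
    # For a 'predict' signature, the outputs are TensorProtos keyed by name
    # in predict_log.response.outputs.
    print(log.predict_log.response.outputs)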
Using the BulkInferrer Component
A BulkInferrer TFX component is used to perform batch inference on unlabeled tf.Examples. It is typically deployed after an Evaluator component to perform inference with a validated model, or after a Trainer component to perform inference directly on an exported model.
It currently supports both in-memory model inference and remote inference. Remote inference requires the model to be hosted on Cloud AI Platform; a sketch of the remote variant follows the code example below.
Typical code looks like this:
from tfx.components import BulkInferrer
from tfx.proto import bulk_inferrer_pb2

bulk_inferrer = BulkInferrer(
    # Unlabelled tf.Examples to run inference on.
    examples=examples_gen.outputs['examples'],
    # The trained SavedModel from Trainer.
    model=trainer.outputs['model'],
    # Optional blessing from Evaluator; used to gate inference on validation.
    model_blessing=evaluator.outputs['blessing'],
    data_spec=bulk_inferrer_pb2.DataSpec(),
    model_spec=bulk_inferrer_pb2.ModelSpec()
)
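For remote inference on Cloud AI Platform, TFX ships a Cloud-specific variant of the component. The following is a sketch, assuming the CloudAIBulkInferrerComponent extension and its 'ai_platform_serving_args' custom_config key; the project and model names are hypothetical placeholders:

from tfx.extensions.google_cloud_ai_platform.bulk_inferrer.component import (
    CloudAIBulkInferrerComponent,
)

bulk_inferrer = CloudAIBulkInferrerComponent(
    examples=examples_gen.outputs['examples'],
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    # Assumption: serving arguments are passed under this custom_config key.
    custom_config={
        'ai_platform_serving_args': {
            'project_id': 'my-gcp-project',  # hypothetical
            'model_name': 'my_model',        # hypothetical
        },
    },
)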
More details are available in the BulkInferrer API reference.