The BulkInferrer TFX Pipeline Component
The BulkInferrer TFX component performs batch inference on unlabeled data. The generated InferenceResult (tensorflow_serving.apis.prediction_log_pb2.PredictionLog) contains the original features and the prediction results.
BulkInferrer consumes:
- A trained model in SavedModel format.
- Unlabeled tf.Examples that contain features.
- (Optional) Validation result from the Evaluator component.
BulkInferrer emits:
- InferenceResult
Using the BulkInferrer Component
A BulkInferrer TFX component is used to perform batch inference on unlabeled tf.Examples. It is typically deployed after an Evaluator component to perform inference with a validated model, or after a Trainer component to perform inference directly on an exported model.
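When no Evaluator runs in the pipeline, the optional `model_blessing` input can simply be omitted so that inference runs directly on the Trainer's exported model. A minimal sketch, assuming `example_gen` and `trainer` components are defined earlier in the same pipeline (this is an illustrative configuration, not a complete runnable pipeline):

```python
from tfx.components import BulkInferrer
from tfx.proto import bulk_inferrer_pb2

# example_gen and trainer are assumed to be upstream components
# (e.g. an ExampleGen and a Trainer) defined elsewhere in the pipeline.
bulk_inferrer = BulkInferrer(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],  # no model_blessing: infer directly
    data_spec=bulk_inferrer_pb2.DataSpec(),
    model_spec=bulk_inferrer_pb2.ModelSpec()
)
```

Because `model_blessing` is not passed, BulkInferrer does not gate inference on a validation result.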
It currently supports both in-memory model inference and remote inference. Remote inference requires the model to be hosted on Cloud AI Platform.
Typical code looks like this:
from tfx.components import BulkInferrer
from tfx.proto import bulk_inferrer_pb2

bulk_inferrer = BulkInferrer(
    examples=examples_gen.outputs['examples'],
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    data_spec=bulk_inferrer_pb2.DataSpec(),
    model_spec=bulk_inferrer_pb2.ModelSpec()
)
More details are available in the
BulkInferrer API reference.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-09-06 UTC.