tfx.v1.extensions.google_cloud_ai_platform.BulkInferrer

A Cloud AI component that performs batch inference against a remotely hosted model.

Inherits From: BaseComponent, BaseNode

The BulkInferrer component pushes a model to Google Cloud AI Platform, consumes example data, sends requests to the remotely hosted model, and writes the inference results to an external location as PredictionLog protos. After inference, it deletes the model from Google Cloud AI Platform.

Args

examples: A Channel of type standard_artifacts.Examples, usually produced by an ExampleGen component. Required.
model: A Channel of type standard_artifacts.Model, usually produced by a Trainer component.
model_blessing: A Channel of type standard_artifacts.ModelBlessing, usually produced by a ModelValidator component.
data_spec: A bulk_inferrer_pb2.DataSpec instance that describes data selection.
output_example_spec: A bulk_inferrer_pb2.OutputExampleSpec instance; set this if you want BulkInferrer to output examples instead of inference results.
custom_config: A dict containing the deployment job parameters to be passed to Google Cloud AI Platform. custom_config['ai_platform_serving_args'] must contain the serving job parameters. For the full set of parameters, see https://cloud.google.com/ml-engine/reference/rest/v1/projects.models
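As an illustration, a minimal custom_config payload might look like the following sketch. The key name comes from this page; the concrete model name, project ID, and region values are placeholder assumptions, not defaults:

```python
# A sketch of the custom_config dict BulkInferrer expects. The serving
# parameters live under the 'ai_platform_serving_args' key; the values
# below (model name, project, region) are placeholder assumptions.
ai_platform_serving_args = {
    'model_name': 'my_model',        # assumed model name on AI Platform
    'project_id': 'my-gcp-project',  # assumed GCP project
    'regions': ['us-central1'],      # assumed serving region
}

custom_config = {'ai_platform_serving_args': ai_platform_serving_args}

# The component would then be wired into a pipeline roughly as follows
# (upstream component names are assumptions):
#
#   bulk_inferrer = BulkInferrer(
#       examples=example_gen.outputs['examples'],
#       model=trainer.outputs['model'],
#       model_blessing=model_validator.outputs['blessing'],
#       custom_config=custom_config,
#   )

print(sorted(custom_config['ai_platform_serving_args']))
```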

Raises

ValueError: raised when inference_result or output_examples is specified inconsistently with output_example_spec (output examples are produced only when output_example_spec is set).

Attributes

outputs: The component's output channel dict.

Methods

with_node_execution_options

Class Variables

POST_EXECUTABLE_SPEC None
PRE_EXECUTABLE_SPEC None