Creates a Beam ModelHandler based on the inference spec type.
```python
tfx_bsl.public.beam.run_inference.CreateModelHandler(
    inference_spec_type: tfx_bsl.public.proto.model_spec_pb2.InferenceSpecType
) -> tfx_bsl.public.beam.run_inference.ModelHandler
```
There are two model handlers:
- In-process inference from a SavedModel instance. Used when the `saved_model_spec` field is set in `inference_spec_type` (see the construction sketch after this list).
- Remote inference by using a service endpoint. Used when the `ai_platform_prediction_model_spec` field is set in `inference_spec_type`.
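For reference, a minimal sketch of building each kind of `InferenceSpecType`. The model path and the AI Platform project, model, and version names are hypothetical placeholders, and the proto field names are assumed from `tfx_bsl`'s `model_spec.proto`:

```python
from tfx_bsl.public.proto import model_spec_pb2

# In-process inference: point saved_model_spec at an exported SavedModel.
# "/tmp/my_saved_model" is a hypothetical path used for illustration.
local_spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path="/tmp/my_saved_model"))

# Remote inference: identify a deployed AI Platform Prediction model.
# The project/model/version names below are hypothetical placeholders.
remote_spec = model_spec_pb2.InferenceSpecType(
    ai_platform_prediction_model_spec=(
        model_spec_pb2.AIPlatformPredictionModelSpec(
            project_id="my-gcp-project",
            model_name="my_model",
            version_name="v1")))
```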
Example Usage:

```python
from apache_beam.ml.inference import base

tf_handler = CreateModelHandler(inference_spec_type)

# Unkeyed inputs: pass the handler directly.
base.RunInference(tf_handler)

# Keyed inputs: wrap the handler in a KeyedModelHandler.
base.RunInference(base.KeyedModelHandler(tf_handler))
```
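To show where the handler sits in a full pipeline, here is a hedged end-to-end sketch. The toy `tf.train.Example`, its feature name `"x"`, and `local_spec` (from the sketch above) are assumptions for illustration; the handler is assumed here to consume `tf.train.Example` protos matching the model's serving signature.

```python
import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference import base
from tfx_bsl.public.beam.run_inference import CreateModelHandler

# A toy input; the feature name "x" is a hypothetical placeholder and
# must match the serving signature of the actual model.
example = tf.train.Example(features=tf.train.Features(feature={
    "x": tf.train.Feature(float_list=tf.train.FloatList(value=[1.0]))}))

tf_handler = CreateModelHandler(local_spec)  # local_spec from the sketch above

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | "CreateExamples" >> beam.Create([example])
        | "RunInference" >> base.RunInference(tf_handler)
        | "PrintPredictions" >> beam.Map(print))
```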
Args:

`inference_spec_type`: Model inference endpoint.
Returns:

A Beam RunInference `ModelHandler` for TensorFlow.