tfx_bsl.public.beam.run_inference.ModelHandler

Defines how to load an ML model and apply it to batches of examples.
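A minimal sketch of a custom handler is shown below, under the assumption that this class exposes the same abstract methods as apache_beam.ml.inference.base.ModelHandler and can be passed to the Beam RunInference transform. SquareModel and SquareModelHandler are hypothetical names used only for illustration.

# A minimal sketch, assuming the interface matches
# apache_beam.ml.inference.base.ModelHandler.
from typing import Any, Dict, Iterable, Optional, Sequence

import apache_beam as beam
from apache_beam.ml.inference.base import ModelHandler, RunInference


class SquareModel:
  """A stand-in 'model' whose predict() squares each input value."""

  def predict(self, values: Sequence[int]) -> Sequence[int]:
    return [v * v for v in values]


class SquareModelHandler(ModelHandler[int, int, SquareModel]):
  """Hypothetical handler that loads a SquareModel and applies it to ints."""

  def load_model(self) -> SquareModel:
    # Called to load and initialize the model before processing begins.
    return SquareModel()

  def run_inference(
      self,
      batch: Sequence[int],
      model: SquareModel,
      inference_args: Optional[Dict[str, Any]] = None,
  ) -> Iterable[int]:
    # Runs inference on a batch of examples; one prediction per example.
    return model.predict(batch)


with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([1, 2, 3, 4])
      | RunInference(SquareModelHandler())
      | beam.Map(print))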

Methods

batch_elements_kwargs

Returns: kwargs suitable for beam.BatchElements.

get_metrics_namespace

Returns: A namespace for metrics collected by the RunInference transform.

get_num_bytes

Returns: The number of bytes of data for a batch.

get_resource_hints

Returns: Resource hints for the transform.
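The batching, metrics, and byte-counting hooks above can be overridden as sketched below. This again assumes the interface matches apache_beam.ml.inference.base.ModelHandler; the class name, batch-size values, and metrics namespace are illustrative assumptions, not defaults of the library.

# Illustrative overrides of the tuning hooks; names and values are made up.
from typing import Any, Dict, Iterable, Mapping, Optional, Sequence

from apache_beam.ml.inference.base import ModelHandler


class TunedHandler(ModelHandler[bytes, bytes, object]):

  def load_model(self) -> object:
    return object()  # Placeholder model for the sketch.

  def run_inference(
      self,
      batch: Sequence[bytes],
      model: object,
      inference_args: Optional[Dict[str, Any]] = None,
  ) -> Iterable[bytes]:
    return batch  # Identity "inference" for illustration.

  def batch_elements_kwargs(self) -> Mapping[str, Any]:
    # Forwarded to beam.BatchElements to control how examples are batched.
    return {'min_batch_size': 8, 'max_batch_size': 64}

  def get_metrics_namespace(self) -> str:
    # Namespace under which RunInference metrics are reported.
    return 'my_model_inference'

  def get_num_bytes(self, batch: Sequence[bytes]) -> int:
    # Reported as the byte count for a batch; here, the payload sizes.
    return sum(len(element) for element in batch)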

load_model

Loads and initializes a model for processing.

run_inference

Runs inference on a batch of examples.

Args
batch: A sequence of examples or features.
model: The model used to make inferences.
inference_args: Extra arguments for models whose inference call requires extra parameters.

Returns
An Iterable of Predictions.

validate_inference_args

Validates inference_args passed in the inference call.

Most frameworks do not need extra arguments in their predict() call, so the default behavior is to raise an error if inference_args are present.
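As a sketch of how extra arguments flow through, assuming the interface matches apache_beam.ml.inference.base.ModelHandler: a handler can override validate_inference_args to accept specific keys and forward them from run_inference to the underlying model. ThresholdModel, ThresholdHandler, and the 'threshold' argument are hypothetical.

# Hypothetical handler that accepts an extra 'threshold' inference argument.
from typing import Any, Dict, Iterable, Optional, Sequence

import apache_beam as beam
from apache_beam.ml.inference.base import ModelHandler, RunInference


class ThresholdModel:

  def predict(self, values: Sequence[float], threshold: float) -> Sequence[int]:
    return [int(v >= threshold) for v in values]


class ThresholdHandler(ModelHandler[float, int, ThresholdModel]):

  def load_model(self) -> ThresholdModel:
    return ThresholdModel()

  def validate_inference_args(
      self, inference_args: Optional[Dict[str, Any]]) -> None:
    # Override the default, which errors out when inference_args are present.
    extra = set(inference_args or {}) - {'threshold'}
    if extra:
      raise ValueError(f'Unexpected inference_args: {extra}')

  def run_inference(
      self,
      batch: Sequence[float],
      model: ThresholdModel,
      inference_args: Optional[Dict[str, Any]] = None,
  ) -> Iterable[int]:
    # Forward the validated extra arguments to the model's predict() call.
    args = inference_args or {'threshold': 0.5}
    return model.predict(batch, **args)


with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([0.1, 0.6, 0.9])
      | RunInference(ThresholdHandler(), inference_args={'threshold': 0.5})
      | beam.Map(print))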