Loads and applies an ML model to batches of examples.



Returns: kwargs suitable for beam.BatchElements.
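As an illustrative sketch (not Beam's default implementation), such a method might return batch-size bounds; `min_batch_size` and `max_batch_size` are real `beam.BatchElements` parameters, but the values here are arbitrary:

```python
def batch_elements_kwargs():
    """Return keyword arguments for beam.BatchElements.

    The bounds below are placeholder values chosen for illustration.
    """
    return {"min_batch_size": 1, "max_batch_size": 64}
```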


Returns: A namespace for metrics collected by the RunInference transform.


Returns: The number of bytes of data for a batch.
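One rough way a handler could estimate this, sketched here as a free function rather than Beam's actual implementation, is to sum the pickled size of each element:

```python
import pickle

def get_num_bytes(batch):
    """Estimate the size of a batch in bytes by summing the pickled
    size of each element (a heuristic sketch, not Beam's exact logic)."""
    return sum(len(pickle.dumps(element)) for element in batch)
```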


Gets all postprocessing functions to be run after inference, listed in the order in which they should be applied.


Gets all preprocessing functions to be run before batching/inference, listed in the order in which they should be applied.


Returns: Resource hints for the transform.


Loads and initializes a model for processing.


Runs inferences on a batch of examples.

batch: A sequence of examples or features.
model: The model used to make inferences.
inference_args: Extra arguments for models whose inference call requires extra parameters.

Returns: An Iterable of Predictions.
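A minimal sketch of the load/infer contract, using a plain Python class rather than a real Beam `ModelHandler` subclass; the "model" here is just a squaring function, so the class name and its trivial logic are illustrative assumptions:

```python
from typing import Iterable, Optional, Sequence

class SquaringModelHandler:
    """Toy stand-in mirroring the load_model/run_inference interface."""

    def load_model(self):
        # Load and initialize the model; here, a plain callable.
        return lambda x: x * x

    def run_inference(self, batch: Sequence[int], model,
                      inference_args: Optional[dict] = None) -> Iterable[int]:
        # Apply the model to each example in the batch.
        return [model(example) for example in batch]

handler = SquaringModelHandler()
model = handler.load_model()
predictions = list(handler.run_inference([1, 2, 3], model))  # [1, 4, 9]
```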


Sets environment variables using a dictionary provided via kwargs. Keys are the environment variable names, and values are the environment variable values. Child ModelHandler classes should set _env_vars via kwargs in __init__, or else call super().__init__().
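The behavior described above amounts to writing each key/value pair into the process environment; a standalone sketch (the variable name `MY_MODEL_DIR` is made up for illustration):

```python
import os

def set_environment_vars(env_vars):
    """Set process environment variables from a {name: value} dict."""
    for name, value in env_vars.items():
        os.environ[name] = value

set_environment_vars({"MY_MODEL_DIR": "/tmp/models"})
```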


Returns a boolean indicating whether a model should be shared across multiple processes instead of being loaded in each process. This is primarily useful for large models that can't fit multiple copies in memory. Multi-process support may vary by runner; where it is unavailable, this falls back to loading the model per process.


Updates the model paths produced by side inputs.


Validates inference_args passed in the inference call.

Because most frameworks do not need extra arguments in their predict() call, the default behavior is to error out if inference_args are present.
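That default behavior can be sketched as a validation function that rejects any extra arguments; this is an illustrative standalone version, not Beam's exact code:

```python
def validate_inference_args(inference_args):
    """Reject extra arguments, since most frameworks' predict()
    calls do not accept them (default-behavior sketch)."""
    if inference_args:
        raise ValueError(
            "inference_args were provided, but this model handler does "
            "not accept extra arguments to its inference call.")

validate_inference_args(None)  # accepted
validate_inference_args({})    # accepted
```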


Returns a new ModelHandler with a postprocessing function associated with it. The postprocessing function will be run after inference and should map the base ModelHandler's output type to your desired output type. If you apply multiple postprocessing functions, they will be run on your original inference result in order from first applied to last applied.
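The first-to-last ordering can be demonstrated with a hypothetical minimal class (not Beam's `ModelHandler`) that accumulates postprocessing functions:

```python
class PostChainHandler:
    """Sketch of postprocessing chaining: functions run on the
    inference result in the order they were attached."""

    def __init__(self, fns=()):
        self._postprocess_fns = list(fns)

    def with_postprocess_fn(self, fn):
        # Return a new handler with fn appended to the chain.
        return PostChainHandler(self._postprocess_fns + [fn])

    def apply_postprocess(self, result):
        for fn in self._postprocess_fns:  # first attached runs first
            result = fn(result)
        return result

h = PostChainHandler().with_postprocess_fn(lambda x: x + 1).with_postprocess_fn(str)
h.apply_postprocess(41)  # 41 -> 42 -> "42"
```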


Returns a new ModelHandler with a preprocessing function associated with it. The preprocessing function will be run before batching/inference and should map your input PCollection to the base ModelHandler's input type. If you apply multiple preprocessing functions, they will be run on your original PCollection in order from last applied to first applied.
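Because each call wraps the previous handler, the most recently applied preprocessing function sees the raw input first; a hypothetical minimal class (not Beam's `ModelHandler`) makes the last-to-first ordering concrete:

```python
class PreChainHandler:
    """Sketch of preprocessing chaining: the last-attached
    function runs first on the original input."""

    def __init__(self, fns=()):
        self._preprocess_fns = list(fns)

    def with_preprocess_fn(self, fn):
        # Prepend: the most recently attached fn runs first.
        return PreChainHandler([fn] + self._preprocess_fns)

    def apply_preprocess(self, element):
        for fn in self._preprocess_fns:
            element = fn(element)
        return element

h = PreChainHandler().with_preprocess_fn(lambda x: x * 2).with_preprocess_fn(lambda x: x + 1)
h.apply_preprocess(3)  # (3 + 1) first, then * 2 -> 8
```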