tfx_bsl.public.beam.run_inference.RunInferenceOnKeyedBatches

Run inference over pre-batched keyed inputs.

This API is experimental and may change in the future.

Supports the same inference specs as RunInference. Inputs must consist of keyed lists of examples, and outputs consist of keyed lists of prediction logs whose entries correspond to the input examples by index.

Args

examples: A PCollection of keyed, batched inputs of type Example, SequenceExample, or bytes. Each type supports inference specs corresponding to the unbatched cases described in RunInference. Supported input types:

  • PCollection[Tuple[K, List[Example]]]
  • PCollection[Tuple[K, List[SequenceExample]]]
  • PCollection[Tuple[K, List[bytes]]]
inference_spec_type: Model inference endpoint.

Returns

A PCollection of Tuple[K, List[PredictionLog]].
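The keyed-batch contract above can be sketched in plain Python. This is a stand-in for the Beam transform, not the real API: `run_keyed_batch` and `fake_predict` are hypothetical names used only to illustrate how each keyed list of serialized examples maps to a keyed list of predictions aligned by index.

```python
from typing import Callable, Dict, List, Tuple

def run_keyed_batch(
    keyed_batch: Tuple[str, List[bytes]],
    predict: Callable[[bytes], Dict],
) -> Tuple[str, List[Dict]]:
    """Maps one keyed batch of serialized examples to keyed predictions.

    The key is preserved, and the i-th prediction corresponds to the
    i-th input example, mirroring the Tuple[K, List[PredictionLog]]
    output shape of RunInferenceOnKeyedBatches.
    """
    key, examples = keyed_batch
    return key, [predict(ex) for ex in examples]

if __name__ == "__main__":
    # Hypothetical model stub standing in for the inference endpoint
    # configured by inference_spec_type.
    fake_predict = lambda ex: {"input_len": len(ex)}
    key, logs = run_keyed_batch(("user-42", [b"ex0", b"ex1"]), fake_predict)
    print(key, logs)
```

In the real pipeline, the element `("user-42", [b"ex0", b"ex1"])` would be an element of a `PCollection[Tuple[K, List[bytes]]]`, and the transform would produce the corresponding `Tuple[K, List[PredictionLog]]` element.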