Run inference with a model.
```python
tfx_bsl.public.beam.RunInference(
    examples: beam.pvalue.PCollection,
    inference_spec_type: tfx_bsl.public.proto.model_spec_pb2.InferenceSpecType
) -> beam.pvalue.PCollection
```
There are two types of inference you can perform using this PTransform:

- In-process inference from a SavedModel instance. Used when the
  `saved_model_spec` field is set in `inference_spec_type`.
- Remote inference by using a service endpoint. Used when the
  `ai_platform_prediction_model_spec` field is set in `inference_spec_type`.
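As a sketch of the two modes, an `InferenceSpecType` can be built with either field set (the model path, project, model, and version names below are placeholders, not values from this doc):

```python
from tfx_bsl.public.proto import model_spec_pb2

# In-process inference: point saved_model_spec at an exported SavedModel.
local_spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/tmp/exported_model'))

# Remote inference: point at a deployed AI Platform Prediction model.
remote_spec = model_spec_pb2.InferenceSpecType(
    ai_platform_prediction_model_spec=(
        model_spec_pb2.AIPlatformPredictionModelSpec(
            project_id='my-project',
            model_name='my-model',
            version_name='v1')))
```

Exactly one of the two spec fields should be set; `RunInference` dispatches on which one is present.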
Args:

- `examples`: A PCollection containing examples of one of the following
  kinds, each with its corresponding return type:
    - `PCollection[Example]` -> `PCollection[PredictionLog]`: works with
      Classify, Regress, MultiInference, Predict, and RemotePredict.
    - `PCollection[SequenceExample]` -> `PCollection[PredictionLog]`: works
      with Predict and (serialized) RemotePredict.
    - `PCollection[bytes]` -> `PCollection[PredictionLog]`: for serialized
      `Example`, works with Classify, Regress, MultiInference, Predict, and
      RemotePredict; for everything else, works with Predict and
      RemotePredict.
    - `PCollection[Tuple[K, Example]]` -> `PCollection[Tuple[K, PredictionLog]]`:
      works with Classify, Regress, MultiInference, Predict, and
      RemotePredict.
    - `PCollection[Tuple[K, SequenceExample]]` -> `PCollection[Tuple[K, PredictionLog]]`:
      works with Predict and (serialized) RemotePredict.
    - `PCollection[Tuple[K, bytes]]` -> `PCollection[Tuple[K, PredictionLog]]`:
      for serialized `Example`, works with Classify, Regress, MultiInference,
      Predict, and RemotePredict; for everything else, works with Predict and
      RemotePredict.
- `inference_spec_type`: Model inference endpoint.
Returns:

A PCollection (possibly keyed) containing prediction logs.
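A minimal end-to-end usage sketch, assuming a SavedModel exported to a placeholder path and serialized `Example` records on disk (the file paths here are hypothetical):

```python
import apache_beam as beam
from tensorflow_serving.apis import prediction_log_pb2
from tfx_bsl.public.beam import RunInference
from tfx_bsl.public.proto import model_spec_pb2

# In-process inference against a local SavedModel (placeholder path).
spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/tmp/exported_model'))

with beam.Pipeline() as p:
    _ = (
        p
        # Serialized tf.Example records, i.e. the PCollection[bytes] case.
        | 'ReadExamples' >> beam.io.ReadFromTFRecord('/tmp/examples.tfrecord')
        | 'RunInference' >> RunInference(spec)
        # Each output element is a PredictionLog proto.
        | 'WriteLogs' >> beam.io.WriteToTFRecord(
            '/tmp/prediction_logs',
            coder=beam.coders.ProtoCoder(prediction_log_pb2.PredictionLog)))
```

Because `RunInference` is a PTransform, it composes with the rest of the pipeline via the `|` operator; the keyed variants (`Tuple[K, ...]`) work the same way and preserve the key in the output.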