Deploy a model to Google Cloud AI Platform serving.
__init__(
    context: Optional[
        tfx.dsl.components.base.base_executor.BaseExecutor.Context] = None
)

CheckBlessing(
    input_dict: Dict[Text, List[types.Artifact]]
) -> bool
Checks that the model is blessed by upstream validators.
Args:
  input_dict: Input dict from input key to a list of artifacts:
    - model_blessing: A ModelBlessing artifact from model validator or
      evaluator. Pusher looks for a custom property `blessed` in the
      artifact to check whether it is safe to push.
    - infra_blessing: An InfraBlessing artifact from infra validator.
      Pusher looks for a custom property `blessed` in the artifact to
      determine whether the model is mechanically servable from the model
      server to which Pusher is going to push.

Returns:
  True if the model is blessed by the validator.
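The blessing check described above can be sketched as follows. The `Artifact` class and `check_blessing` helper below are illustrative stand-ins, not the actual TFX implementation; they only assume the documented behavior, namely that a custom property `blessed` marks an artifact as safe to push.

```python
from typing import Dict, List


class Artifact:
    """Stand-in for types.Artifact with custom properties (illustrative)."""

    def __init__(self, custom_properties: Dict[str, int]):
        self._custom_properties = custom_properties

    def get_int_custom_property(self, key: str) -> int:
        # A missing property defaults to 0 ("not blessed").
        return self._custom_properties.get(key, 0)


def check_blessing(input_dict: Dict[str, List[Artifact]]) -> bool:
    """Returns True only if every provided blessing artifact is blessed.

    A missing input key (e.g. no infra validator in the pipeline) is
    treated as "no objection" and does not block the push.
    """
    for key in ("model_blessing", "infra_blessing"):
        for artifact in input_dict.get(key, []):
            if artifact.get_int_custom_property("blessed") != 1:
                return False
    return True
```

A pipeline without an infra validator simply omits the `infra_blessing` key, so only the `model_blessing` artifacts gate the push.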
Do(
    input_dict: Dict[Text, List[types.Artifact]],
    output_dict: Dict[Text, List[types.Artifact]],
    exec_properties: Dict[Text, Any]
)

Overrides Do() of the tfx_pusher_executor.
Args:
  input_dict: Input dict from input key to a list of artifacts, including:
    - model_export: exported model from trainer.
    - model_blessing: model blessing path from evaluator.
  output_dict: Output dict from key to a list of artifacts, including:
    - model_push: A list of 'ModelPushPath' artifacts of size one. It will
      include the model in this push execution if the model was pushed.
  exec_properties: Mostly a passthrough input dict for
    tfx.components.Pusher.executor. The following keys in custom_config
    are consumed by this class:
    - ai_platform_serving_args: Arguments for the push to Google Cloud AI
      Platform. For the full set of supported parameters, refer to the
      Cloud AI Platform models documentation.
    - endpoint: Optional endpoint override, in the format
      https://[region]-ml.googleapis.com. Defaults to the global endpoint
      if not set; Cloud AI Platform recommends using a regional endpoint.
      When set, the 'regions' key in ai_platform_serving_args cannot be
      set. For more details, see the Cloud AI Platform documentation on
      regional endpoints.
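As a concrete illustration, a custom_config using a regional endpoint might look like the fragment below. The model name, project ID, and region are hypothetical placeholders; only the key layout (`ai_platform_serving_args` plus an optional top-level `endpoint`) follows the description above.

```python
# Illustrative custom_config for exec_properties; values are hypothetical.
custom_config = {
    "ai_platform_serving_args": {
        "model_name": "my_model",
        "project_id": "my-gcp-project",
        # 'regions' must be omitted here because 'endpoint' is set below.
    },
    "endpoint": "https://us-central1-ml.googleapis.com",
}
```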
Raises:
  ValueError: If ai_platform_serving_args is not in
    exec_properties.custom_config.
  ValueError: If the serving model path does not start with gs://.
  ValueError: If 'endpoint' and 'regions' are set simultaneously.
  RuntimeError: If the Google Cloud AI Platform deployment job failed.
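The error conditions above can be sketched as a precondition check. This is an assumption-laden sketch mirroring the documented rules, not the executor's actual code; the helper name and message strings are invented for illustration.

```python
from typing import Any, Dict


def validate_push_preconditions(
        custom_config: Dict[str, Any], serving_model_path: str) -> None:
    """Raises ValueError if the documented push preconditions are violated."""
    serving_args = custom_config.get("ai_platform_serving_args")
    if serving_args is None:
        # ai_platform_serving_args is required in custom_config.
        raise ValueError(
            "'ai_platform_serving_args' not found in custom_config.")
    if not serving_model_path.startswith("gs://"):
        # The model must live on Cloud Storage to be pushed.
        raise ValueError(
            "Serving model path must start with gs://, got: "
            + serving_model_path)
    if "endpoint" in custom_config and "regions" in serving_args:
        # A regional endpoint already implies a region.
        raise ValueError(
            "'endpoint' and 'regions' cannot be set simultaneously.")
```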