A TFX component to validate the model against the serving infrastructure.
Inherits From: BaseComponent, BaseNode
tfx.components.InfraValidator(
    model: tfx.types.Channel,
    serving_spec: infra_validator_pb2.ServingSpec,
    examples: Optional[tfx.types.Channel] = None,
    blessing: Optional[tfx.types.Channel] = None,
    request_spec: Optional[infra_validator_pb2.RequestSpec] = None,
    validation_spec: Optional[infra_validator_pb2.ValidationSpec] = None,
    instance_name: Optional[Text] = None
)
Infra validation is done by loading the model into exactly the same serving binary that is used in production, and additionally sending some requests to the model server. Such requests can be built from the Examples artifact.
Examples
Full example using a TensorFlow Serving binary running on local Docker.
infra_validator = InfraValidator(
    model=trainer.outputs['model'],
    examples=test_example_gen.outputs['examples'],
    serving_spec=ServingSpec(
        tensorflow_serving=TensorFlowServing(  # Using TF Serving.
            tags=['latest']
        ),
        local_docker=LocalDockerConfig(),  # Running on local docker.
    ),
    validation_spec=ValidationSpec(
        max_loading_time_seconds=60,
        num_tries=5,
    ),
    request_spec=RequestSpec(
        tensorflow_serving=TensorFlowServingRequestSpec(),
        num_examples=1,
    )
)
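The example above (and the Kubernetes example below) refers to the proto messages by their short names. A minimal import sketch, assuming these messages live in tfx.proto.infra_validator_pb2 as in recent TFX releases:

from tfx.components import InfraValidator
from tfx.proto import infra_validator_pb2

# Short aliases matching the names used in the examples; KubernetesConfig is
# used by the Kubernetes example further below.
ServingSpec = infra_validator_pb2.ServingSpec
TensorFlowServing = infra_validator_pb2.TensorFlowServing
LocalDockerConfig = infra_validator_pb2.LocalDockerConfig
KubernetesConfig = infra_validator_pb2.KubernetesConfig
ValidationSpec = infra_validator_pb2.ValidationSpec
RequestSpec = infra_validator_pb2.RequestSpec
TensorFlowServingRequestSpec = infra_validator_pb2.TensorFlowServingRequestSpec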
Minimal example when running on Kubernetes.
infra_validator = InfraValidator(
    model=trainer.outputs['model'],
    examples=test_example_gen.outputs['examples'],
    serving_spec=ServingSpec(
        tensorflow_serving=TensorFlowServing(
            tags=['latest']
        ),
        kubernetes=KubernetesConfig(),  # Running on Kubernetes.
    ),
)
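Downstream, the blessing output is typically wired into a Pusher so that only infra-validated models are pushed. A minimal sketch, assuming the standard Pusher component with its infra_blessing argument and a filesystem push destination of your choosing:

from tfx.components import Pusher
from tfx.proto import pusher_pb2

# Push only when InfraValidator has blessed the model; the base_directory
# below is an illustrative placeholder.
pusher = Pusher(
    model=trainer.outputs['model'],
    infra_blessing=infra_validator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory='/serving_models/my_model')))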
Args | |
---|---|
model | A Channel of ModelExportPath type, usually produced by the Trainer component. required |
serving_spec | A ServingSpec configuration about the serving binary and the test platform config used to launch the model server for validation. required |
examples | A Channel of ExamplesPath type, usually produced by the ExampleGen component. If not specified, InfraValidator does not issue requests for validation. |
blessing | Output Channel of InfraBlessingPath that contains the validation result. |
request_spec | Optional RequestSpec configuration about making requests from the examples input. If not specified, InfraValidator does not issue requests for validation. |
validation_spec | Optional ValidationSpec configuration. |
instance_name | Optional name assigned to this specific instance of InfraValidator. Required only if multiple InfraValidator components are declared in the same pipeline (see the sketch following this table). |
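For example, if a pipeline validates the same model on two platforms, the two InfraValidator declarations need distinct instance names so their node ids do not collide. A minimal sketch, where docker_serving_spec and kubernetes_serving_spec are assumed to be ServingSpec protos defined as in the examples above:

# Two InfraValidator instances in one pipeline: distinct instance_name values
# keep their node ids unique.
infra_validator_docker = InfraValidator(
    model=trainer.outputs['model'],
    serving_spec=docker_serving_spec,      # ServingSpec with local_docker
    instance_name='on_local_docker')

infra_validator_k8s = InfraValidator(
    model=trainer.outputs['model'],
    serving_spec=kubernetes_serving_spec,  # ServingSpec with kubernetes
    instance_name='on_kubernetes')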
Attributes | |
---|---|
component_id | |
component_type | |
downstream_nodes | |
exec_properties | |
id | Node id, unique across all TFX nodes in a pipeline. |
inputs | |
outputs | |
type | |
upstream_nodes | |
Child Classes
Methods
add_downstream_node
add_downstream_node(
    downstream_node
)
Experimental: Add another component that must run after this one.
This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.
Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines and pipelines with conditional nodes. We also recommend relying on data for capturing dependencies where possible to ensure data lineage is fully captured within MLMD.
It is symmetric with add_upstream_node.
Args | |
---|---|
downstream_node | a component that must run after this node. |
add_upstream_node
add_upstream_node(
    upstream_node
)
Experimental: Add another component that must run before this one.
This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.
Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines and pipelines with conditional nodes. We also recommend relying on data for capturing dependencies where possible to ensure data lineage is fully captured within MLMD.
It is symmetric with add_downstream_node.
Args | |
---|---|
upstream_node | a component that must run before this node. |
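As a concrete illustration of these two symmetric calls, the sketch below forces infra_validator to wait for an assumed example_validator component even though no artifact flows between them:

# Task-based (non-data) dependency: infra_validator runs only after
# example_validator has completed.
infra_validator.add_upstream_node(example_validator)

# The symmetric call on the other node would declare the same ordering:
# example_validator.add_downstream_node(infra_validator)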
from_json_dict
@classmethod
from_json_dict(dict_data: Dict[Text, Any]) -> Any
Convert from dictionary data to an object.
get_id
@classmethod
get_id(instance_name: Optional[Text] = None)
Gets the id of a node.
This can be used during pipeline authoring time. For example:

from tfx.components import Trainer

resolver = ResolverNode(
    ...,
    model=Channel(
        type=Model,
        producer_component_id=Trainer.get_id('my_trainer')))
Args | |
---|---|
instance_name | (Optional) instance name of a node. If given, the instance name will be taken into consideration when generating the id. |
Returns | |
---|---|
an id for the node. |
to_json_dict
to_json_dict() -> Dict[Text, Any]
Convert from an object to a JSON serializable dictionary.
with_id
with_id(
    id: Text
) -> "BaseNode"
with_platform_config
with_platform_config(
    config: message.Message
) -> "BaseComponent"
Attaches a proto-form platform config to a component.
The config will be a per-node platform-specific config.
Args | |
---|---|
config | platform config to attach to the component. |
Returns | |
---|---|
the same component itself. |
Class Variables | |
---|---|
EXECUTOR_SPEC | tfx.dsl.components.base.executor_spec.ExecutorClassSpec |