tfx.components.Evaluator

A TFX component to evaluate models trained by a TFX Trainer component.

Inherits From: BaseComponent, BaseNode

See the Evaluator guide for more information on what this component's required inputs are, how to configure it, and what outputs it produces.
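
A typical declaration is sketched below. The upstream handles example_gen, trainer, and model_resolver are assumptions standing in for whatever components produce the examples, the candidate model, and the baseline model in a given pipeline; the metric and threshold choices are purely illustrative.

  import tensorflow_model_analysis as tfma
  from tfx.components import Evaluator

  # Illustrative evaluation config: compute BinaryAccuracy over the whole
  # dataset and bless the candidate only if it clears the chosen thresholds.
  eval_config = tfma.EvalConfig(
      model_specs=[tfma.ModelSpec(label_key='label')],
      slicing_specs=[tfma.SlicingSpec()],  # an empty spec means the overall slice
      metrics_specs=[
          tfma.MetricsSpec(metrics=[
              tfma.MetricConfig(
                  class_name='BinaryAccuracy',
                  threshold=tfma.MetricThreshold(
                      value_threshold=tfma.GenericValueThreshold(
                          lower_bound={'value': 0.6}),
                      change_threshold=tfma.GenericChangeThreshold(
                          direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                          absolute={'value': -1e-10})))
          ])
      ])

  evaluator = Evaluator(
      examples=example_gen.outputs['examples'],
      model=trainer.outputs['model'],
      baseline_model=model_resolver.outputs['model'],
      eval_config=eval_config)

Downstream components such as Pusher typically consume evaluator.outputs['blessing'] to gate deployment on the evaluation result.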

Args
examples A Channel of type standard_artifacts.Examples, usually produced by an ExampleGen component (required).
model A Channel of type standard_artifacts.Model, usually produced by a Trainer component.
baseline_model An optional Channel of type standard_artifacts.Model, used as the baseline model for model diff and model validation purposes.
feature_slicing_spec Deprecated, please use eval_config instead. Only supports estimator. An evaluator_pb2.FeatureSlicingSpec instance that describes how Evaluator should slice the data. If any field is provided as a RuntimeParameter, feature_slicing_spec should be constructed as a dict with the same field names as the FeatureSlicingSpec proto message.
fairness_indicator_thresholds Optional list of float (or RuntimeParameter) threshold values for use with TFMA fairness indicators. Experimental functionality: this interface and functionality may change at any time. See the TFMA fairness indicators documentation for additional details.
example_splits Names of splits on which the metrics are computed. Default behavior (when example_splits is set to None or empty) is to use the 'eval' split.
output Channel of ModelEvaluation to store the evaluation results.
model_exports Backwards compatibility alias for the model argument.
instance_name Optional name assigned to this specific instance of Evaluator. Required only if multiple Evaluator components are declared in the same pipeline. Either model_exports or model must be present in the input arguments.
eval_config Instance of tfma.EvalConfig containing configuration settings for running the evaluation. This config has options for both estimator and Keras.
blessing Output channel of ModelBlessing that contains the blessing result.
schema A Schema channel to use for TFXIO.
module_file A path to a python module file containing UDFs for Evaluator customization. The module_file can implement the following functions at its top level (see the sketch after this list):

  def custom_eval_shared_model(
      eval_saved_model_path, model_name, eval_config, **kwargs,
  ) -> tfma.EvalSharedModel:

  def custom_extractors(
      eval_shared_model, eval_config, tensor_adapter_config,
  ) -> List[tfma.extractors.Extractor]:
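
A minimal sketch of such a module file is shown below, assuming that delegating to TFMA's default loader and extractor stack is sufficient; a real module would typically customize or extend these defaults.

  # evaluator_module.py -- illustrative Evaluator customization module.
  from typing import List

  import tensorflow_model_analysis as tfma


  def custom_eval_shared_model(
      eval_saved_model_path, model_name, eval_config, **kwargs
  ) -> tfma.EvalSharedModel:
    # Load the model under evaluation; here we simply delegate to TFMA's
    # default loader (an assumption, not required by the contract).
    return tfma.default_eval_shared_model(
        eval_saved_model_path=eval_saved_model_path,
        model_name=model_name,
        eval_config=eval_config,
        **kwargs)


  def custom_extractors(
      eval_shared_model, eval_config, tensor_adapter_config
  ) -> List[tfma.extractors.Extractor]:
    # Return the extractor stack to run; here the TFMA defaults, to which a
    # real module could append project-specific extractors.
    return tfma.default_extractors(
        eval_shared_model=eval_shared_model,
        eval_config=eval_config,
        tensor_adapter_config=tensor_adapter_config)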

Attributes

component_id DEPRECATED FUNCTION

component_type DEPRECATED FUNCTION
downstream_nodes

exec_properties

id Node id, unique across all TFX nodes in a pipeline.

If id is set by the user, it is returned directly. Otherwise, if an instance name (deprecated) is available, the node id is the node's class name followed by the instance name; otherwise, the node id is just the node's class name.

inputs

outputs

type

upstream_nodes

Child Classes

class DRIVER_CLASS

class SPEC_CLASS

Methods

add_downstream_node

Experimental: Add another component that must run after this one.

This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.

Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines, or pipelines with conditional nodes. We also recommend relying on data dependencies where possible, so that data lineage is fully captured within MLMD.

It is symmetric with add_upstream_node.

Args
downstream_node a component that must run after this node.
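
For example, a pipeline author could force a Pusher to wait for this Evaluator even when no artifact flows between them; evaluator and pusher below are assumed to be components already declared in the same pipeline.

  # Task-based dependency: pusher will not start until evaluator has finished,
  # even though pusher consumes none of evaluator's output artifacts here.
  evaluator.add_downstream_node(pusher)

The same ordering can equivalently be declared from the other side with pusher.add_upstream_node(evaluator).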

add_upstream_node

Experimental: Add another component that must run before this one.

This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.

Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines, or pipelines with conditional nodes. We also recommend relying on data dependencies where possible, so that data lineage is fully captured within MLMD.

It is symmetric with add_downstream_node.

Args
upstream_node a component that must run before this node.

from_json_dict

Convert from dictionary data to an object.

get_id

Gets the id of a node. (deprecated)

This can be used during pipeline authoring time. For example:

  from tfx.components import Trainer

  resolver = ResolverNode(
      ...,
      model=Channel(
          type=Model,
          producer_component_id=Trainer.get_id('my_trainer')))

Args
instance_name (Optional) instance name of a node. If given, the instance name will be taken into consideration when generating the id.

Returns
an id for the node.

to_json_dict

Convert from an object to a JSON serializable dictionary.

with_id

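with_id assigns an explicit node id to the component and returns the component itself; it is the recommended replacement for instance_name when several Evaluator instances live in the same pipeline. The id string and upstream handles below are illustrative assumptions.

  # Give this instance a distinct node id so it does not collide with another
  # Evaluator declared in the same pipeline.
  holdout_evaluator = Evaluator(
      examples=example_gen.outputs['examples'],
      model=trainer.outputs['model'],
      eval_config=eval_config).with_id('Evaluator.holdout')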

with_platform_config

Attaches a proto-form platform config to a component.

The config will be a per-node platform-specific config.

Args
config platform config to attach to the component.

Returns
the same component itself.
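
A sketch of attaching a per-node config; my_platform_pb2.NodeResources is a hypothetical proto message standing in for whatever config proto the target orchestration platform defines.

  # Hypothetical platform-specific proto describing resources for this node only.
  node_config = my_platform_pb2.NodeResources(cpu=4, memory_gb=16)

  # Returns the same component, so the call can follow construction directly.
  evaluator.with_platform_config(node_config)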

Class Variables

EXECUTOR_SPEC tfx.dsl.components.base.executor_spec.ExecutorClassSpec