Creates an Evaluator for evaluating metrics and plots.
tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config: tfma.EvalConfig,
    eval_shared_model: Optional[tfma.types.EvalSharedModel] = None,
    metrics_key: str = constants.METRICS_KEY,
    plots_key: str = constants.PLOTS_KEY,
    attributions_key: str = constants.ATTRIBUTIONS_KEY,
    run_after: str = slice_key_extractor.SLICE_KEY_EXTRACTOR_STAGE_NAME,
    schema: Optional[schema_pb2.Schema] = None,
    random_seed_for_testing: Optional[int] = None,
    tensor_adapter_config: Optional[tensor_adapter.TensorAdapterConfig] = None
) -> tfma.evaluators.Evaluator
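For orientation, a minimal sketch of constructing this evaluator explicitly and passing it to tfma.run_model_analysis. The model path, data location, and the single ExampleCount metric are illustrative placeholders, not part of this reference; tfma.default_evaluators would normally create an equivalent evaluator for you.

import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(signature_name='serving_default')],
    metrics_specs=tfma.metrics.specs_from_metrics(
        [tfma.metrics.ExampleCount()]),  # placeholder metric
    slicing_specs=[tfma.SlicingSpec()])

eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',  # placeholder path
    eval_config=eval_config)

# Explicitly constructed evaluator (equivalent to the default one).
evaluator = tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config=eval_config,
    eval_shared_model=eval_shared_model)

eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    eval_config=eval_config,
    data_location='/path/to/examples*.tfrecord',  # placeholder path
    evaluators=[evaluator])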
Args
  eval_config: Eval config.
  eval_shared_model: Optional shared model (single-model evaluation) or list
    of shared models (multi-model evaluation). Only required if there are
    metrics to be computed in-graph using the model.
  metrics_key: Name to use for metrics key in Evaluation output.
  plots_key: Name to use for plots key in Evaluation output.
  attributions_key: Name to use for attributions key in Evaluation output.
  run_after: Extractor to run after (None means before any extractors).
  schema: A schema to use for customizing metrics and plots.
  random_seed_for_testing: Seed to use for unit testing.
  tensor_adapter_config: Tensor adapter config which specifies how to obtain
    tensors from the Arrow RecordBatch. The model's signature will be invoked
    with those tensors (matched by names). If None, an attempt will be made to
    create an adapter based on the model's input signature; otherwise the model
    will be invoked with raw examples (assuming a signature of a single 1-D
    string tensor).
Returns
  Evaluator for evaluating metrics and plots. The output will be stored under
  'metrics' and 'plots' keys.
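Where the model expects named tensors rather than serialized examples, a tensor_adapter_config can be derived from a tfx_bsl TFXIO. A hedged sketch, assuming the eval_config and eval_shared_model from the sketch above and placeholder file paths:

from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx_bsl.tfxio import tf_example_record
import tensorflow_model_analysis as tfma

# Placeholder schema; in practice this usually comes from TFDV or the pipeline.
with open('/path/to/schema.pbtxt') as f:  # placeholder path
  schema = text_format.Parse(f.read(), schema_pb2.Schema())

tfxio = tf_example_record.TFExampleRecord(
    file_pattern='/path/to/examples*.tfrecord',  # placeholder path
    schema=schema,
    raw_record_column_name=tfma.ARROW_INPUT_COLUMN)

evaluator = tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config=eval_config,              # as constructed in the sketch above
    eval_shared_model=eval_shared_model,  # as constructed in the sketch above
    schema=schema,
    tensor_adapter_config=tfxio.TensorAdapterConfig())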