Creates an Evaluator for evaluating metrics and plots.
```python
tfma.evaluators.MetricsAndPlotsEvaluator(
    eval_shared_model: tfma.types.EvalSharedModel,
    desired_batch_size: Optional[int] = None,
    metrics_key: str = constants.METRICS_KEY,
    plots_key: str = constants.PLOTS_KEY,
    run_after: str = slice_key_extractor.SLICE_KEY_EXTRACTOR_STAGE_NAME,
    compute_confidence_intervals: Optional[bool] = False,
    min_slice_size: int = 1,
    serialize=False,
    random_seed_for_testing: Optional[int] = None
) -> tfma.evaluators.Evaluator
```
| Args | |
|---|---|
| `eval_shared_model` | Shared model parameters for EvalSavedModel. |
| `desired_batch_size` | Optional batch size for batching in Aggregate. |
| `metrics_key` | Name to use for the metrics key in the Evaluation output. |
| `plots_key` | Name to use for the plots key in the Evaluation output. |
| `run_after` | Extractor to run after (None means run before any extractors). |
| `compute_confidence_intervals` | Whether or not to compute confidence intervals. |
| `min_slice_size` | If the number of examples in a slice is less than `min_slice_size`, an error is returned for that slice instead of its metrics. This helps preserve privacy by not displaying aggregated data for slices with too few examples. |
| `serialize` | If true, also serialize the metrics to protos as part of the evaluation. |
| `random_seed_for_testing` | Provide for deterministic tests only. |
| Returns |
|---|
| Evaluator for evaluating metrics and plots. The output will be stored under the `'metrics'` and `'plots'` keys. |
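To make the `min_slice_size` behavior concrete, here is a plain-Python sketch of the privacy guard it describes. The helper name `filter_small_slices` and the data shapes are hypothetical for illustration; TFMA's actual implementation runs inside a Beam pipeline and differs in detail:

```python
def filter_small_slices(slice_counts, min_slice_size=1):
    """Split slices into metrics-eligible ones and ones suppressed for privacy.

    slice_counts: mapping of slice key -> number of examples in that slice.
    Slices with fewer than min_slice_size examples get an error message
    instead of having their aggregated metrics displayed.
    """
    eligible, suppressed = {}, {}
    for slice_key, count in slice_counts.items():
        if count < min_slice_size:
            # Too few examples: report an error rather than aggregated data.
            suppressed[slice_key] = (
                'Example count %d is below min_slice_size %d'
                % (count, min_slice_size))
        else:
            eligible[slice_key] = count
    return eligible, suppressed


counts = {('country', 'US'): 1000, ('country', 'MC'): 3}
eligible, suppressed = filter_small_slices(counts, min_slice_size=50)
# The US slice is kept; the 3-example MC slice is suppressed.
```

With the default `min_slice_size=1`, no non-empty slice is ever suppressed; raising it trades slice coverage for stronger privacy on small subgroups.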