tfma.metrics.specs_from_metrics

Returns a list of tfma.MetricsSpec for the given tf.keras.metrics/losses or tfma.metrics classes.

Examples:

```python
metrics_specs = specs_from_metrics([
    tf.keras.metrics.BinaryAccuracy(),
    tf.keras.metrics.AUC(),
    tf.keras.metrics.Precision(),
    tf.keras.metrics.Recall(),
    tfma.metrics.MeanLabel(),
    tfma.metrics.MeanPrediction(),
    ...
])
```

```python
metrics_specs = specs_from_metrics({
    'output1': [
        tf.keras.metrics.BinaryAccuracy(),
        tf.keras.metrics.AUC(),
        tfma.metrics.MeanLabel(),
        tfma.metrics.MeanPrediction(),
        ...
    ],
    'output2': [
        tf.keras.metrics.Precision(),
        tf.keras.metrics.Recall(),
    ]
})
```
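The returned specs are typically attached to a tfma.EvalConfig. A minimal sketch, assuming a single-output binary classifier whose label column is named 'label' (the label key and slicing choice are placeholders):

```python
import tensorflow as tf
import tensorflow_model_analysis as tfma

# Build metric specs for a single-output binary classifier.
metrics_specs = tfma.metrics.specs_from_metrics([
    tf.keras.metrics.BinaryAccuracy(),
    tf.keras.metrics.AUC(),
    tfma.metrics.MeanLabel(),
])

# Attach the specs to an EvalConfig; 'label' is a placeholder label key.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=metrics_specs,
    slicing_specs=[tfma.SlicingSpec()],  # overall (unsliced) metrics
)
```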

Args:

metrics: List of tf.keras.metrics.Metric, tf.keras.losses.Loss, or tfma.metrics.Metric. For multi-output models, a dict of metric lists keyed by output_name may be passed (as in the second example above).
model_names: Optional model names (for multi-model evaluation).
output_names: Optional output names (for multi-output models). Should not be set if metrics is passed as a dict.
binarize: Optional settings for binarizing multi-class/multi-label metrics (see the sketch after this list).
aggregate: Optional settings for aggregating multi-class/multi-label metrics (see the sketch after this list).
query_key: Optional query key for query/ranking based metrics.
include_example_count: True to add an example_count metric. Default is True.
include_weighted_example_count: True to add a weighted_example_count metric. Default is True. A weighted example count is added per output for multi-output models.
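For the binarize and aggregate settings above, a minimal sketch of a multi-class setup; the class ids and class weights are illustrative, and the exact option fields may vary slightly across TFMA versions:

```python
import tensorflow as tf
import tensorflow_model_analysis as tfma

# Report per-class (binarized) metrics for classes 0-2 plus a macro average.
metrics_specs = tfma.metrics.specs_from_metrics(
    [tf.keras.metrics.AUC(), tfma.metrics.MeanLabel()],
    binarize=tfma.BinarizationOptions(class_ids={'values': [0, 1, 2]}),
    aggregate=tfma.AggregationOptions(
        macro_average=True,
        class_weights={0: 1.0, 1: 1.0, 2: 1.0},  # equal weights, illustrative
    ),
)
```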