Multi-class confusion matrix metrics at thresholds.
Inherits From: Metric
tfma.metrics.MultiClassConfusionMatrixAtThresholds(
    thresholds: Optional[List[float]] = None,
    name: Text = MULTI_CLASS_CONFUSION_MATRIX_AT_THRESHOLDS_NAME
)
Computes weighted example counts for all combinations of actual / (top) predicted classes.
The inputs are assumed to contain a single positive label per example (i.e. only one class can be true at a time), while the predictions are assumed to sum to 1.0.
Args | |
---|---|
`thresholds` | Optional thresholds; defaults to 0.5 if not specified. If the top prediction is less than a threshold, the associated example is assumed to have no prediction (its `predicted_class_id` is set to `NO_PREDICTED_CLASS_ID`).
`name` | Metric name.
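For orientation, here is a minimal usage sketch (not part of the reference itself): constructing the metric and packaging it into metrics specs for an evaluation. The threshold value is illustrative.

```python
import tensorflow_model_analysis as tfma

# Construct the metric; an example whose top predicted probability falls
# below the threshold is counted as having no prediction (0.5 is the
# default and is shown here only for illustration).
metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds(thresholds=[0.5])

# Convert the metric instance into MetricsSpec protos and attach them to
# an EvalConfig.
metrics_specs = tfma.metrics.specs_from_metrics([metric])
eval_config = tfma.EvalConfig(metrics_specs=metrics_specs)
```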
Attributes | |
---|---|
`compute_confidence_interval` | Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
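A hedged sketch of how confidence intervals are typically requested: the setting lives on the `EvalConfig` options rather than on the metric itself, and the proto field name below is an assumption.

```python
import tensorflow_model_analysis as tfma

# Assumed field name: Options.compute_confidence_intervals (a BoolValue).
options = tfma.Options()
options.compute_confidence_intervals.value = True

# Only the jackknife confidence interval method respects the per-metric
# compute_confidence_interval attribute described above.
eval_config = tfma.EvalConfig(options=options)
```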
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[Text]] = None,
    output_names: Optional[List[Text]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    query_key: Optional[Text] = None,
    is_diff: Optional[bool] = False
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
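As a hedged sketch, the method can be called directly to inspect the derived computations, although TFMA normally invokes it internally during evaluation:

```python
import tensorflow_model_analysis as tfma

# A minimal sketch: derive the computations for a default-configured metric.
metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds()
for computation in metric.computations():
    # Each computation carries the metric keys it produces (assuming the
    # MetricComputation tuple exposes a `keys` field, as in current TFMA).
    print(computation.keys)
```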
get_config
get_config() -> tfma.types.Extracts
Returns a serializable config.
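A minimal sketch, assuming the returned config follows the usual Keras-style convention of mapping directly onto the constructor arguments:

```python
import tensorflow_model_analysis as tfma

metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds(thresholds=[0.25])

# Round-trip: the serializable config can re-create an equivalent metric
# (assumes get_config returns the constructor kwargs).
config = metric.get_config()
restored = tfma.metrics.MultiClassConfusionMatrixAtThresholds(**config)
```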