Multi-class confusion matrix metrics at thresholds.
Inherits From: Metric
tfma.metrics.MultiClassConfusionMatrixAtThresholds(
thresholds: Optional[List[float]] = None,
name: str = MULTI_CLASS_CONFUSION_MATRIX_AT_THRESHOLDS_NAME
)
Computes weighted example counts for all combinations of actual / (top) predicted classes.
The inputs are assumed to contain a single positive label per example (i.e. only one class can be true at a time) while the predictions are assumed to sum to 1.0.
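A minimal usage sketch, assuming TensorFlow Model Analysis is installed and imported as tfma: the metric is instantiated with decision thresholds and converted into metric specs that can be placed in an EvalConfig.

metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds(thresholds=[0.5])

# specs_from_metrics turns metric instances into MetricsSpec protos for an
# EvalConfig; the metric then counts actual vs. (top) predicted class pairs
# at each configured threshold during evaluation.
metrics_specs = tfma.metrics.specs_from_metrics([metric])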
Methods
computations
computations(
eval_config: Optional[tfma.EvalConfig] = None,
schema: Optional[schema_pb2.Schema] = None,
model_names: Optional[List[str]] = None,
output_names: Optional[List[str]] = None,
sub_keys: Optional[List[Optional[SubKey]]] = None,
aggregation_type: Optional[AggregationType] = None,
class_weights: Optional[Dict[int, float]] = None,
example_weighted: bool = False,
query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
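A minimal sketch of inspecting the computations the metric produces; in normal use TFMA calls this method internally while evaluating an EvalConfig, so calling it directly is only useful for introspection. All arguments shown here are optional.

metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds(thresholds=[0.5])

# Returns the MetricComputations (keys plus combiner logic) that TFMA wires
# into its evaluation pipeline.
computations = metric.computations(example_weighted=True)
for computation in computations:
    print(computation.keys)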
from_config
@classmethod
from_config(config: Dict[str, Any]) -> 'Metric'
Creates a metric instance from its config.
get_config
get_config() -> Dict[str, Any]
Returns serializable config.
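A minimal sketch of the get_config / from_config round trip, assuming the metric can be reconstructed from its serialized keyword arguments.

metric = tfma.metrics.MultiClassConfusionMatrixAtThresholds(thresholds=[0.25, 0.5])

# get_config returns a serializable dict (e.g. the thresholds and name),
# and from_config rebuilds an equivalent metric from that dict.
config = metric.get_config()
restored = tfma.metrics.MultiClassConfusionMatrixAtThresholds.from_config(config)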