tfma.metrics.SetMatchPrecision
Computes precision for sets of labels and predictions.
Inherits From: Precision, Metric
tfma.metrics.SetMatchPrecision(
    thresholds: Optional[Union[float, List[float]]] = None,
    top_k: Optional[int] = None,
    name: Optional[str] = None,
    prediction_class_key: str = 'classes',
    prediction_score_key: str = 'scores',
    class_key: Optional[str] = None,
    weight_key: Optional[str] = None,
    **kwargs
)
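A minimal usage sketch, assuming the metric is wired into an evaluation via tfma.metrics.specs_from_metrics; the metric name 'set_match_precision' is just an illustrative label, and the surrounding EvalConfig setup depends on your pipeline:

import tensorflow_model_analysis as tfma

# With neither thresholds nor top_k set, precision is computed at the
# default threshold of 0.5.
set_precision = tfma.metrics.SetMatchPrecision(name='set_match_precision')

# Wrap the metric into metrics specs and an EvalConfig (sketch only; the
# rest of the evaluation setup is omitted).
metrics_specs = tfma.metrics.specs_from_metrics([set_precision])
eval_config = tfma.EvalConfig(metrics_specs=metrics_specs)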
The metric operates on labels and predictions that are provided as sets (stored as variable-length numpy arrays). The precision is the micro-averaged classification precision. The metric is suitable when the number of classes is large or the full list of classes cannot be provided in advance.
Example:
Label: ['cats'],
Predictions: {'classes': ['cats', 'dogs']}
The precision is 0.5.
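To make the micro-averaging concrete, here is a small illustrative sketch (not the library implementation) that pools true positives and false positives across all examples before dividing:

import numpy as np

# Illustrative only: one label set and one prediction set per example,
# matching the example above.
labels = [np.array(['cats'])]
predictions = [np.array(['cats', 'dogs'])]

# Pool counts over all examples (micro-averaging), then divide once.
tp = sum(len(np.intersect1d(l, p)) for l, p in zip(labels, predictions))
fp = sum(len(np.setdiff1d(p, l)) for l, p in zip(labels, predictions))
precision = tp / (tp + fp)  # 1 / (1 + 1) = 0.5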
Args

thresholds
    (Optional) A float value or a Python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. If neither thresholds nor top_k is set, the default is to calculate precision with thresholds=0.5.

top_k
    (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are truncated and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. When top_k is used, the default threshold is float('-inf'); in this case, unmatched labels are still counted as false negatives, since their predictions have a confidence score of float('-inf'). See the configuration sketch after this list.

name
    (Optional) String name of the metric instance.

prediction_class_key
    The key name of the classes in the prediction.

prediction_score_key
    The key name of the scores in the prediction.

class_key
    (Optional) The key name of the classes in class-weight pairs. If it is not provided, the classes are assumed to be the label classes.

weight_key
    (Optional) The key name of the weights of classes in class-weight pairs. The value under this key should be a numpy array of the same length as the classes in class_key. The key should be stored under the features key.

**kwargs
    (Optional) Additional args to pass along to init (and eventually on to _metric_computations and _metric_values). The args are passed to the precision metric, the confusion matrix metric, and the binary classification metric.
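A hedged configuration sketch for the top_k and class-weight options described above; the feature key names 'class_names' and 'class_weights' are hypothetical placeholders for whatever keys your extracts actually contain:

import tensorflow_model_analysis as tfma

# Precision restricted to the top 3 scored classes per example.
top3_precision = tfma.metrics.SetMatchPrecision(
    top_k=3, name='set_match_precision@3')

# Class-weighted variant: class_key/weight_key point at feature keys
# holding parallel arrays of classes and their weights.
weighted_precision = tfma.metrics.SetMatchPrecision(
    class_key='class_names',
    weight_key='class_weights',
    name='weighted_set_match_precision')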
Attributes

compute_confidence_interval
    Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
Methods
computations
View source
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates computations associated with metric.
from_config
View source
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
View source
get_config() -> Dict[str, Any]
Returns serializable config.
result
View source
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
Function for computing metric value from TP, TN, FP, FN values.
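Since this class inherits from Precision, result reduces to the standard precision formula; a minimal sketch of that computation (the zero-division handling below is an assumption of this sketch, not necessarily the library's behavior):

def precision_from_counts(tp: float, tn: float, fp: float, fn: float) -> float:
    # tn and fn do not enter the precision formula; they are shown only to
    # mirror the result() signature.
    denominator = tp + fp
    # Returning 0.0 on an empty denominator is an assumption for this sketch.
    return tp / denominator if denominator > 0 else 0.0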