# tfma.metrics.ObjectDetectionPrecision

[View source on GitHub](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/object_detection_confusion_matrix_metrics.py#L341-L482)
Computes the precision of the predictions with respect to the labels.
Inherits From: `Precision`, `Metric`
```python
tfma.metrics.ObjectDetectionPrecision(
    thresholds: Optional[Union[float, List[float]]] = None,
    name: Optional[str] = None,
    iou_threshold: Optional[float] = None,
    class_id: Optional[int] = None,
    class_weight: Optional[float] = None,
    area_range: Optional[Tuple[float, float]] = None,
    max_num_detections: Optional[int] = None,
    labels_to_stack: Optional[List[str]] = None,
    predictions_to_stack: Optional[List[str]] = None,
    num_detections_key: Optional[str] = None,
    allow_missing_key: bool = False
)
```
The metric uses true positives and false positives to compute precision, dividing the true positives by the sum of true positives and false positives.

If `sample_weight` is `None`, weights default to 1. Use a `sample_weight` of 0 to mask values.
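Below is a minimal usage sketch. The column names (`'xmin'`, `'ymin'`, `'xmax'`, `'ymax'`, `'class_id'`, `'scores'`) are illustrative assumptions; substitute your model's actual label keys and output names.

```python
import tensorflow_model_analysis as tfma

# A precision metric at IoU 0.5 for a single class, with boxes assembled
# from separate per-coordinate columns (column names are assumptions).
metric = tfma.metrics.ObjectDetectionPrecision(
    thresholds=0.5,        # confidence threshold on detection scores
    iou_threshold=0.5,     # IoU required for a detection/ground-truth match
    class_id=0,            # compute precision for this class only
    max_num_detections=100,
    labels_to_stack=['xmin', 'ymin', 'xmax', 'ymax', 'class_id'],
    predictions_to_stack=['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores'],
)

# One way to wire the metric into an evaluation:
metrics_specs = tfma.metrics.specs_from_metrics([metric])
```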
| Args | |
|---|---|
| `thresholds` | (Optional) A float value or a Python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `true`, below is `false`). One metric value is generated for each threshold value. The default is to calculate precision with `thresholds=0.5`. |
| `name` | (Optional) String name of the metric instance. |
| `iou_threshold` | (Optional) The IoU threshold above which a detection and ground-truth pair is considered a match. Defaults to 0.5. |
| `class_id` | (Optional) The class id for calculating metrics. |
| `class_weight` | (Optional) The weight associated with the object class id. |
| `area_range` | (Optional) A tuple (inclusive) representing the area range for objects to be considered for metrics. Defaults to (0, inf). |
| `max_num_detections` | (Optional) The maximum number of detections for a single image. Defaults to None. |
| `labels_to_stack` | (Optional) Keys for columns to be stacked into a single numpy array as the labels. The keys are searched for under labels, features, and transformed features. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id], e.g. `['xmin', 'ymin', 'xmax', 'ymax', 'class_id']`. |
| `predictions_to_stack` | (Optional) Output names for columns to be stacked into a single numpy array as the predictions. These should be the model's output names. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id, confidence score], e.g. `['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores']`. |
| `num_detections_key` | (Optional) An output name under which to find the number of detections to use for evaluating a given example. It has no effect if `predictions_to_stack` is not set. The value for this output should be a scalar or a single-value tensor. The stacked predictions will be truncated to the specified number of detections. |
| `allow_missing_key` | (Optional) If true, the preprocessor returns an empty array instead of raising an error when a key is missing. |
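For intuition, the stacked arrays the metric operates on look like the following. This is an illustrative sketch of the documented format, not output captured from the library:

```python
import numpy as np

# Each label row is [xmin, ymin, xmax, ymax, class_id].
labels = np.array([
    [10.0, 20.0, 110.0, 220.0, 1.0],         # one ground-truth box, class 1
])

# Each prediction row appends a confidence score:
# [xmin, ymin, xmax, ymax, class_id, score].
predictions = np.array([
    [12.0, 18.0, 108.0, 215.0, 1.0, 0.9],    # overlaps the ground truth
    [300.0, 300.0, 350.0, 360.0, 2.0, 0.4],  # spurious detection, class 2
])
```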
| Attributes | |
|---|---|
| `compute_confidence_interval` | Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method. |
## Methods
### `computations`

[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/metric_types.py#L862-L888)
```python
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
```
Creates computations associated with metric.
### `from_config`

[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/metric_types.py#L842-L847)
```python
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
```
### `get_config`

[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/confusion_matrix_metrics.py#L252-L262)
```python
get_config() -> Dict[str, Any]
```
Returns serializable config.
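`get_config` pairs with `from_config` for serialization. A minimal round-trip sketch, assuming the config captures the constructor arguments:

```python
import tensorflow_model_analysis as tfma

metric = tfma.metrics.ObjectDetectionPrecision(iou_threshold=0.5, class_id=0)
config = metric.get_config()  # serializable dict of constructor kwargs
restored = tfma.metrics.ObjectDetectionPrecision.from_config(config)
```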
### `result`

[View source](https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/metrics/confusion_matrix_metrics.py#L1277-L1279)
```python
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
```
Computes the metric value from TP, TN, FP, and FN values.
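Precision depends only on `tp` and `fp`; `tn` and `fn` are accepted for the shared confusion-matrix interface. A small illustration with made-up counts:

```python
import tensorflow_model_analysis as tfma

metric = tfma.metrics.ObjectDetectionPrecision()
# precision = tp / (tp + fp) = 8 / (8 + 2) = 0.8
# (tn and fn do not enter the formula).
print(metric.result(tp=8.0, tn=0.0, fp=2.0, fn=1.0))  # expected: 0.8
```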