tfma.metrics.COCOAveragePrecision

Average precision for object detection.

Inherits From: Metric

It computes the average precision of object detections for a single class and a single iou_threshold.

Args

num_thresholds (Optional) Number of thresholds to use for calculating the confusion matrices and finding the precision at a given recall.
iou_threshold (Optional) Threshold for a detection and ground-truth pair with a specific IoU to be considered a match.
class_id (Optional) The class id for calculating metrics.
class_weight (Optional) The weight associated with the object class id.
area_range (Optional) The area-range for objects to be considered for metrics.
max_num_detections (Optional) The maximum number of detections for a single image.
recalls (Optional) Recalls at which precisions will be calculated.
num_recalls (Optional) Used for object detection: the number of recalls for calculating average precision; it generates equally spaced points between 0 and 1. (Only one of recalls and num_recalls should be used.)
name (Optional) string name of the metric instance.
labels_to_stack (Optional) Keys for columns to be stacked as a single numpy array as the labels. The keys are searched for under labels, features, and transformed features. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id], e.g. ['xmin', 'ymin', 'xmax', 'ymax', 'class_id'].
predictions_to_stack (Optional) Output names for columns to be stacked as a single numpy array as the predictions. These should be the model's output names. The desired format is [left boundary, top boundary, right boundary, bottom boundary, class id, confidence score], e.g. ['xmin', 'ymin', 'xmax', 'ymax', 'class_id', 'scores'].
num_detections_key (Optional) An output name in which to find the number of detections to use for evaluation for a given example. It does nothing if predictions_to_stack is not set. The value for this output should be a scalar value or a single-value tensor. The stacked predictions will be truncated to the specified number of detections.
allow_missing_key (Optional) If true, the preprocessor will return an empty array instead of raising an error when a key is missing.
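To make the recalls / num_recalls parameters concrete, here is a conceptual sketch in plain NumPy of what "precision at equally spaced recalls" means. This is illustrative only, not the TFMA implementation; the function name and signature are invented for this example:

```python
import numpy as np

def average_precision(matched, num_gt, num_recalls=101):
    """Sketch of COCO-style AP: mean precision over equally spaced recalls.

    matched: 1/0 flags (sorted by descending confidence) marking whether
        each detection matched a ground truth at the chosen IoU threshold.
    num_gt: number of ground-truth boxes for the class.
    """
    matched = np.asarray(matched, dtype=float)
    tp = np.cumsum(matched)
    fp = np.cumsum(1.0 - matched)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # COCO-style envelope: make precision monotonically non-increasing.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # Sample precision at num_recalls equally spaced recall points in [0, 1],
    # mirroring what the num_recalls parameter describes.
    points = np.linspace(0.0, 1.0, num_recalls)
    idx = np.searchsorted(recall, points, side="left")
    valid = idx < len(precision)
    sampled = np.where(valid, precision[np.minimum(idx, len(precision) - 1)], 0.0)
    return float(sampled.mean())
```

Passing explicit recall points instead would correspond to the recalls parameter, which is why only one of the two should be set.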

Attributes

compute_confidence_interval Whether to compute confidence intervals for this metric.

Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.

Methods

computations


Creates computations associated with the metric.

from_config


get_config


Returns a serializable config.
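The get_config / from_config pair follows the usual serializable-metric contract: get_config returns the constructor arguments as a plain dict, and from_config rebuilds an equivalent instance from that dict. A minimal sketch of the pattern (DemoMetric is a stand-in class invented for this example, not the TFMA class):

```python
class DemoMetric:
    """Stand-in illustrating the get_config / from_config round trip."""

    def __init__(self, iou_threshold=0.5, class_id=None, name=None):
        self.iou_threshold = iou_threshold
        self.class_id = class_id
        self.name = name

    def get_config(self):
        # Returns a JSON-serializable dict of the constructor arguments.
        return {
            "iou_threshold": self.iou_threshold,
            "class_id": self.class_id,
            "name": self.name,
        }

    @classmethod
    def from_config(cls, config):
        # Rebuilds an equivalent instance from the dict.
        return cls(**config)
```

Because the config holds only constructor arguments, from_config(get_config()) yields an instance equivalent to the original, which is what makes the metric serializable in an evaluation pipeline.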