tfma.metrics.FalsePositives

Calculates the number of false positives.

Inherits From: Metric

If sample_weight is given, calculates the sum of the weights of false positives.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
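
For intuition, the following sketch shows how weighted false positives are counted at a single threshold. It is a minimal illustration only; the helper name and the use of NumPy are not part of the TFMA API.

    import numpy as np

    # Illustrative helper (hypothetical, not a TFMA function): count weighted
    # false positives at one threshold.
    def weighted_false_positives(labels, predictions, threshold=0.5, sample_weight=None):
        labels = np.asarray(labels)
        predictions = np.asarray(predictions)
        if sample_weight is None:
            # Weights default to 1 when sample_weight is not given.
            sample_weight = np.ones_like(predictions, dtype=float)
        # A false positive is a negative label whose prediction exceeds the threshold.
        fp_mask = (labels == 0) & (predictions > threshold)
        return float(np.sum(np.asarray(sample_weight)[fp_mask]))

    # Two predictions exceed the threshold on negative labels, but one of them
    # is masked out by a sample_weight of 0, so the result is 1.0.
    print(weighted_false_positives(
        labels=[0, 0, 1, 0],
        predictions=[0.9, 0.7, 0.8, 0.2],
        sample_weight=[1.0, 0.0, 1.0, 1.0]))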

Args

thresholds: (Optional) Defaults to [0.5]. A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value (see the configuration sketch after this list).
name: (Optional) Metric name.
top_k: (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are set to -inf and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured. When top_k is set, the default thresholds are [float('-inf')].
class_id: (Optional) Used with a multi-class model to specify which class to compute the confusion matrix for. When class_id is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured.
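
As a configuration sketch, the snippet below requests FalsePositives at several thresholds via tfma.metrics.specs_from_metrics; the surrounding EvalConfig fields (label key, slicing) are illustrative assumptions rather than a complete evaluation setup.

    import tensorflow_model_analysis as tfma

    # One FalsePositives value will be reported per threshold.
    metrics_specs = tfma.metrics.specs_from_metrics([
        tfma.metrics.FalsePositives(thresholds=[0.25, 0.5, 0.75]),
    ])

    eval_config = tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key='label')],  # assumed label key
        metrics_specs=metrics_specs,
        slicing_specs=[tfma.SlicingSpec()],  # overall (unsliced) metrics
    )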

Attributes

compute_confidence_interval: Whether to compute confidence intervals for this metric. Note that disabling this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.

Methods

computations

Creates computations associated with metric.
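
A brief usage sketch, assuming the optional arguments of computations can be left at their defaults; the returned computations are what the TFMA evaluation pipeline executes, and end users rarely call this method directly.

    import tensorflow_model_analysis as tfma

    metric = tfma.metrics.FalsePositives(thresholds=[0.5])
    for computation in metric.computations():
        print(computation)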

from_config

get_config

Returns serializable config.
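
A minimal sketch of the expected get_config / from_config round trip; the exact contents of the returned dictionary are an assumption here.

    import tensorflow_model_analysis as tfma

    metric = tfma.metrics.FalsePositives(thresholds=[0.3, 0.7], name='fp')
    config = metric.get_config()  # serializable constructor arguments
    restored = tfma.metrics.FalsePositives.from_config(config)
    assert restored.get_config() == config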

result

Function for computing the metric value from TP, TN, FP, and FN values.
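
Conceptually, this metric's result simply reports the false-positive count from the confusion matrix computed at each threshold. The sketch below shows that reduction in isolation; it is not the library's actual signature.

    # Conceptual sketch only: given confusion-matrix counts for one threshold,
    # FalsePositives reports the FP entry.
    def result(tp: float, tn: float, fp: float, fn: float) -> float:
        return fp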