Computes the recall of the predictions with respect to the labels.
Inherits From: Metric
```python
tfma.metrics.Recall(
    thresholds: Optional[Union[float, List[float]]] = None,
    top_k: Optional[int] = None,
    class_id: Optional[int] = None,
    name: Optional[str] = None,
    **kwargs
)
```
The metric uses true positives and false negatives to compute recall by
dividing the true positives by the sum of true positives and false negatives.

If `sample_weight` is `None`, weights default to 1. Use a `sample_weight` of 0
to mask values.
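The formula above can be sketched in plain Python. This is a minimal illustration of the recall computation and the `sample_weight` behaviour, not TFMA's actual implementation; the 0.5 threshold matches the documented default:

```python
def recall(labels, predictions, threshold=0.5, sample_weights=None):
    """Recall = weighted TP / (weighted TP + weighted FN)."""
    if sample_weights is None:
        sample_weights = [1.0] * len(labels)  # weights default to 1
    tp = fn = 0.0
    for y, p, w in zip(labels, predictions, sample_weights):
        if y == 1:  # only positive labels contribute to TP/FN
            if p > threshold:
                tp += w  # predicted positive: true positive
            else:
                fn += w  # predicted negative: false negative
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

print(recall([1, 1, 0, 1], [0.9, 0.2, 0.8, 0.7]))  # 2 of 3 positives found
```

Note that a weight of 0 removes an example from both the numerator and the denominator, which is what "mask values" means here.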
Args:

*   `thresholds`: (Optional) A float value or a python list/tuple of float
    threshold values in [0, 1]. A threshold is compared with prediction
    values to determine the truth value of predictions (i.e., above the
    threshold is `true`, below is `false`). One metric value is generated
    for each threshold value. If neither `thresholds` nor `top_k` is set,
    the default is to calculate recall with `thresholds=0.5`.
*   `top_k`: (Optional) Used with a multi-class model to specify that the
    top-k values should be used to compute the confusion matrix. The net
    effect is that the non-top-k values are set to -inf and the matrix is
    then constructed from the average TP, FP, TN, FN across the classes.
    When `top_k` is used, `metrics_specs.binarize` settings must not be
    present. Only one of `class_id` or `top_k` should be configured. When
    `top_k` is set, the default thresholds are `[float('-inf')]`.
*   `class_id`: (Optional) Used with a multi-class model to specify which
    class to compute the confusion matrix for. When `class_id` is used,
    `metrics_specs.binarize` settings must not be present. Only one of
    `class_id` or `top_k` should be configured.
*   `name`: (Optional) String name of the metric instance.
*   `**kwargs`: (Optional) Additional args to pass along to `__init__` (and
    eventually on to `_metric_computation` and `_metric_value`).
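To illustrate the `class_id` behaviour, the sketch below binarizes a multi-class problem for one class and computes recall for that class alone. The helper `recall_for_class` is hypothetical, written here for illustration; it is not part of the TFMA API:

```python
def recall_for_class(labels, prediction_scores, class_id, threshold=0.5):
    """Recall for a single class of a multi-class model (illustrative only)."""
    tp = fn = 0
    for y, scores in zip(labels, prediction_scores):
        if y == class_id:  # only examples of this class contribute
            if scores[class_id] > threshold:
                tp += 1  # the class's score cleared the threshold
            else:
                fn += 1  # the class's score missed the threshold
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

labels = [0, 2, 2, 1]
scores = [[0.7, 0.2, 0.1],
          [0.1, 0.3, 0.6],
          [0.5, 0.1, 0.4],
          [0.2, 0.6, 0.2]]
print(recall_for_class(labels, scores, class_id=2))  # 1 of 2 class-2 examples
```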
Attributes:

*   `compute_confidence_interval`: Whether to compute confidence intervals
    for this metric. Note that this may not completely remove the
    computational overhead involved in computing a given metric. This is
    only respected by the jackknife confidence interval method.
Methods
computations

```python
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
```

Creates computations associated with metric.
from_config

```python
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
```
get_config

```python
get_config() -> Dict[str, Any]
```

Returns serializable config.
result

```python
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
```

Computes the metric value from the TP, TN, FP, FN counts.
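For recall, this reduction of the four confusion-matrix counts is `tp / (tp + fn)`; `tn` and `fp` are accepted but unused. A minimal sketch of what such a `result` function computes (not TFMA's source):

```python
def result(tp: float, tn: float, fp: float, fn: float) -> float:
    """Recall from confusion-matrix counts; tn and fp are ignored."""
    denominator = tp + fn
    # Guard against an empty positive class (no positives in the labels).
    return tp / denominator if denominator > 0 else 0.0

print(result(tp=8.0, tn=90.0, fp=5.0, fn=2.0))  # 8 / (8 + 2)
```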