tfg.nn.metric.fscore.evaluate
Computes the fscore metric for the given ground truth and predicted labels.
tfg.nn.metric.fscore.evaluate(
    ground_truth: type_alias.TensorLike,
    prediction: type_alias.TensorLike,
    precision_function: Callable[..., Any] = tfg.nn.metric.precision.evaluate,
    recall_function: Callable[..., Any] = tfg.nn.metric.recall.evaluate,
    name: str = 'fscore_evaluate'
) -> tf.Tensor
The fscore is calculated as 2 * (precision * recall) / (precision + recall),
where precision and recall are evaluated by the given function parameters.
The precision and recall functions default to their definitions for boolean
labels (see https://en.wikipedia.org/wiki/Precision_and_recall for more
details).
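As a minimal usage sketch, the call below evaluates the fscore of binary
predictions against binary ground truth using the default precision and
recall functions; the tensor values are invented for illustration, and the
import path assumes the tensorflow_graphics package is installed.

    import tensorflow as tf
    from tensorflow_graphics.nn.metric import fscore

    # Binary ground truth and predictions of shape [N].
    ground_truth = tf.constant([1., 1., 0., 1., 0.])
    prediction = tf.constant([1., 0., 0., 1., 1.])

    # Two of the three predicted positives are correct (precision = 2/3) and
    # two of the three actual positives are recovered (recall = 2/3), so the
    # result is 2 * (2/3 * 2/3) / (2/3 + 2/3) = 2/3.
    result = fscore.evaluate(ground_truth, prediction)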
Note: In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
  ground_truth: A tensor of shape [A1, ..., An, N], where the last axis
    represents the ground truth values.
  prediction: A tensor of shape [A1, ..., An, N], where the last axis
    represents the predicted values.
  precision_function: The function to use for evaluating the precision.
    Defaults to the precision evaluation for binary ground truth and
    predictions.
  recall_function: The function to use for evaluating the recall. Defaults to
    the recall evaluation for binary ground truth and predictions.
  name: A name for this op. Defaults to "fscore_evaluate".
Returns:
  A tensor of shape [A1, ..., An] that stores the fscore metric for the given
  ground truth labels and predictions.
Raises:
  ValueError: If the shape of ground_truth or prediction is not supported.