tfg.nn.metric.fscore.evaluate
View source on GitHub: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/nn/metric/fscore.py#L32-L82
Computes the fscore metric for the given ground truth and predicted labels.
tfg.nn.metric.fscore.evaluate(
    ground_truth: type_alias.TensorLike,
    prediction: type_alias.TensorLike,
    precision_function: Callable[..., Any] = tfg.nn.metric.precision.evaluate,
    recall_function: Callable[..., Any] = tfg.nn.metric.recall.evaluate,
    name: str = 'fscore_evaluate'
) -> tf.Tensor
The fscore is calculated as 2 * (precision * recall) / (precision + recall),
where precision and recall are evaluated by the given function parameters.
Both functions default to their definitions for boolean labels (see
https://en.wikipedia.org/wiki/Precision_and_recall for more details).
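
As a concrete illustration, below is a minimal usage sketch on binary labels, assuming the default precision and recall functions. For this input, precision = 1.0 (both predicted positives are correct) and recall = 2/3 (one true positive is missed), so the fscore is 2 * (1.0 * 2/3) / (1.0 + 2/3) = 0.8.

import tensorflow as tf
from tensorflow_graphics.nn.metric import fscore

# Binary ground truth and predictions over N = 4 values (no batch dims).
ground_truth = tf.constant([1.0, 1.0, 0.0, 1.0])
prediction = tf.constant([1.0, 0.0, 0.0, 1.0])

score = fscore.evaluate(ground_truth, prediction)
print(score)  # ~0.8 for this input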
Note: In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:

ground_truth: A tensor of shape [A1, ..., An, N], where the last axis
represents the ground truth values.

prediction: A tensor of shape [A1, ..., An, N], where the last axis
represents the predicted values.

precision_function: The function to use for evaluating the precision.
Defaults to the precision evaluation for binary ground-truth and predictions.
A sketch of supplying custom functions follows this list.

recall_function: The function to use for evaluating the recall. Defaults to
the recall evaluation for binary ground-truth and predictions.

name: A name for this op. Defaults to "fscore_evaluate".
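
Because precision_function and recall_function are injectable, metric variants can be swapped in. The sketch below assumes each callable is invoked as fn(ground_truth, prediction) and returns a tensor of shape [A1, ..., An], mirroring the defaults; the thresholded wrapper is a hypothetical helper, not part of the library.

import tensorflow as tf
from tensorflow_graphics.nn.metric import fscore, precision, recall

def thresholded(metric_fn):
  # Hypothetical wrapper: binarize soft prediction scores at 0.5 before
  # delegating to the default binary metric. Assumes metric_fn is called
  # as metric_fn(ground_truth, prediction), mirroring the defaults.
  def wrapped(ground_truth, prediction):
    binarized = tf.cast(prediction > 0.5, prediction.dtype)
    return metric_fn(ground_truth, binarized)
  return wrapped

score = fscore.evaluate(
    tf.constant([1.0, 0.0, 1.0]),
    tf.constant([0.9, 0.2, 0.4]),  # soft scores, binarized to [1, 0, 0]
    precision_function=thresholded(precision.evaluate),
    recall_function=thresholded(recall.evaluate))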
Returns:

A tensor of shape [A1, ..., An] that stores the fscore metric for the given
ground truth labels and predictions.
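
To illustrate the batch behavior described above, a sketch with one batch dimension, again assuming the default binary metrics:

import tensorflow as tf
from tensorflow_graphics.nn.metric import fscore

# Shape [2, 4]: one batch dimension A1 = 2 and N = 4 values per example.
ground_truth = tf.constant([[1.0, 1.0, 0.0, 1.0],
                            [0.0, 1.0, 1.0, 0.0]])
prediction = tf.constant([[1.0, 0.0, 0.0, 1.0],
                          [0.0, 1.0, 1.0, 0.0]])

# The result drops the last axis: shape [2], one fscore per batch element.
scores = fscore.evaluate(ground_truth, prediction)
print(scores.shape)  # (2,)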
Raises:

ValueError: If the shape of ground_truth or prediction is not supported.