# tf.compat.v1.metrics.precision_at_top_k
Computes precision@k of the predictions with respect to sparse labels.
    tf.compat.v1.metrics.precision_at_top_k(
        labels,
        predictions_idx,
        k=None,
        class_id=None,
        weights=None,
        metrics_collections=None,
        updates_collections=None,
        name=None
    )
Differs from `sparse_precision_at_k` in that predictions must be in the form of top `k` class indices, whereas `sparse_precision_at_k` expects logits. Refer to `sparse_precision_at_k` for more details.
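For context, here is a minimal sketch (not part of the original reference) of driving this metric in TF1-style graph mode: logits are converted to top-`k` class indices with `tf.math.top_k`, and the toy `labels`/`logits` values are illustrative assumptions.

    import tensorflow as tf

    # This is a v1 metric, so eager execution must be disabled first.
    tf.compat.v1.disable_eager_execution()

    # Toy inputs (illustrative assumptions): 2 examples, 3 classes, 1 label each.
    labels = tf.constant([[0], [2]], dtype=tf.int64)            # [batch_size, num_labels]
    logits = tf.constant([[0.1, 0.8, 0.1],
                          [0.3, 0.6, 0.1]], dtype=tf.float32)   # [batch_size, num_classes]

    # Convert logits to top-k class indices, the form this metric expects.
    _, predictions_idx = tf.math.top_k(logits, k=2)

    precision, update_op = tf.compat.v1.metrics.precision_at_top_k(
        labels=labels,
        predictions_idx=tf.cast(predictions_idx, tf.int64),
        k=2)

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.local_variables_initializer())
        sess.run(update_op)          # accumulates true/false positives
        print(sess.run(precision))   # true_positives / (true_positives + false_positives)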
| Args | |
|---|---|
| `labels` | `int64` `Tensor` or `SparseTensor` with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and `labels` has shape [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values should be in range [0, num_classes), where num_classes is the last dimension of `predictions`. Values outside this range are ignored. |
| `predictions_idx` | Integer `Tensor` with shape [D1, ... DN, k] where N >= 1. Commonly, N=1 and `predictions_idx` has shape [batch_size, k]. The final dimension contains the top `k` predicted class indices. [D1, ... DN] must match `labels`. |
| `k` | Integer, k for @k metric. Only used for the default op name. |
| `class_id` | Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of `predictions`. If `class_id` is outside this range, the method returns NAN. |
| `weights` | `Tensor` whose rank is either 0, or n-1, where n is the rank of `labels`. If the latter, it must be broadcastable to `labels` (i.e., all dimensions must be either 1, or the same as the corresponding `labels` dimension). |
| `metrics_collections` | An optional list of collections that `values` should be added to. |
| `updates_collections` | An optional list of collections that `updates` should be added to. |
| `name` | Name of new update operation, and namespace for other dependent ops. |
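The `class_id` and `weights` arguments can be combined to compute a weighted, per-class precision. Below is a hedged sketch (the example values and per-example weights are assumptions, not from the original page) that restricts the metric to class 0:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    # Illustrative assumptions: 3 examples, top-2 predicted indices per example.
    labels = tf.constant([[0], [2], [0]], dtype=tf.int64)
    predictions_idx = tf.constant([[0, 1], [0, 1], [1, 2]], dtype=tf.int64)

    # Binary precision for class 0 only, with per-example weights
    # (rank n-1 = 1, broadcastable to the [D1] part of labels).
    precision_c0, update_c0 = tf.compat.v1.metrics.precision_at_top_k(
        labels=labels,
        predictions_idx=predictions_idx,
        k=2,
        class_id=0,
        weights=tf.constant([1.0, 0.5, 2.0]))

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.local_variables_initializer())
        sess.run(update_c0)
        # Only predictions of class 0 count: weighted TP=1.0, FP=0.5 here.
        print(sess.run(precision_c0))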
| Returns | |
|---|---|
| `precision` | Scalar `float64` `Tensor` with the value of `true_positives` divided by the sum of `true_positives` and `false_positives`. |
| `update_op` | `Operation` that increments `true_positives` and `false_positives` variables appropriately, and whose value matches `precision`. |
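Because `precision` and `update_op` form a streaming metric, a common pattern is to run `update_op` once per batch and read `precision` at the end. A minimal sketch assuming placeholder-fed batches (the placeholder names and batch values are illustrative):

    import numpy as np
    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    # Hypothetical placeholders so one precision/update_op pair can
    # accumulate counts across many batches.
    labels_ph = tf.compat.v1.placeholder(tf.int64, shape=[None, 1])
    idx_ph = tf.compat.v1.placeholder(tf.int64, shape=[None, 2])

    precision, update_op = tf.compat.v1.metrics.precision_at_top_k(
        labels=labels_ph, predictions_idx=idx_ph, k=2)

    batches = [
        (np.array([[0], [2]]), np.array([[1, 0], [1, 0]])),
        (np.array([[1]]),      np.array([[1, 2]])),
    ]

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.local_variables_initializer())
        for labels_np, idx_np in batches:
            # Each update adds this batch's true/false positives to the counters.
            sess.run(update_op, feed_dict={labels_ph: labels_np, idx_ph: idx_np})
        # Reading `precision` does not modify the accumulated counters.
        print(sess.run(precision))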
| Raises | |
|---|---|
| `ValueError` | If `weights` is not `None` and its shape doesn't match `predictions`, or if either `metrics_collections` or `updates_collections` are not a list or tuple. |
| `RuntimeError` | If eager execution is enabled. |