tf.contrib.metrics.auc_with_confidence_intervals
Computes the AUC and asymptotic normally distributed confidence interval.
tf.contrib.metrics.auc_with_confidence_intervals(
labels, predictions, weights=None, alpha=0.95, logit_transformation=True,
metrics_collections=(), updates_collections=(), name=None
)
USAGE NOTE: this approach requires storing all of the predictions and labels
for a single evaluation in memory, so it may not be usable when the evaluation
batch size and/or the number of evaluation steps is very large.
Computes the area under the ROC curve and its confidence interval using
placement values. This approach is resilient to the distribution of
predictions across batches: labels and predictions are accumulated as
evaluation proceeds, and the final calculation is performed over all of the
concatenated values.
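As a minimal sketch of how the metric is typically constructed (assuming a TF 1.x runtime, since `tf.contrib` is not available in TF 2.x; the placeholder inputs are an illustrative choice, not part of the API):

    # Minimal sketch, assuming TF 1.x (tf.contrib is unavailable in TF 2.x).
    import tensorflow as tf

    # Illustrative inputs: any 0/1 label tensor and float-valued score tensor
    # of matching shape would work; both are flattened internally.
    labels = tf.placeholder(tf.int64, shape=[None])
    predictions = tf.placeholder(tf.float64, shape=[None])

    # `auc_ci` is a 1-D tensor holding [auc, lower, upper]; `update_op`
    # concatenates each batch's labels and predictions onto the accumulators.
    auc_ci, update_op = tf.contrib.metrics.auc_with_confidence_intervals(
        labels=labels, predictions=predictions, alpha=0.95)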
| Args | |
|------|---|
| `labels` | A `Tensor` of ground truth labels with the same shape as `predictions`, with values of 0 or 1 that are castable to `int64`. |
| `predictions` | A `Tensor` of predictions whose values are castable to `float64`. Will be flattened into a 1-D `Tensor`. |
| `weights` | Optional `Tensor` whose rank is either 0, or the same rank as `labels`. |
| `alpha` | Confidence interval level desired. |
| `logit_transformation` | A boolean value indicating whether the estimate should be logit transformed prior to calculating the confidence interval. Doing so enforces the restriction that the AUC should never be outside the interval [0,1]. |
| `metrics_collections` | An optional iterable of collections that `auc` should be added to. |
| `updates_collections` | An optional iterable of collections that `update_op` should be added to. |
| `name` | An optional name for the `variable_scope` that contains the metric variables. |
| Returns | |
|---------|---|
| `auc` | A 1-D `Tensor` containing the current area-under-curve, lower, and upper confidence interval values. |
| `update_op` | An operation that concatenates the input labels and predictions to the accumulated values. |
| Raises | |
|--------|---|
| `ValueError` | If `labels`, `predictions`, and `weights` have mismatched shapes, or if `alpha` isn't in the range (0,1). |
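A sketch of an evaluation loop using the tensors defined in the example above (the `eval_batches` iterable of NumPy array pairs is hypothetical), running `update_op` once per batch and then reading the AUC and its confidence bounds:

    with tf.Session() as sess:
        # The metric's accumulator variables live in the local-variables
        # collection, so they must be initialized before evaluation.
        sess.run(tf.local_variables_initializer())

        # `eval_batches` is a hypothetical iterable of (labels, scores) arrays.
        for batch_labels, batch_scores in eval_batches:
            sess.run(update_op, feed_dict={labels: batch_labels,
                                           predictions: batch_scores})

        # `auc_ci` is [auc, lower, upper], computed over all accumulated values.
        auc_value, lower, upper = sess.run(auc_ci)
        print('AUC = %.4f, 95%% CI = [%.4f, %.4f]' % (auc_value, lower, upper))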