tf.keras.metrics.OneHotMeanIoU

Computes mean Intersection-Over-Union metric for one-hot encoded labels.

Inherits From: MeanIoU, IoU, Metric, Layer, Module

General definition and computation:

Intersection-Over-Union is a common evaluation metric for semantic image segmentation.

For an individual class, the IoU metric is defined as follows:

iou = true_positives / (true_positives + false_positives + false_negatives)

To compute IoUs, the predictions are accumulated in a confusion matrix, weighted by sample_weight, and the metric is then calculated from that matrix.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.

This class can be used to compute the mean IoU for multi-class classification tasks where the labels are one-hot encoded (the last axis should have one dimension per class). Note that the predictions should also have the same shape. To compute the mean IoU, first the labels and predictions are converted back into integer format by taking the argmax over the class axis. Then the same computation steps as for the base MeanIoU class apply.
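
For illustration, the following sketch (assuming TensorFlow 2.x with eager execution, and arbitrarily chosen data values) shows that OneHotMeanIoU on one-hot inputs gives the same result as the base MeanIoU applied to the argmax-decoded labels and predictions:

import tensorflow as tf

y_true = tf.constant([[0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 0, 0]])
y_pred = tf.constant([[0.2, 0.3, 0.5], [0.1, 0.2, 0.7],
                      [0.5, 0.3, 0.1], [0.1, 0.4, 0.5]])

# OneHotMeanIoU performs the argmax decoding internally.
one_hot_m = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
one_hot_m.update_state(y_true, y_pred)

# Equivalent computation with MeanIoU on integer class ids.
int_m = tf.keras.metrics.MeanIoU(num_classes=3)
int_m.update_state(tf.argmax(y_true, axis=-1), tf.argmax(y_pred, axis=-1))

print(one_hot_m.result().numpy(), int_m.result().numpy())  # the two results agree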

Note that if there is only one channel in the labels and predictions, this class is the same as MeanIoU; in that case, use MeanIoU directly.

Also, make sure that num_classes is equal to the number of classes in the data, to avoid a "labels out of bound" error when the confusion matrix is computed.
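
Because the labels are one-hot encoded, one simple way to keep num_classes consistent with the data (a sketch, assuming labels shaped (..., num_classes) as in the snippet above) is to read it off the class axis:

num_classes = y_true.shape[-1]  # depth of the one-hot class axis, 3 in the sketch above
m = tf.keras.metrics.OneHotMeanIoU(num_classes=num_classes)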

Args

num_classes The possible number of labels the prediction task can have. A confusion matrix of shape (num_classes, num_classes) will be allocated to accumulate predictions from which the metric is calculated.
name (Optional) string name of the metric instance.
dtype (Optional) data type of the metric result.

Standalone usage:

y_true = tf.constant([[0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 0, 0]])
y_pred = tf.constant([[0.2, 0.3, 0.5], [0.1, 0.2, 0.7], [0.5, 0.3, 0.1],
                      [0.1, 0.4, 0.5]])
sample_weight = [0.1, 0.2, 0.3, 0.4]
m = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
m.update_state(y_true=y_true, y_pred=y_pred, sample_weight=sample_weight)
# cm = [[0, 0, 0.2+0.4],
#       [0.3, 0, 0],
#       [0, 0, 0.1]]
# sum_row = [0.3, 0, 0.7], sum_col = [0.6, 0.3, 0.1]
# true_positives = [0, 0, 0.1]
# single_iou = true_positives / (sum_row + sum_col - true_positives)
# mean_iou = (0 + 0 + 0.1 / (0.7 + 0.1 - 0.1)) / 3
m.result().numpy()
0.048
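
The commented values can be verified by hand. The following NumPy sketch (an illustration of the computation steps, not the library implementation) mirrors the weighted confusion-matrix accumulation over the argmax-decoded labels from the standalone usage above:

import numpy as np

true_ids = np.argmax(np.asarray(y_true), axis=-1)   # [2, 0, 1, 0]
pred_ids = np.argmax(np.asarray(y_pred), axis=-1)   # [2, 2, 0, 2]
cm = np.zeros((3, 3))
for t, p, w in zip(true_ids, pred_ids, sample_weight):
    cm[t, p] += w                                   # rows: true class, columns: predicted class

tp = np.diag(cm)                                    # [0, 0, 0.1]
denom = cm.sum(axis=0) + cm.sum(axis=1) - tp
iou = np.divide(tp, denom, out=np.zeros_like(tp), where=denom != 0)
print(iou.sum() / np.count_nonzero(denom))          # ~0.048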

Usage with compile() API:

model.compile(
  optimizer='sgd',
  loss='mse',
  metrics=[tf.keras.metrics.OneHotMeanIoU(num_classes=3)])
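
A minimal end-to-end sketch with one-hot labels (the toy model, data shapes, and loss below are illustrative assumptions, not requirements of this metric) could look like:

import numpy as np
import tensorflow as tf

# Toy data: 4 input features, 3 classes, labels one-hot encoded on the last axis.
x = np.random.rand(8, 4).astype("float32")
y = tf.one_hot(np.random.randint(0, 3, size=(8,)), depth=3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='softmax', input_shape=(4,)),
])
model.compile(
  optimizer='sgd',
  loss='categorical_crossentropy',
  metrics=[tf.keras.metrics.OneHotMeanIoU(num_classes=3)])
model.fit(x, y, epochs=1, verbose=0)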

Methods

merge_state

Merges the state from one or more metrics.

This method can be used by distributed systems to merge the state computed by different metric instances. Typically the state will be stored in the form of the metric's weights. For example, a tf.keras.metrics.Mean metric contains a list of two weight values: a total and a count. If there were two instances of a tf.keras.metrics.Accuracy that each independently aggregated partial state for an overall accuracy calculation, these two metrics' states could be combined as follows:

m1 = tf.keras.metrics.Accuracy()
_ = m1.update_state([[1], [2]], [[0], [2]])
m2 = tf.keras.metrics.Accuracy()
_ = m2.update_state([[3], [4]], [[3], [4]])
m2.merge_state([m1])
m2.result().numpy()
0.75
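
The same pattern applies to this class. A sketch merging two independently updated OneHotMeanIoU instances (data values are illustrative):

m1 = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
m1.update_state([[0, 0, 1], [1, 0, 0]], [[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])
m2 = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
m2.update_state([[0, 1, 0], [0, 0, 1]], [[0.2, 0.7, 0.1], [0.2, 0.2, 0.6]])
m2.merge_state([m1])  # m2 now also holds the confusion-matrix counts accumulated by m1
print(m2.result().numpy())  # 1.0: every prediction in both sketches is correct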

Args
metrics an iterable of metrics. The metrics must have compatible state.

Raises
ValueError If the provided iterable does not contain metrics matching the metric's required specifications.

reset_state

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.
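
For example (a sketch), when driving the metric manually, the accumulated confusion matrix can be cleared between evaluation passes:

m = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
m.update_state([[0, 0, 1]], [[0.1, 0.2, 0.7]])
m.reset_state()  # clears the accumulated confusion matrix
print(m.result().numpy())  # 0.0 once the state is reset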

result

Computes the mean intersection-over-union via the confusion matrix.

update_state

Accumulates the confusion matrix statistics.

Args
y_true The ground truth values.
y_pred The predicted values.
sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.

Returns
Update op.
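
As noted in the overview, a per-example sample_weight of 0 masks that example out of the accumulated statistics. A sketch (data values are illustrative):

m = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
m.update_state(
    y_true=[[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    y_pred=[[0.1, 0.2, 0.7], [0.6, 0.3, 0.1], [0.7, 0.2, 0.1]],
    sample_weight=[1.0, 1.0, 0.0])  # the third, mispredicted example carries zero weight
print(m.result().numpy())  # 1.0: the zero-weighted example does not lower the score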