tf.keras.metrics.BinaryCrossentropy
Computes the crossentropy metric between the labels and predictions.
tf.keras.metrics.BinaryCrossentropy(
name='binary_crossentropy', dtype=None, from_logits=False, label_smoothing=0
)
This is the crossentropy metric class to be used when there are only two
label classes (0 and 1).
Usage:
m = tf.keras.metrics.BinaryCrossentropy()
_ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
m.result().numpy()
0.81492424
m.reset_states()
_ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],
                   sample_weight=[1, 0])
m.result().numpy()
0.9162905
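The metric can also consume raw logits. A minimal sketch, assuming illustrative logit values (not taken from this page), with from_logits=True so the sigmoid is applied internally before the crossentropy is computed:

import tensorflow as tf

# Hypothetical logit scores; with from_logits=True the metric applies the
# sigmoid itself, so y_pred does not need to lie in [0, 1].
m = tf.keras.metrics.BinaryCrossentropy(from_logits=True)
_ = m.update_state([[0., 1.], [0., 0.]], [[1.2, -0.5], [-0.3, 0.8]])
print(m.result().numpy())  # crossentropy of sigmoid(logits) vs. the labels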
Usage with tf.keras API:
model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.BinaryCrossentropy()])
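A minimal end-to-end sketch (the toy data, input shape, and layer below are assumptions, not part of the API) showing the metric reported by model.fit:

import numpy as np
import tensorflow as tf

# Tiny model on made-up data, just to show the metric flowing through fit().
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.BinaryCrossentropy()])

x = np.random.random((8, 4)).astype('float32')
y = np.random.randint(0, 2, size=(8, 1)).astype('float32')
history = model.fit(x, y, epochs=1, verbose=0)
print(history.history['binary_crossentropy'])  # metric value for the epoch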
Args

name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
from_logits: (Optional) Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution.
label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. For example, label_smoothing=0.2 means that we will use a value of 0.1 for label 0 and 0.9 for label 1.
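A short sketch of the label_smoothing behaviour described above, reusing the values from the Usage section (the exact smoothed result is not listed on this page, so it is not shown here):

import tensorflow as tf

# With label_smoothing=0.2, a hard 0 is treated as 0.1 and a hard 1 as 0.9
# before the crossentropy is computed.
m = tf.keras.metrics.BinaryCrossentropy(label_smoothing=0.2)
_ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
print(m.result().numpy())  # differs from the unsmoothed 0.81492424 above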
Methods
reset_states
reset_states()
Resets all of the metric state variables.
This function is called between epochs/steps,
when a metric is evaluated during training.
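A brief sketch of this lifecycle, reusing the Usage values split into two single-sample batches; state accumulates across update_state calls and is cleared by reset_states:

import tensorflow as tf

m = tf.keras.metrics.BinaryCrossentropy()
batches = [([[0., 1.]], [[0.6, 0.4]]),
           ([[0., 0.]], [[0.4, 0.6]])]
for y_true, y_pred in batches:
    m.update_state(y_true, y_pred)  # state accumulates across batches
print(m.result().numpy())           # 0.81492424, matching the Usage example
m.reset_states()                     # clear the state, e.g. between epochs
print(m.result().numpy())           # 0.0 after the reset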
result
result()
Computes and returns the metric value tensor.
Result computation is an idempotent operation that simply calculates the
metric value using the state variables.
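Because result only reads the state variables, repeated calls return the same value until the state changes, as this small sketch (reusing the Usage values) illustrates:

import tensorflow as tf

m = tf.keras.metrics.BinaryCrossentropy()
_ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
# result() does not mutate the state, so calling it twice is harmless.
print(m.result().numpy())  # 0.81492424
print(m.result().numpy())  # 0.81492424 again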
update_state
update_state(
y_true, y_pred, sample_weight=None
)
Accumulates metric statistics.
y_true and y_pred should have the same shape.
Args

y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
sample_weight: Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1).)

Returns

Update op.
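A short sketch of the per-sample weighting described for sample_weight, reusing the Usage values; the 0.7/0.3 weights are illustrative, not from this page:

import tensorflow as tf

m = tf.keras.metrics.BinaryCrossentropy()
# One weight per sample (shape [batch_size]); the running result becomes the
# weighted mean of the per-sample crossentropies.
_ = m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],
                   sample_weight=[0.7, 0.3])
print(m.result().numpy())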