Computes the Kullback-Leibler divergence metric between `y_true` and `y_pred`.
Inherits From: MeanMetricWrapper, Mean, Metric
```python
tf.keras.metrics.KLDivergence(
    name='kl_divergence', dtype=None
)
```
Formula:

```python
metric = y_true * log(y_true / y_pred)
```
`y_true` and `y_pred` are expected to be probability distributions, with values between 0 and 1. They will be clipped to the `[0, 1]` range.
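For instance, the first result shown under Examples below can be reproduced by hand. A minimal NumPy sketch of the formula, assuming the clipping uses a small epsilon as its lower bound (as `keras.config.epsilon()` does) so that the `log` stays finite:

```python
import numpy as np

epsilon = 1e-7  # assumed lower clip bound, standing in for keras.config.epsilon()
y_true = np.clip(np.array([[0.0, 1.0], [0.0, 0.0]]), epsilon, 1.0)
y_pred = np.clip(np.array([[0.6, 0.4], [0.4, 0.6]]), epsilon, 1.0)

# Per-sample KL divergence, summed over the class axis,
# then averaged over the batch.
kld = np.sum(y_true * np.log(y_true / y_pred), axis=-1)
print(kld.mean())  # ~0.458143, matching the first example below
```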
| Args | |
|---|---|
| `name` | (Optional) string name of the metric instance. |
| `dtype` | (Optional) data type of the metric result. |
Examples:
```python
m = keras.metrics.KLDivergence()
m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
m.result()
# 0.45814306
```

```python
m.reset_state()
m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],
               sample_weight=[1, 0])
m.result()
# 0.9162892
```
Usage with compile() API:
```python
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[keras.metrics.KLDivergence()])
```
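A minimal end-to-end sketch of this usage, assuming a toy two-class model and random data (the model and data here are illustrative, not from the original page):

```python
import numpy as np
import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss="mse",
              metrics=[keras.metrics.KLDivergence()])

# Toy one-hot targets, so y_true is a valid probability distribution.
x = np.random.random((8, 4)).astype("float32")
y = np.eye(2)[np.random.randint(0, 2, size=8)].astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```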
| Attributes | |
|---|---|
| `dtype` | Data type of the metric result. |
| `variables` | The list of the metric's state variables. |
Methods
add_variable
```python
add_variable(
    shape, initializer, dtype=None, aggregation='sum', name=None
)
```
add_weight
```python
add_weight(
    shape=(), initializer=None, dtype=None, name=None
)
```
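`add_variable()` and `add_weight()` are mainly used when subclassing `Metric`. A minimal sketch of a hand-rolled KL divergence metric built on `add_variable()` (the update logic is illustrative, not Keras's exact implementation):

```python
import keras
from keras import ops

class SimpleKLD(keras.metrics.Metric):
    def __init__(self, name="simple_kld", dtype=None):
        super().__init__(name=name, dtype=dtype)
        # Scalar accumulators for the running sum and the sample count.
        self.total = self.add_variable(
            shape=(), initializer=keras.initializers.Zeros(), name="total"
        )
        self.count = self.add_variable(
            shape=(), initializer=keras.initializers.Zeros(), name="count"
        )

    def update_state(self, y_true, y_pred, sample_weight=None):
        eps = keras.config.epsilon()
        y_true = ops.clip(ops.convert_to_tensor(y_true, self.dtype), eps, 1.0)
        y_pred = ops.clip(ops.convert_to_tensor(y_pred, self.dtype), eps, 1.0)
        # Per-sample KL divergence, summed over the class axis.
        kld = ops.sum(y_true * ops.log(y_true / y_pred), axis=-1)
        self.total.assign(self.total + ops.sum(kld))
        self.count.assign(self.count + ops.cast(ops.size(kld), self.dtype))

    def result(self):
        return self.total / self.count
```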
from_config
```python
@classmethod
from_config(config)
```
get_config
```python
get_config()
```
Return the serializable config of the metric.
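Together with `from_config()` above, this supports a simple serialization round-trip (the exact contents of the config dict are an assumption):

```python
m = keras.metrics.KLDivergence(name="kld")
config = m.get_config()  # a plain dict, e.g. holding 'name' and 'dtype'
m2 = keras.metrics.KLDivergence.from_config(config)
assert m2.name == "kld"
```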
reset_state
```python
reset_state()
```
Reset all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.
result
```python
result()
```
Compute the current metric value.
| Returns |
|---|
| A scalar tensor, or a dictionary of scalar tensors. |
stateless_reset_state
```python
stateless_reset_state()
```
stateless_result
```python
stateless_result(
    metric_variables
)
```
stateless_update_state
```python
stateless_update_state(
    metric_variables, *args, **kwargs
)
```
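The three `stateless_*` methods support functional-style training loops (e.g. with the JAX backend), where metric state is passed around explicitly instead of being mutated on the object. A hedged sketch, assuming each call returns the new variable values rather than modifying `m`:

```python
m = keras.metrics.KLDivergence()
variables = m.stateless_reset_state()  # fresh variable values (assumed return)
variables = m.stateless_update_state(
    variables, [[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]
)
print(m.stateless_result(variables))  # same value as the stateful example
# `m` itself is left untouched; all state lives in `variables`.
```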
update_state
```python
update_state(
    y_true, y_pred, sample_weight=None
)
```
Accumulate statistics for the metric.
__call__
```python
__call__(
    *args, **kwargs
)
```
Call self as a function.
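For Keras metrics, calling the instance is a convenience that updates the state and returns the current result in one step:

```python
m = keras.metrics.KLDivergence()
value = m([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
# Equivalent to m.update_state(...) followed by m.result().
```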