tf.keras.losses.cosine_similarity
Computes the cosine similarity between labels and predictions.
View aliases
Main aliases
`tf.losses.cosine_similarity`
Compat aliases for migration
See the Migration guide for more details.
`tf.compat.v1.keras.losses.cosine`, `tf.compat.v1.keras.losses.cosine_proximity`, `tf.compat.v1.keras.losses.cosine_similarity`, `tf.compat.v1.keras.metrics.cosine`, `tf.compat.v1.keras.metrics.cosine_proximity`
tf.keras.losses.cosine_similarity(
y_true, y_pred, axis=-1
)
The loss is a number between -1 and 1: 0 indicates orthogonality,
values closer to -1 indicate greater similarity, and values closer
to 1 indicate greater dissimilarity. This makes it usable as a loss
function in a setting where you try to maximize the proximity
between predictions and targets. If either `y_true` or `y_pred` is
a zero vector, the cosine similarity is 0 regardless of the
proximity between predictions and targets.
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
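To make the pseudocode above concrete, here is a minimal sketch (under the assumption that the per-vector normalization matches `tf.nn.l2_normalize`) that reproduces the loss by hand:

import tensorflow as tf

def manual_cosine_similarity(y_true, y_pred, axis=-1):
    # Scale each vector along `axis` to unit L2 norm. A zero vector
    # stays all-zero because of the epsilon floor inside l2_normalize,
    # which is why the zero-vector case above yields a loss of 0.
    y_true = tf.nn.l2_normalize(y_true, axis=axis)
    y_pred = tf.nn.l2_normalize(y_pred, axis=axis)
    # Negated sum of the elementwise product of the normalized vectors.
    return -tf.reduce_sum(y_true * y_pred, axis=axis)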
Standalone usage:
y_true = [[0., 1.], [1., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
loss.numpy()
array([-0., -0.999, 0.999], dtype=float32)
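In training code the function is more often passed to `Model.compile`, either directly or through the `tf.keras.losses.CosineSimilarity` class when `axis` or the reduction needs configuring. A sketch with a hypothetical single-layer model:

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(optimizer='sgd',
              loss=tf.keras.losses.CosineSimilarity(axis=-1))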
Args

`y_true`: Tensor of true targets.
`y_pred`: Tensor of predicted targets.
`axis`: Axis along which to determine similarity. Defaults to -1, the last axis.

Returns

Cosine similarity tensor.
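To illustrate the `axis` argument, the following sketch (with made-up 2x2 inputs) computes the similarity down the columns instead of across each row:

y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]
# With axis=0, each column pair is normalized and compared,
# giving one loss value per column.
loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=0)
loss.numpy()
# approximately array([-0.7071, -0.7071], dtype=float32)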