tf.keras.losses.cosine_similarity
Computes the cosine similarity between labels and predictions.
tf.keras.losses.cosine_similarity(
y_true, y_pred, axis=-1
)
Formula:

loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
The result is a number between -1 and 1: 0 indicates orthogonality, and values closer to -1 indicate greater similarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, the cosine similarity will be 0 regardless of the proximity between predictions and targets.
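As a rough sketch of the formula above (plain NumPy, not the actual Keras implementation), the loss L2-normalizes both inputs along the given axis, multiplies them elementwise, and negates the sum. Handling zero-norm vectors by normalizing them to zero is what produces the "zero vector gives 0" behavior described above:

```python
import numpy as np

def cosine_similarity_loss(y_true, y_pred, axis=-1):
    """NumPy sketch of: loss = -sum(l2_norm(y_true) * l2_norm(y_pred))."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)

    def l2_normalize(x):
        norm = np.linalg.norm(x, axis=axis, keepdims=True)
        # A zero vector normalizes to zero, so its loss contribution is 0.
        return np.divide(x, norm, out=np.zeros_like(x), where=norm != 0)

    return -np.sum(l2_normalize(y_true) * l2_normalize(y_pred), axis=axis)
```

Identical direction gives -1, opposite direction gives 1, and orthogonal vectors give 0, matching the documented range.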
Args

y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: Axis along which to determine similarity. Defaults to -1.
Returns

Cosine similarity tensor.
Example:

y_true = [[0., 1.], [1., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
loss = keras.losses.cosine_similarity(y_true, y_pred, axis=-1)
# loss: [-0., -0.99999994, 0.99999994]
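To illustrate the axis argument (again as a plain-NumPy sketch, not the Keras code): axis=-1 compares each row of the two tensors, while axis=0 compares each column, and a zero-vector column yields 0 as described above:

```python
import numpy as np

def l2n(x, axis):
    # L2-normalize along `axis`; zero vectors map to zero.
    n = np.linalg.norm(x, axis=axis, keepdims=True)
    return np.divide(x, n, out=np.zeros_like(x), where=n != 0)

y_true = np.array([[0., 1.], [1., 0.]])
y_pred = np.array([[1., 0.], [1., 0.]])

# axis=-1: one loss value per row (per sample).
loss_rows = -np.sum(l2n(y_true, -1) * l2n(y_pred, -1), axis=-1)
# axis=0: one loss value per column (per feature).
loss_cols = -np.sum(l2n(y_true, 0) * l2n(y_pred, 0), axis=0)
```

Here loss_rows is [0., -1.] (orthogonal row, then identical row), and the second entry of loss_cols is 0 because the second column of y_pred is the zero vector.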
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-06-07 UTC.