tf.contrib.losses.metric_learning.contrastive_loss
Computes the contrastive loss.
tf.contrib.losses.metric_learning.contrastive_loss(
    labels, embeddings_anchor, embeddings_positive, margin=1.0
)
This loss encourages the embeddings of samples with the same label to be
close to each other, and the embeddings of samples with different labels to
be at least `margin` apart.
See: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
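The computation can be summarized by the following sketch, assuming the
standard contrastive-loss formulation from the paper above (Euclidean
distance between anchor and positive embeddings, mean reduction over the
batch); the function name and exact reduction are illustrative, not the
library's internal code:

import tensorflow as tf

def contrastive_loss_sketch(labels, embeddings_anchor, embeddings_positive,
                            margin=1.0):
  # Euclidean distance between each anchor/positive embedding pair.
  distances = tf.sqrt(
      tf.reduce_sum(tf.square(embeddings_anchor - embeddings_positive),
                    axis=1))
  labels = tf.cast(labels, tf.float32)
  # Same-label pairs (label == 1) are pulled together; different-label pairs
  # (label == 0) are pushed apart until their distance reaches `margin`.
  return tf.reduce_mean(
      labels * tf.square(distances) +
      (1.0 - labels) * tf.square(tf.maximum(margin - distances, 0.0)))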
Args
labels: 1-D tf.int32 Tensor with shape [batch_size] of binary labels indicating positive vs negative pair.
embeddings_anchor: 2-D float Tensor of embedding vectors for the anchor images. Embeddings should be l2 normalized.
embeddings_positive: 2-D float Tensor of embedding vectors for the positive images. Embeddings should be l2 normalized.
margin: margin term in the loss definition.
Returns
contrastive_loss: tf.float32 scalar.
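A minimal usage sketch in a TF 1.x graph session; the batch size, embedding
dimension, and values below are illustrative, and the embeddings are l2
normalized as the arguments require:

import tensorflow as tf

# Hypothetical batch of 4 pairs: labels mark positive (1) vs negative (0) pairs.
labels = tf.constant([1, 0, 1, 0], dtype=tf.int32)
anchors = tf.nn.l2_normalize(tf.random_normal([4, 128]), axis=1)
positives = tf.nn.l2_normalize(tf.random_normal([4, 128]), axis=1)

loss = tf.contrib.losses.metric_learning.contrastive_loss(
    labels=labels,
    embeddings_anchor=anchors,
    embeddings_positive=positives,
    margin=1.0)

with tf.Session() as sess:
  print(sess.run(loss))  # A single tf.float32 scalar.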