

Maximum Mean Discrepancy between predictions on two groups of examples.

Inherits From: MinDiffLoss

kernel String (name of kernel) or losses.MinDiffKernel instance to be applied on the predictions. Defaults to 'gaussian'. It is recommended that this be either 'gaussian' (min_diff.losses.GaussianKernel) or 'laplacian' (min_diff.losses.LaplacianKernel).
predictions_transform Optional transform function to be applied to the predictions. This can be used to smooth out the distributions or limit the range of predictions.

The choice of whether to apply a transform to the predictions is task and data dependent. For example, for classifiers, it might make sense to apply a tf.sigmoid transform to the predictions (if this is not done already) so that MMD is calculated in probability space rather than on raw predictions. In some cases, such as regression, not having any transform is more likely to yield successful results.
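As a minimal NumPy sketch (not the library API), the effect of a sigmoid transform is to map raw logits into (0, 1) probability space before the distance between the two groups is measured; in the library itself this corresponds to passing a sigmoid function as predictions_transform:

```python
import numpy as np

def sigmoid(x):
    # Numerically stable logistic transform into (0, 1).
    return np.where(x >= 0,
                    1.0 / (1.0 + np.exp(-np.abs(x))),
                    np.exp(-np.abs(x)) / (1.0 + np.exp(-np.abs(x))))

# Raw classifier logits can span a wide range; the transform maps
# them into probability space before MMD is computed.
logits = np.array([-4.0, -1.0, 0.0, 2.0, 6.0])
probs = sigmoid(logits)
```

Distances between probabilities are bounded in (0, 1), so a single kernel bandwidth is more likely to be appropriate for both groups than it would be on unbounded logits.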

name Name used for logging and tracking. Defaults to 'mmd_loss'.

The Maximum Mean Discrepancy (MMD) is a measure of the distance between the distributions of prediction scores on two groups of examples. The metric guarantees that the result is 0 if and only if the two distributions it is comparing are exactly the same.
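The quantity being estimated can be sketched in plain NumPy. This is a biased estimator of squared MMD with a Gaussian kernel, illustrating the definition above rather than reproducing the library's implementation (the bandwidth value here is an arbitrary assumption):

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel between two 1-D score vectors.
    diff = x[:, None] - y[None, :]
    return np.exp(-(diff ** 2) / (2.0 * bandwidth ** 2))

def mmd_squared(scores_a, scores_b, bandwidth=1.0):
    # Biased estimator of squared MMD between two score distributions:
    #   MMD^2 = E[k(a, a')] + E[k(b, b')] - 2 E[k(a, b)]
    k_aa = gaussian_kernel(scores_a, scores_a, bandwidth).mean()
    k_bb = gaussian_kernel(scores_b, scores_b, bandwidth).mean()
    k_ab = gaussian_kernel(scores_a, scores_b, bandwidth).mean()
    return k_aa + k_bb - 2.0 * k_ab

a = np.array([0.1, 0.2, 0.3])
b = np.array([0.7, 0.8, 0.9])
print(mmd_squared(a, a))  # identical distributions -> 0 (up to floating point)
print(mmd_squared(a, b))  # shifted distributions -> strictly positive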

The membership input uses a numerical value to indicate whether each example is part of the sensitive group. Currently only hard membership values of 0.0 or 1.0 are supported.
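A hard membership vector effectively partitions the batch of predictions into the two groups being compared; a small NumPy sketch of that split (example values are illustrative only):

```python
import numpy as np

# Hard membership labels: 1.0 marks examples in the sensitive group.
membership = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
predictions = np.array([0.2, 0.7, 0.4, 0.6, 0.9])

# Boolean masks select each group's prediction scores; the MMD is
# then computed between these two sets.
in_group = predictions[membership == 1.0]
out_group = predictions[membership == 0.0]
```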

For more details, see the paper.