Maximum Mean Discrepancy between predictions on two groups of examples.
Inherits From: `MinDiffLoss`
```python
model_remediation.min_diff.losses.MMDLoss(
    kernel='gaussian',
    predictions_transform=None,
    name: Optional[str] = None,
    enable_summary_histogram: Optional[bool] = True
)
```
| Arguments | |
|---|---|
| `kernel` | String (name of kernel) or `losses.MinDiffKernel` instance to be applied on the predictions. Defaults to `'gaussian'`; it is recommended that this be either `'gaussian'` (`min_diff.losses.GaussianKernel`) or `'laplacian'` (`min_diff.losses.LaplacianKernel`). |
| `predictions_transform` | Optional transform function to be applied to the predictions. This can be used to smooth out the distributions or limit the range of predictions. Whether to apply a transform is task and data dependent; for classifiers, for example, it can make sense to apply a sigmoid so that the loss is computed on probabilities rather than raw logits. |
| `name` | Name used for logging and tracking. Defaults to `'mmd_loss'`. |
| `enable_summary_histogram` | Optional `bool` indicating whether a `tf.summary.histogram` should be included within the loss. Defaults to `True`. |
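As a minimal sketch of constructing the loss (assuming the `tensorflow_model_remediation` package is installed; the choice of `tf.sigmoid` as the `predictions_transform` is illustrative, not a library requirement):

```python
import tensorflow as tf
from tensorflow_model_remediation import min_diff

# Default: Gaussian kernel, no transform of the predictions.
mmd_loss = min_diff.losses.MMDLoss()

# Laplacian kernel, mapping logits to probabilities before computing the loss.
mmd_loss = min_diff.losses.MMDLoss(
    kernel='laplacian',
    predictions_transform=tf.sigmoid)
```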
The Maximum Mean Discrepancy (MMD) is a measure of the distance between the distributions of prediction scores on two groups of examples. With a characteristic kernel such as the Gaussian or Laplacian kernel, MMD is 0 if and only if the two distributions being compared are identical.
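To illustrate the quantity being estimated, the following NumPy sketch computes a (biased) squared-MMD estimate between two sets of 1-D prediction scores using one common Gaussian-kernel parameterization. The helper names and the `bandwidth` value are illustrative and do not mirror the library's internal implementation.

```python
import numpy as np

def _gaussian_kernel(x, y, bandwidth=1.0):
    # k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2)) for every pair (a, b).
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * bandwidth ** 2))

def mmd_squared(preds_a, preds_b, bandwidth=1.0):
    # Biased estimate: MMD^2 = E[k(a, a')] + E[k(b, b')] - 2 * E[k(a, b)].
    k_aa = _gaussian_kernel(preds_a, preds_a, bandwidth).mean()
    k_bb = _gaussian_kernel(preds_b, preds_b, bandwidth).mean()
    k_ab = _gaussian_kernel(preds_a, preds_b, bandwidth).mean()
    return k_aa + k_bb - 2.0 * k_ab

# Identical score distributions give 0; well-separated ones do not.
print(mmd_squared(np.array([0.1, 0.2, 0.3]), np.array([0.1, 0.2, 0.3])))  # 0.0
print(mmd_squared(np.array([0.1, 0.2, 0.3]), np.array([0.7, 0.8, 0.9])))  # > 0
```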
The membership input indicates with a numerical value whether each example is part of the sensitive group. This currently only supports hard membership values of `0.0` or `1.0`.
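For example, a sketch of invoking the loss directly on a batch, using the `(membership, predictions)` call signature of the `MinDiffLoss` base class (the tensor values here are made up for illustration):

```python
import tensorflow as tf
from tensorflow_model_remediation import min_diff

mmd_loss = min_diff.losses.MMDLoss()

# 1.0 marks examples in the sensitive group, 0.0 all other examples;
# soft membership values between 0.0 and 1.0 are not supported.
membership = tf.constant([[1.0], [0.0], [1.0], [0.0]])
predictions = tf.constant([[0.8], [0.2], [0.7], [0.1]])

loss_value = mmd_loss(membership, predictions)
```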
For more details, see the paper.