# tfr.keras.losses.ListMLELoss

[View source on GitHub](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L894-L976)

Computes ListMLE loss between `y_true` and `y_pred`.

    tfr.keras.losses.ListMLELoss(
        reduction: tf.losses.Reduction = tf.losses.Reduction.AUTO,
        name: Optional[str] = None,
        lambda_weight: Optional[losses_impl._LambdaWeight] = None,
        temperature: float = 1.0,
        ragged: bool = False
    )

Implements ListMLE loss ([Xia et al., 2008](https://dl.acm.org/doi/10.1145/1390156.1390306)). For each list of scores `s` in `y_pred` and list of labels `y` in `y_true`:

    loss = -log P(permutation_y | s)
    P(permutation_y | s) = Plackett-Luce probability of permutation_y given s
    permutation_y = permutation of items sorted by labels y

**Note:** This loss is stochastic and may return different values for identical inputs, since ties in the labels are broken randomly.

#### Standalone usage:

    tf.random.set_seed(42)
    y_true = [[1., 0.]]
    y_pred = [[0.6, 0.8]]
    loss = tfr.keras.losses.ListMLELoss()
    loss(y_true, y_pred).numpy()
    0.7981389

    # Using ragged tensors
    tf.random.set_seed(42)
    y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
    y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
    loss = tfr.keras.losses.ListMLELoss(ragged=True)
    loss(y_true, y_pred).numpy()
    1.1613163

Usage with the `compile()` API:

    model.compile(optimizer='sgd', loss=tfr.keras.losses.ListMLELoss())

#### Definition:

\[
\mathcal{L}(\{y\}, \{s\}) = -\log(P(\pi_y | s))
\]

where \(P(\pi_y | s)\) is the Plackett-Luce probability of a permutation \(\pi_y\) conditioned on scores \(s\). Here \(\pi_y\) represents a permutation of items ordered by the relevance labels \(y\), where ties are broken randomly.

References
----------

- [Listwise approach to learning to rank: theory and algorithm, Xia et al., 2008](https://dl.acm.org/doi/10.1145/1390156.1390306)

Args
----

| Arg | Description |
|-----------------|-----------------|
| `reduction` | (Optional) The [`tf.keras.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) to use (see [`tf.keras.losses.Loss`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Loss)). |
| `name` | (Optional) The name for the op. |
| `lambda_weight` | (Optional) A lambda weight to apply to the loss. Can be one of [`tfr.keras.losses.DCGLambdaWeight`](../../../tfr/keras/losses/DCGLambdaWeight), [`tfr.keras.losses.NDCGLambdaWeight`](../../../tfr/keras/losses/NDCGLambdaWeight), [`tfr.keras.losses.PrecisionLambdaWeight`](../../../tfr/keras/losses/PrecisionLambdaWeight), or [`tfr.keras.losses.ListMLELambdaWeight`](../../../tfr/keras/losses/ListMLELambdaWeight). |
| `temperature` | (Optional) The temperature used to scale the logits. |
| `ragged` | (Optional) If True, this loss accepts ragged tensors; if False, it accepts dense tensors. |

Methods
-------

### `from_config`

[View source](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L742-L752)

    @classmethod
    from_config(
        config, custom_objects=None
    )

Instantiates a `Loss` from its config (output of `get_config()`).

| Arg | Description |
|----------|---------------------------|
| `config` | Output of `get_config()`. |

Returns: A `Loss` instance.

### `get_config`

[View source](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L732-L740)

    get_config() -> Dict[str, Any]

Returns the config dictionary for a `Loss` instance.

### `__call__`

[View source](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L262-L270)

    __call__(
        y_true: tfr.keras.model.TensorLike,
        y_pred: tfr.keras.model.TensorLike,
        sample_weight: Optional[utils.TensorLike] = None
    ) -> tf.Tensor

See `tf.keras.losses.Loss`.

Last updated 2023-08-18 UTC.