tfma.metrics.MinLabelPosition
Min label position metric.
Inherits From: Metric
tfma.metrics.MinLabelPosition(
    name=MIN_LABEL_POSITION_NAME, label_key: Optional[str] = None
)
Calculates the lowest (one-based) position within a query that has a positive label. The final returned value is the weighted average over all queries in the evaluation set that have at least one labeled entry. Because ranking is indexed from one, the optimal value for this metric is one. If there are no labeled rows in the evaluation set, the final output will be zero.
This is a query/ranking-based metric, so a query_key must also be provided in the associated metrics spec.
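The example below is a minimal configuration sketch rather than an excerpt from the TFMA documentation: it assumes the query identifier lives in a feature named "query_id" and the label in "relevance", and uses tfma.metrics.specs_from_metrics to attach the required query_key to the generated metrics specs.
import tensorflow_model_analysis as tfma

# MinLabelPosition is query/ranking based, so pass the query_key when
# building the metrics specs ("query_id" and "relevance" are assumed
# example feature/label names, not values required by TFMA).
metrics_specs = tfma.metrics.specs_from_metrics(
    [tfma.metrics.MinLabelPosition()],
    query_key='query_id')

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='relevance')],
    metrics_specs=metrics_specs)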
Args

name
  Metric name.
label_key
  Optional label key to override default label.
Attributes

compute_confidence_interval
  Whether to compute confidence intervals for this metric.
  Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
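As a rough usage sketch (not taken from the official docs; "query_id" is an assumed feature name), the computations can be requested directly from a metric instance:
metric = tfma.metrics.MinLabelPosition()
# Returns the MetricComputations that TFMA's evaluator executes; query_key
# is passed because this is a query/ranking-based metric.
computations = metric.computations(
    query_key='query_id', example_weighted=True)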
from_config
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.
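A short sketch of the serialization round trip supported by get_config and from_config (the "relevance" label key is an assumed example):
metric = tfma.metrics.MinLabelPosition(label_key='relevance')
config = metric.get_config()  # constructor kwargs as a plain dict
restored = tfma.metrics.MinLabelPosition.from_config(config)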