Positive likelihood ratio (LR+).
Inherits From: Metric
```python
tfma.metrics.PositiveLikelihoodRatio(
    thresholds: Optional[Union[float, List[float]]] = None,
    name: Optional[str] = None,
    top_k: Optional[int] = None,
    class_id: Optional[int] = None
)
```
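The page does not state the formula, but the conventional definition of the positive likelihood ratio in terms of the confusion-matrix counts (the same TP/TN/FP/FN inputs that `result` below consumes) is:

$$\mathrm{LR}^{+} = \frac{\mathrm{TPR}}{\mathrm{FPR}} = \frac{TP/(TP+FN)}{FP/(FP+TN)}$$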
Args | |
---|---|
thresholds | (Optional) Thresholds to use. Defaults to [0.5]. |
name | (Optional) Metric name. |
top_k | (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are set to -inf and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured. |
class_id | (Optional) Used with a multi-class model to specify which class to compute the confusion matrix for. When class_id is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured. |
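As a sketch of how this metric is typically wired into an evaluation, assuming a single-output model whose label feature is named 'label' (a placeholder, not from this page):

```python
import tensorflow_model_analysis as tfma

# Evaluate LR+ at several decision thresholds.
metrics_specs = tfma.metrics.specs_from_metrics([
    tfma.metrics.PositiveLikelihoodRatio(thresholds=[0.25, 0.5, 0.75]),
])

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # placeholder label key
    metrics_specs=metrics_specs,
    slicing_specs=[tfma.SlicingSpec()],  # overall (unsliced) metrics
)
```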
Attributes | |
---|---|
compute_confidence_interval | Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method. |
Methods
computations
```python
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
```
Creates the computations associated with the metric.
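In practice this method is invoked by the TFMA evaluation pipeline rather than by user code. A minimal direct call, with every argument left at its default except example_weighted, might look like:

```python
metric = tfma.metrics.PositiveLikelihoodRatio()
# Returns the MetricComputations that TFMA runs to produce this metric.
computations = metric.computations(example_weighted=False)
```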
get_config
```python
get_config() -> Dict[str, Any]
```
Returns serializable config.
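A short sketch of a config round trip; it is an assumption here that the returned dict simply echoes the constructor arguments (thresholds, name, top_k, class_id):

```python
metric = tfma.metrics.PositiveLikelihoodRatio(thresholds=[0.25, 0.5])
config = metric.get_config()  # assumed: constructor kwargs as a dict
rebuilt = tfma.metrics.PositiveLikelihoodRatio(**config)
```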
result
```python
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
```
Computes the metric value from the TP, TN, FP, and FN values.
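For illustration, a standalone sketch of the LR+ arithmetic under the conventional definition given above; how the actual implementation handles zero denominators is not documented on this page:

```python
def positive_likelihood_ratio(tp: float, tn: float, fp: float, fn: float) -> float:
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)  # false positive rate (1 - specificity)
    return tpr / fpr

# 80 TP, 90 TN, 10 FP, 20 FN -> TPR = 0.8, FPR = 0.1, LR+ = 8.0
print(positive_likelihood_ratio(tp=80.0, tn=90.0, fp=10.0, fn=20.0))
```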