Creates a Head for logistic regression. (deprecated)
Inherits From: RegressionHead, Head
tf.estimator.LogisticRegressionHead(
    weight_column=None,
    loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
    name=None
)
Uses sigmoid_cross_entropy_with_logits loss, which is the same as
BinaryClassHead. The differences compared to BinaryClassHead are:
- Does not support label_vocabulary. Instead, labels must be float in the range [0, 1].
- Does not calculate some metrics that do not make sense, such as AUC.
- In PREDICT mode, only returns logits and predictions (= tf.sigmoid(logits)), whereas BinaryClassHead also returns probabilities, classes, and class_ids.
- Export output defaults to RegressionOutput, whereas BinaryClassHead defaults to PredictOutput.
The head expects logits with shape [D0, D1, ... DN, 1].
In many applications, the shape is [batch_size, 1].
The labels shape must match logits, namely
[D0, D1, ... DN] or [D0, D1, ... DN, 1].
If weight_column is specified, weights must be of shape
[D0, D1, ... DN] or [D0, D1, ... DN, 1].
This is implemented as a generalized linear model; see https://en.wikipedia.org/wiki/Generalized_linear_model
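The loss and reduction described above can be sketched in plain Python. This is an illustrative re-derivation, not TensorFlow's implementation: it uses the numerically stable form of sigmoid cross entropy, max(x, 0) - x*z + log(1 + exp(-|x|)), and the SUM_OVER_BATCH_SIZE reduction (weighted sum of per-example losses divided by the number of examples).

```python
import math

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable per-example loss: max(x, 0) - x*z + log(1 + exp(-|x|)),
    # where x is the logit and z is the float label in [0, 1].
    return [max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))
            for x, z in zip(logits, labels)]

def sum_over_batch_size(losses, weights=None):
    # SUM_OVER_BATCH_SIZE: weighted sum of losses divided by batch size.
    if weights is None:
        weights = [1.0] * len(losses)
    return sum(l * w for l, w in zip(losses, weights)) / len(losses)

logits = [0.0, 2.0, -1.0]   # shape [batch_size, 1], flattened for the sketch
labels = [0.5, 1.0, 0.0]    # floats in [0, 1], as LogisticRegressionHead requires
per_example = sigmoid_cross_entropy_with_logits(logits, labels)
print(sum_over_batch_size(per_example))
```

Note that a logit of 0.0 with label 0.5 yields log(2), the loss of a maximally uncertain prediction against a maximally uncertain label.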
The head can be used with a canned estimator. Example:
my_head = tf.estimator.LogisticRegressionHead()
my_estimator = tf.estimator.DNNEstimator(
    head=my_head,
    hidden_units=...,
    feature_columns=...)
It can also be used with a custom model_fn. Example:
def _my_model_fn(features, labels, mode):
  my_head = tf.estimator.LogisticRegressionHead()
  logits = tf.keras.Model(...)(features)
  return my_head.create_estimator_spec(
      features=features,
      mode=mode,
      labels=labels,
      optimizer=tf.keras.optimizers.Adagrad(lr=0.1),
      logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
| Args | |
|---|---|
| weight_column | A string or a NumericColumn created by tf.feature_column.numeric_column defining the feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. |
| loss_reduction | One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch and label dimension. Defaults to SUM_OVER_BATCH_SIZE, namely the weighted sum of losses divided by batch size * label_dimension. |
| name | Name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops. |
| Attributes | |
|---|---|
| logits_dimension | See base_head.Head for details. |
| loss_reduction | See base_head.Head for details. |
| name | See base_head.Head for details. |
Methods
create_estimator_spec
create_estimator_spec(
    features,
    mode,
    logits,
    labels=None,
    optimizer=None,
    trainable_variables=None,
    train_op_fn=None,
    update_ops=None,
    regularization_losses=None
)
Returns EstimatorSpec that a model_fn can return.
It is recommended to pass all args via name.
| Args | |
|---|---|
| features | Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor. |
| mode | Estimator's ModeKeys. |
| logits | Logits Tensor to be used by the head. |
| labels | Labels Tensor, or dict mapping string label names to Tensor objects of the label values. |
| optimizer | A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss. |
| trainable_variables | A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here. |
| train_op_fn | Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients. |
| update_ops | A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here. | 
| regularization_losses | A list of additional scalar losses to be added to the training loss, such as regularization losses. | 
| Returns | |
|---|---|
| EstimatorSpec. | 
loss
loss(
    labels, logits, features=None, mode=None, regularization_losses=None
)
Returns the regularized training loss. See base_head.Head for details.
metrics
metrics(
    regularization_losses=None
)
Creates metrics. See base_head.Head for details.
predictions
predictions(
    logits
)
Return predictions based on keys.
See base_head.Head for details.
| Args | |
|---|---|
| logits | Logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension]. |
| Returns | |
|---|---|
| A dict of predictions. | 
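As described above, in PREDICT mode this head returns only logits and predictions, with predictions = tf.sigmoid(logits). A minimal sketch of that mapping, in plain Python rather than TensorFlow (the dict keys shown here are assumptions for illustration, not guaranteed to match the head's exact prediction keys):

```python
import math

def predictions_sketch(logits):
    # Illustrative only: mirrors the documented behavior, where the head
    # returns the raw logits plus predictions = sigmoid(logits).
    return {
        "logits": logits,
        "predictions": [1.0 / (1.0 + math.exp(-x)) for x in logits],
    }

print(predictions_sketch([0.0, 2.0, -1.0]))
```

Because labels are floats in [0, 1], the sigmoid of each logit is itself the regression output; no classes or class_ids are derived, unlike BinaryClassHead.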
update_metrics
update_metrics(
    eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details.