# tf.contrib.learn.Head
Interface for the head/top of a model.
THIS CLASS IS DEPRECATED. See
[contrib/learn/README.md](https://www.tensorflow.org/code/tensorflow/contrib/learn/README.md)
for general migration instructions.
Given logits (or the output of a hidden layer), a Head knows how to compute
predictions, loss, default metrics, and an export signature. It is meant to:

1) Simplify writing model_fn and make model_fn more configurable.
2) Support a wide range of machine learning models. Since most heads can work
   with logits, they can support DNN, RNN, Wide, Wide&Deep,
   Global objectives, Gradient boosted trees, and many other types
   of machine learning models.
3) Allow users to seamlessly switch between 1 to n heads for multi-objective
   learning (see the _MultiHead implementation for more details).
#### Common usage:
Here is a simplified model_fn that builds a multiclass DNN model:
```python
def _my_dnn_model_fn(features, labels, mode, params, config=None):
  # Optionally your callers can pass head to model_fn as a param.
  head = tf.contrib.learn.multi_class_head(...)
  input = tf.contrib.layers.input_from_feature_columns(features, ...)
  last_hidden_layer_out = tf.contrib.layers.stack(
      input, tf.contrib.layers.fully_connected, [1000, 500])
  logits = tf.contrib.layers.fully_connected(
      last_hidden_layer_out, head.logits_dimension, activation_fn=None)

  def _train_op_fn(loss):
    return optimizer.minimize(loss)

  return head.create_model_fn_ops(
      features=features,
      labels=labels,
      mode=mode,
      train_op_fn=_train_op_fn,
      logits=logits,
      scope=...)
```
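A model_fn like this is typically handed to an Estimator, which invokes it once per mode. Below is a minimal wiring sketch; `my_input_fn` and the `params` contents are hypothetical placeholders, not part of the Head API.

```python
# Sketch: consuming the model_fn above with a contrib Estimator.
# `my_input_fn` and the params dict are hypothetical placeholders.
estimator = tf.contrib.learn.Estimator(
    model_fn=_my_dnn_model_fn,
    params={"learning_rate": 0.01})
estimator.fit(input_fn=my_input_fn, steps=1000)
```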
Most heads also support `logits_input`, which is typically the output of the last
hidden layer. Some heads (such as those responsible for candidate sampling or
hierarchical softmax) intrinsically do not support logits, so you have to pass
`logits_input` instead. Here is a common usage:
```python
return head.create_model_fn_ops(
    features=features,
    labels=labels,
    mode=mode,
    train_op_fn=_train_op_fn,
    logits_input=last_hidden_layer_out,
    scope=...)
```
There are cases where computing and applying gradients cannot be meaningfully
captured by the supported train_op_fn (for example, with a sync optimizer). In
such cases, you can take responsibility for the training step yourself. Here is
a common use case:
```python
model_fn_ops = head.create_model_fn_ops(
    features=features,
    labels=labels,
    mode=mode,
    train_op_fn=tf.contrib.learn.no_op_train_fn,
    logits=logits,
    scope=...)
if mode == tf.contrib.learn.ModeKeys.TRAIN:
  optimizer = ...
  sync = tf.compat.v1.train.SyncReplicasOptimizer(opt=optimizer, ...)
  update_op = tf.contrib.layers.optimize_loss(
      optimizer=sync, loss=model_fn_ops.loss, ...)
  hooks = [sync.make_session_run_hook(is_chief)]
  # ...update train_op and hooks in ModelFnOps and return.
```
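The elided final step might look like the following sketch. It assumes `ModelFnOps` behaves as a namedtuple (so `_replace` is available); verify this against the `ModelFnOps` definition in your TensorFlow version.

```python
  # Inside the TRAIN branch: swap in the sync-optimizer train op and
  # attach the session hook. Assumption: ModelFnOps is a namedtuple,
  # so _replace returns a copy with the given fields overridden.
  model_fn_ops = model_fn_ops._replace(
      train_op=update_op,
      training_hooks=list(model_fn_ops.training_hooks or []) + hooks)
return model_fn_ops
```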
## Attributes

| Attribute | Description |
|---|---|
| `logits_dimension` | Size of the last dimension of the logits `Tensor`. Typically, logits is of shape `[batch_size, logits_dimension]`. |
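As a concrete illustration (a sketch; `n_classes=10` is an arbitrary choice), the head reports the width that the final projection layer must produce:

```python
head = tf.contrib.learn.multi_class_head(n_classes=10)
# The head expects logits of shape [batch_size, head.logits_dimension],
# i.e. [batch_size, 10] for this 10-class head.
assert head.logits_dimension == 10
```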
## Methods

### `create_model_fn_ops`

[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/learn/python/learn/estimators/head.py#L148-L187)
```python
@abc.abstractmethod
create_model_fn_ops(
    features, mode, labels=None, train_op_fn=None, logits=None,
    logits_input=None, scope=None
)
```
Returns a `ModelFnOps` that a model_fn can return.

Please note that:

- Exactly one of `logits` and `logits_input` must be provided.
- All args must be passed via name.
| Arg | Description |
|---|---|
| `features` | Input `dict` of `Tensor` objects. |
| `mode` | Estimator's `ModeKeys`. |
| `labels` | Labels `Tensor`, or `dict` of same. |
| `train_op_fn` | Function that takes a scalar loss `Tensor` and returns an op to optimize the model with the loss. This is used in TRAIN mode and must not be None. None is allowed in other modes. If you want to optimize the loss yourself, you can pass `no_op_train_fn` and then use `ModelFnOps.loss` to compute and apply gradients. |
| `logits` | Logits `Tensor` to be used by the head. |
| `logits_input` | `Tensor` from which to build logits, often used when you don't want to compute full logits yourself. Typically this is the activation of the last hidden layer in a DNN. Some heads (like the ones responsible for candidate sampling) intrinsically avoid computing full logits and only accept `logits_input`. |
| `scope` | Optional scope for `variable_scope`. |
| Returns |
|---|
| An instance of `ModelFnOps`. |
| Raises | Condition |
|---|---|
| `ValueError` | If `mode` is not recognized. |
| `ValueError` | If neither or both of `logits` and `logits_input` is provided. |