tf.estimator.EvalSpec

Configuration for the "eval" part of the train_and_evaluate call. (deprecated)

Compat alias for migration: tf.compat.v1.estimator.EvalSpec

    tf.estimator.EvalSpec(
        input_fn,
        steps=100,
        name=None,
        hooks=None,
        exporters=None,
        start_delay_secs=120,
        throttle_secs=600
    )

Deprecated: this class is deprecated and will be removed in a future version.
Instructions for updating: use tf.keras instead.

EvalSpec combines the configuration for evaluating the trained model and for
exporting it. Evaluation consists of computing metrics to judge the
performance of the trained model. Export writes the trained model out to
external storage.
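
As a quick illustration, here is a minimal sketch of how an EvalSpec is
typically passed to tf.estimator.train_and_evaluate. The toy input
functions, the LinearClassifier, the feature name "x", and the model_dir
are illustrative assumptions, not part of this class's documentation.

    import tensorflow as tf

    # Hypothetical toy input pipelines; any functions with the same
    # (features, labels) contract would work.
    def toy_train_input_fn():
        features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
        labels = [0, 0, 1, 1]
        return (tf.data.Dataset.from_tensor_slices((features, labels))
                .repeat().batch(2))

    def toy_eval_input_fn():
        features = {"x": [[1.5], [3.5]]}
        labels = [0, 1]
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

    feature_columns = [tf.feature_column.numeric_column("x")]
    estimator = tf.estimator.LinearClassifier(
        feature_columns=feature_columns, model_dir="/tmp/evalspec_demo")

    train_spec = tf.estimator.TrainSpec(input_fn=toy_train_input_fn,
                                        max_steps=1000)

    # steps=None evaluates until the eval input_fn is exhausted;
    # re-evaluation happens at most once per throttle_secs, and only
    # when a new checkpoint is available.
    eval_spec = tf.estimator.EvalSpec(
        input_fn=toy_eval_input_fn,
        steps=None,
        start_delay_secs=0,
        throttle_secs=60)

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

Here steps=None means each evaluation runs until toy_eval_input_fn is
exhausted, matching the steps argument described below.
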
Args
input_fn
A function that constructs the input data for evaluation. See
Premade Estimators
(https://tensorflow.org/guide/premade_estimators#create_input_functions)
for more information. The function should construct and return one of
the following (see the sketch after this list):
A tf.data.Dataset object: the outputs of the Dataset must be a
(features, labels) tuple with the same constraints as below.
A (features, labels) tuple: where features is a Tensor or a
dictionary of string feature name to Tensor, and labels is a
Tensor or a dictionary of string label name to Tensor.
steps
Int. Positive number of steps for which to evaluate the model. If
None, evaluates until input_fn raises an end-of-input exception. See
Estimator.evaluate for details.
name
String. Name of the evaluation if the user needs to run multiple
evaluations on different data sets. Metrics for different evaluations
are saved in separate folders and appear separately in TensorBoard.
hooks
Iterable of tf.train.SessionRunHook objects to run during
evaluation.
exporters
Iterable of Exporters, or a single one, or None. The exporters are
invoked after each evaluation (an example exporter appears in the
sketch after this list).
start_delay_secs
Int. Start evaluating after waiting for this many
seconds.
throttle_secs
Int. Do not re-evaluate unless the last evaluation was started at
least this many seconds ago. Evaluation also does not occur if no new
checkpoints are available; hence this is a minimum interval between
evaluations, not a guaranteed one.
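
As referenced from input_fn and exporters above, the following sketch
shows the two accepted return forms for an evaluation input_fn and a
LatestExporter attached to an EvalSpec. The feature name "x", the labels,
and the feature_spec are illustrative assumptions.

    import tensorflow as tf

    # Form 1: return a tf.data.Dataset whose elements are (features, labels).
    def eval_input_fn_dataset():
        features = {"x": [[1.0], [2.0], [3.0]]}
        labels = [0, 1, 1]
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

    # Form 2: return a (features, labels) tuple of tensors directly.
    def eval_input_fn_tuple():
        features = {"x": tf.constant([[1.0], [2.0], [3.0]])}
        labels = tf.constant([0, 1, 1])
        return features, labels

    # A LatestExporter re-exports a SavedModel after each evaluation; the
    # feature_spec is an assumption matching the toy "x" feature above.
    feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
    serving_input_receiver_fn = (
        tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
    exporter = tf.estimator.LatestExporter(
        name="saved_model",
        serving_input_receiver_fn=serving_input_receiver_fn)

    eval_spec = tf.estimator.EvalSpec(
        input_fn=eval_input_fn_dataset,  # either input_fn form is accepted
        steps=None,
        name="holdout",                  # metrics saved in a separate eval_holdout folder
        exporters=exporter,              # a single Exporter or an iterable of them
        throttle_secs=120)
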
Raises
ValueError
If any of the input arguments is invalid.
TypeError
If any of the arguments is not of the expected type.
[null,null,["Last updated 2024-01-23 UTC."],[],[],null,["# tf.estimator.EvalSpec\n\n\u003cbr /\u003e\n\n|------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/training.py#L200-L293) |\n\nConfiguration for the \"eval\" part for the `train_and_evaluate` call. (deprecated)\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.estimator.EvalSpec`](https://www.tensorflow.org/api_docs/python/tf/estimator/EvalSpec)\n\n\u003cbr /\u003e\n\n tf.estimator.EvalSpec(\n input_fn,\n steps=100,\n name=None,\n hooks=None,\n exporters=None,\n start_delay_secs=120,\n throttle_secs=600\n )\n\n| **Deprecated:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.keras instead.\n\n`EvalSpec` combines details of evaluation of the trained model as well as its\nexport. Evaluation consists of computing metrics to judge the performance of\nthe trained model. Export writes out the trained model on to external\nstorage.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `input_fn` | A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following: \u003cbr /\u003e - A 'tf.data.Dataset' object: Outputs of `Dataset` object must be a tuple (features, labels) with same constraints as below. - A tuple (features, labels): Where features is a `Tensor` or a dictionary of string feature name to `Tensor` and labels is a `Tensor` or a dictionary of string label name to `Tensor`. |\n| `steps` | Int. Positive number of steps for which to evaluate model. If `None`, evaluates until `input_fn` raises an end-of-input exception. See [`Estimator.evaluate`](../../tf/compat/v1/estimator/Estimator#evaluate) for details. |\n| `name` | String. Name of the evaluation if user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. |\n| `hooks` | Iterable of `tf.train.SessionRunHook` objects to run during evaluation. |\n| `exporters` | Iterable of `Exporter`s, or a single one, or `None`. `exporters` will be invoked after each evaluation. |\n| `start_delay_secs` | Int. Start evaluating after waiting for this many seconds. |\n| `throttle_secs` | Int. Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Of course, evaluation does not occur if no new checkpoints are available, hence, this is the minimum. 
|\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|------------------------------------------------------|\n| `ValueError` | If any of the input arguments is invalid. |\n| `TypeError` | If any of the arguments is not of the expected type. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|--------------------|-----------------------------------------|\n| `input_fn` | A `namedtuple` alias for field number 0 |\n| `steps` | A `namedtuple` alias for field number 1 |\n| `name` | A `namedtuple` alias for field number 2 |\n| `hooks` | A `namedtuple` alias for field number 3 |\n| `exporters` | A `namedtuple` alias for field number 4 |\n| `start_delay_secs` | A `namedtuple` alias for field number 5 |\n| `throttle_secs` | A `namedtuple` alias for field number 6 |\n\n\u003cbr /\u003e"]]
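
Since EvalSpec is a namedtuple, the attributes above are simply its
positional fields. A small illustration (the input_fn and values are
arbitrary):

    import tensorflow as tf

    def eval_input_fn():
        return tf.data.Dataset.from_tensor_slices(({"x": [[1.0]]}, [0])).batch(1)

    spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, steps=None,
                                 throttle_secs=60)

    # Fields can be read by name or by position, as with any namedtuple.
    print(spec.throttle_secs)   # 60
    print(spec[6])              # 60, i.e. field number 6
    # A modified copy with one field changed, standard namedtuple behavior.
    longer_wait = spec._replace(throttle_secs=300)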