Configuration for the "eval" part of the `train_and_evaluate` call. (deprecated)
```python
tf.estimator.EvalSpec(
    input_fn,
    steps=100,
    name=None,
    hooks=None,
    exporters=None,
    start_delay_secs=120,
    throttle_secs=600
)
```
`EvalSpec` combines details of evaluation of the trained model as well as its export. Evaluation consists of computing metrics to judge the performance of the trained model. Export writes out the trained model to external storage.
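For example, a minimal sketch of constructing an `EvalSpec` that both evaluates and exports might look like the following; the input function, the feature name `"x"`, and the exporter name are illustrative assumptions, not part of this page:

```python
import tensorflow as tf

def eval_input_fn():
    # Hypothetical input function: yields (features, labels) batches.
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [0, 0, 1, 1]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

def serving_input_receiver_fn():
    # Hypothetical serving signature with a single float feature "x".
    inputs = {"x": tf.compat.v1.placeholder(tf.float32, shape=[None, 1])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,         # evaluate until eval_input_fn is exhausted
    name="validation",  # metrics go to a separate "validation" folder
    exporters=tf.estimator.LatestExporter("saved_model",
                                          serving_input_receiver_fn),
    start_delay_secs=120,
    throttle_secs=600)
```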
| Args | |
|---|---|
| `input_fn` | A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: a `tf.data.Dataset` object that yields a tuple `(features, labels)`, or a tuple `(features, labels)`. |
| `steps` | Int. Positive number of steps for which to evaluate the model. If `None`, evaluates until `input_fn` raises an end-of-input exception. See `Estimator.evaluate` for details. |
| `name` | String. Name of the evaluation if the user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard. |
| `hooks` | Iterable of `tf.train.SessionRunHook` objects to run during evaluation. |
| `exporters` | Iterable of `Exporter`s, a single one, or `None`. `exporters` will be invoked after each evaluation. |
| `start_delay_secs` | Int. Start evaluating after waiting for this many seconds. |
| `throttle_secs` | Int. Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Evaluation also does not occur if no new checkpoints are available, so this is a minimum interval, not a schedule. |
| Raises | |
|---|---|
| `ValueError` | If any of the input arguments is invalid. |
| `TypeError` | If any of the arguments is not of the expected type. |
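As a rough usage sketch, an `EvalSpec` is typically passed to `tf.estimator.train_and_evaluate` together with a `TrainSpec`; the estimator, input functions, and `model_dir` below are hypothetical placeholders:

```python
import tensorflow as tf

feature_x = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(
    feature_columns=[feature_x], model_dir="/tmp/eval_spec_demo")

def train_input_fn():
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [0, 0, 1, 1]
    return (tf.data.Dataset.from_tensor_slices((features, labels))
            .shuffle(4).repeat().batch(2))

def eval_input_fn():
    features = {"x": [[1.5], [3.5]]}
    labels = [0, 1]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,          # run through the full eval dataset each time
    throttle_secs=60)    # re-evaluate at most once per minute

# Trains and periodically evaluates according to the two specs.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```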