Condition that is satisfied when a new best metric is achieved.
```python
orbit.actions.NewBestMetric(
    metric: Union[str, MetricFn],
    higher_is_better: bool = True,
    filename: Optional[str] = None,
    write_metric=True
)
```
This class keeps track of the best metric value seen so far, optionally in a persistent (preemption-safe) way.
Two methods are provided, each of which satisfies the `Action` protocol: `test`, which only checks whether a given train/eval output achieves a new best metric, and `commit`, which both checks for and records a new best metric value if one is achieved. These separate methods allow the same `NewBestMetric` instance to be reused as a condition multiple times, and can also provide additional preemption/failure safety. For example, to avoid updating the best metric if a model export fails or is preempted:
```python
new_best_metric = orbit.actions.NewBestMetric(
    'accuracy', filename='/model/dir/best_metric')
action = orbit.actions.ConditionalAction(
    condition=new_best_metric.test,
    action=[
        orbit.actions.ExportSavedModel(...),
        new_best_metric.commit
    ])
```
The default `__call__` implementation is equivalent to `commit`.

This class is safe to use in multi-client settings if all clients can be guaranteed to compute the same metric. However, when saving metrics it may be helpful to avoid unnecessary writes by setting the `write_metric` parameter to `False` for most clients.
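The test/commit split described above can be illustrated with a minimal, self-contained sketch. This is not the Orbit implementation; `BestMetricTracker` and its internals are hypothetical, and the real class additionally supports callable metrics and persistent storage via `filename`.

```python
# Illustrative sketch only (not the Orbit implementation) of a tracker with a
# side-effect-free `test` method and a recording `commit` method.
class BestMetricTracker:
    def __init__(self, metric_key, higher_is_better=True):
        self._key = metric_key
        self._higher = higher_is_better
        self._best = float('-inf') if higher_is_better else float('inf')

    def test(self, output):
        # Check for a new best value without recording it.
        value = float(output[self._key])
        return value > self._best if self._higher else value < self._best

    def commit(self, output):
        # Check for a new best value and, if found, record it.
        if self.test(output):
            self._best = float(output[self._key])
            return True
        return False

    # The default __call__ behaves like commit.
    __call__ = commit
```

Keeping `test` free of side effects means that if an export fails between `test` and `commit`, the stored best value is untouched and the export can simply be retried under the same condition.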
Methods
commit

```python
commit(
    output: runner.Output
) -> bool
```

Tests `output` and updates the current best value if necessary.

Unlike `test`, if `output` does contain a new best metric value, this method does save it (i.e., subsequent calls to this method with the same `output` will return `False`).

| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns |
|---|
| `True` if `output` contains a new best metric value, `False` otherwise. |
metric_value

```python
metric_value(
    output: runner.Output
) -> float
```

Computes the metric value for the given `output`.
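Since the constructor accepts `metric: Union[str, MetricFn]`, a plausible resolution strategy looks like the following. This is an assumed sketch of the contract, not Orbit's actual code: the helper name `resolve_metric_value` is hypothetical, and the assumption is that string metrics are keys into a dict-like output while callables receive the full output.

```python
from typing import Callable, Mapping, Union

def resolve_metric_value(metric: Union[str, Callable], output: Mapping) -> float:
    """Sketch: resolve a metric spec (key or callable) against an output."""
    if callable(metric):
        # A MetricFn-style callable computes the value from the full output.
        return float(metric(output))
    # A string metric is looked up directly in the output mapping.
    return float(output[metric])
```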
test

```python
test(
    output: runner.Output
) -> bool
```

Tests `output` to see if it contains a new best metric value.

If `output` does contain a new best metric value, this method does not save it (i.e., calling this method multiple times in a row with the same `output` will continue to return `True`).

| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns |
|---|
| `True` if `output` contains a new best metric value, `False` otherwise. |
__call__

```python
__call__(
    output: runner.Output
) -> bool
```

Tests `output` and updates the current best value if necessary.

This is equivalent to `commit` above.

| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns |
|---|
| `True` if `output` contains a new best metric value, `False` otherwise. |