Difference between the estimated reward of the chosen and the best action.

Inherits From: TFStepMetric

This metric measures how 'safely' the agent explores: it calculates the difference between the estimated reward of the best-looking action and that of the action the agent actually took. This metric is not equivalent to the regret, because regret is measured as a distance from optimality, while everything calculated here is based on the policy's own 'beliefs', i.e., its reward estimates.
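For intuition, here is a minimal, hypothetical sketch of the per-step quantity this metric tracks; the real metric operates on batched TF tensors inside a Trajectory, and the function name here is illustrative only:

```python
# A minimal sketch (not the tf_agents implementation) of the per-step
# quantity this metric tracks, using plain Python.

def distance_from_greedy(estimated_rewards, chosen_action):
    """Distance between the best estimated reward and the chosen one.

    estimated_rewards: the policy's own reward estimate for each action.
    chosen_action: index of the action the policy actually took.
    """
    return max(estimated_rewards) - estimated_rewards[chosen_action]

# The greedy action (index 2, estimate 1.0) was not taken; the agent
# explored with action 0 (estimate 0.5), so the distance is 0.5.
print(distance_from_greedy([0.5, 0.1, 1.0], chosen_action=0))
```

When the agent takes the greedy action itself, the distance is zero; larger values indicate the agent is choosing actions it believes to be worse, i.e., exploring less 'safely'.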

Args:
  estimated_reward_fn: A function that takes the observation as input and computes the estimated rewards that the greedy policy uses.
  name: (str) Name of the metric.
  dtype: The dtype of the metric value.
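As an illustration of the metric lifecycle described below (update, result, reset), here is a hypothetical pure-Python analogue. The class name and method signatures are assumptions for illustration; the real class stores these values in TF variables and consumes batched trajectories:

```python
# Hypothetical pure-Python analogue of the metric's lifecycle; the real
# class keeps these running values in TF variables.

class DistanceFromGreedySketch:
    def __init__(self, estimated_reward_fn):
        # estimated_reward_fn maps an observation to per-action estimates.
        self.estimated_reward_fn = estimated_reward_fn
        self.reset()

    def reset(self):
        # Resets the values being tracked by the metric.
        self._total = 0.0
        self._count = 0

    def call(self, observation, chosen_action):
        # Accumulates max(estimates) - estimate[chosen] for one step.
        est = self.estimated_reward_fn(observation)
        self._total += max(est) - est[chosen_action]
        self._count += 1

    def result(self):
        # Final value: the average distance over all recorded steps.
        return self._total / max(self._count, 1)

metric = DistanceFromGreedySketch(lambda obs: [obs, 1.0])
metric.call(0.5, chosen_action=0)   # distance 0.5
metric.call(2.0, chosen_action=1)   # distance 1.0
print(metric.result())              # average: 0.75
```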



call

Updates the metric value.

Args:
  trajectory: A tf_agents.trajectory.Trajectory.

Returns:
  The arguments, for easy chaining.


init_variables

Initializes this Metric's variables.

Should be called after the variables are created in the first execution of __call__(). If using graph execution, the return value should be run in a session before running the op returned by __call__().

Returns:
  If using graph execution, this returns an op to perform the initialization. Under eager execution, the variables are reset to their initial values as a side effect, and this function returns None.


reset

Resets the values being tracked by the metric.


result

Computes and returns a final value for the metric.


tf_summaries

Generates summaries against train_step and all step_metrics.

Args:
  train_step: (Optional) Step counter for training iterations. If None, no metric is generated against the global step.
  step_metrics: (Optional) Iterable of step metrics to generate summaries against.

Returns:
  A list of summaries.


__call__

Returns an op to execute to update this metric for these inputs.

Args:
  **kwargs: A mini-batch of inputs to the Metric, passed on to call().

Returns:
  None if eager execution is enabled; a graph-mode function if graph execution is enabled.