tf_agents.agents.DqnAgent

A DQN Agent.

Inherits From: TFAgent

Implements the DQN algorithm from

"Human-level control through deep reinforcement learning" (Mnih et al., 2015): https://deepmind.com/research/dqn/

This agent also implements n-step updates. See "Rainbow: Combining Improvements in Deep Reinforcement Learning" by Hessel et al., 2017, for a discussion on its benefits: https://arxiv.org/abs/1710.02298

Args
time_step_spec A TimeStep spec of the expected time_steps.
action_spec A nest of BoundedTensorSpec representing the actions.
q_network A tf_agents.network.Network to be used by the agent. The network will be called with call(observation, step_type) and should emit logits over the action space.
optimizer The optimizer to use for training.
observation_and_action_constraint_splitter A function used to process observations with action constraints. These constraints can indicate, for example, a mask of valid/invalid actions for a given state of the environment. The function takes a full observation and returns a tuple consisting of 1) the part of the observation intended as input to the network and 2) the constraint (see the sketch after this argument list for a minimal example). Note: when using observation_and_action_constraint_splitter, make sure the provided q_network is compatible with the network-input half of the splitter's output. In particular, observation_and_action_constraint_splitter is called on the observation before it is passed to the network. If observation_and_action_constraint_splitter is None, action constraints are not applied.
epsilon_greedy Probability of choosing a random action in the default epsilon-greedy collect policy (used only if a wrapper is not provided to the collect_policy method). Only one of epsilon_greedy and boltzmann_temperature should be provided.
n_step_update The number of steps to consider when computing TD error and TD loss. Defaults to single-step updates. Note that this requires the user to call train on Trajectory objects with a time dimension of n_step_update + 1. However, note that we do not yet support n_step_update > 1 in the case of RNNs (i.e., non-empty q_network.state_spec).
boltzmann_temperature Temperature value to use for Boltzmann sampling of the actions during data collection. The closer to 0.0, the higher the probability of choosing the best action. Only one of epsilon_greedy and boltzmann_temperature should be provided.
emit_log_probability Whether policies emit log probabilities or not.
target_q_network (Optional.) A tf_agents.network.Network to be used as the target network during Q learning. Every target_update_period train steps, the weights from q_network are copied (possibly with smoothing via target_update_tau) to target_q_network. If target_q_network is not provided, it is created by making a copy of q_network, which initializes a new network with the same structure and its own layers and weights. Network copying is performed via the Network.copy superclass method, and may inadvertently lead to the resulting network sharing weights with the original. This can happen if, for example, the original network accepted a pre-built Keras layer in its __init__, or accepted a Keras layer that wasn't built but neglected to create a new copy. In these cases, it is up to you to provide a target network whose weights are not shared with the original q_network. If you provide a target_q_network that shares any weights with q_network, a warning will be logged but no exception is thrown. Note: shallow copies of Keras layers may be built via `new_layer = type(layer).from_config(layer.get_config())`.
target_update_tau Factor for soft update of the target networks.
target_update_period Period for soft update of the target networks.
td_errors_loss_fn A function for computing the TD errors loss. If None, a default value of element_wise_huber_loss is used. This function takes as input the target and the estimated Q values and returns the loss for each element of the batch.
gamma A discount factor for future rewards.
reward_scale_factor Multiplicative scale for the reward.
gradient_clipping Norm length to clip gradients.
debug_summaries A bool to gather debug summaries.
summarize_grads_and_vars If True, gradient and network variable summaries will be written during training.
train_step_counter An optional counter to increment every time the train op is run. Defaults to the global_step.
training_data_spec A nest of TensorSpec specifying the structure of data the train() function expects. If None, defaults to the trajectory_spec of the collect_policy.
name The name of this agent. All variables in this module will fall under that name. Defaults to the class name.
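For example, a splitter over a dictionary observation could be as simple as the following (the keys network_input and constraint are placeholders for however your environment packs its observations):

```python
def observation_and_action_constraint_splitter(observation):
  # First element is fed to q_network; second is the action constraint,
  # e.g. a 0/1 mask over actions where 1 marks a valid action.
  return observation['network_input'], observation['constraint']
```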

Raises
ValueError If action_spec contains more than one action or if the action spec's minimum is not equal to 0.
ValueError If the Q networks do not emit floating point outputs with inner shape matching action_spec.
NotImplementedError If q_network has non-empty state_spec (i.e., an RNN is provided) and n_step_update > 1.
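As a minimal, hedged sketch of constructing the agent (the environment name, layer sizes, learning rate, and hyperparameter values below are illustrative assumptions, not canonical settings):

```python
import tensorflow as tf

from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

# Illustrative environment choice.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

# A simple feed-forward Q-network; fc_layer_params is an assumption.
q_net = q_network.QNetwork(
    env.observation_spec(),
    env.action_spec(),
    fc_layer_params=(100,))

agent = dqn_agent.DqnAgent(
    env.time_step_spec(),
    env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    epsilon_greedy=0.1,        # default epsilon-greedy collect policy
    target_update_period=100,  # copy q_network to target every 100 train steps
    gamma=0.99)
agent.initialize()
```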

Attributes
action_spec TensorSpec describing the action produced by the agent.
collect_data_context

collect_data_spec Returns a Trajectory spec, as expected by the collect_policy.
collect_policy Return a policy that can be used to collect data from the environment.
data_context

debug_summaries

policy Return the current policy held by the agent.
summaries_enabled

summarize_grads_and_vars

time_step_spec Describes the TimeStep tensors expected by the agent.
train_sequence_length The number of time steps needed in experience tensors passed to train.

Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required.

For example, for non-RNN DQN training, T=2 because DQN requires a single transition, which spans two consecutive time steps.

If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category. (See the sketch after this attribute list for how T typically appears when sampling from a replay buffer.)

train_step_counter

training_data_spec Returns a trajectory spec, as expected by the train() function.
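For instance, when sampling from a TFUniformReplayBuffer (an assumption; any source that yields a time dimension works), num_steps should equal train_sequence_length: 2 for single-step DQN, and n_step_update + 1 in general:

```python
# Hypothetical replay buffer; num_steps=2 yields trajectories shaped
# [B, 2, ...], i.e. single transitions for a non-RNN DQN agent.
dataset = replay_buffer.as_dataset(
    sample_batch_size=64,
    num_steps=agent.train_sequence_length,  # 2 here; n_step_update + 1 in general
    num_parallel_calls=3).prefetch(3)
```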

Methods

initialize

Initializes the agent.

Returns
An operation that can be used to initialize the agent.

Raises
RuntimeError If the class was not initialized properly (super.__init__ was not called).

loss

Gets loss from the agent.

If the user calls this from _train, it must be in a tf.GradientTape scope in order to apply gradients to trainable variables. If intermediate gradient steps are needed, _loss and _train will return different values since _loss only supports updating all gradients at once after all losses have been calculated.

Args
experience A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
training Explicit argument to pass to loss. This typically affects network computation paths like dropout and batch normalization.
**kwargs Any additional data as args to loss.

Returns
A LossInfo loss tuple containing loss and info tensors.

Raises
RuntimeError If the class was not initialized properly (super.__init__ was not called).

post_process_policy

Post process policies after training.

The policies of some agents require expensive post processing after training before they can be used. For example, a recommender agent might require rebuilding an index of actions. For such agents, this method returns a post processed version of the policy. The post processing may either update the existing policies in place or create a new policy, depending on the agent. The default implementation, for agents that do not override this method, is to return agent.policy.

Returns
The post processed policy.

preprocess_sequence

Defines the preprocess_sequence function to be fed into replay buffers.

This defines how the collected data is preprocessed before training. It defaults to a pass-through for most agents. The structure of experience must match that of self.collect_data_spec.

Args
experience A Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns
A post processed Trajectory with the same shape as the input.
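As a usage sketch (for DqnAgent this is a pass-through, but applying it keeps a collection pipeline agent-agnostic):

```python
# 'experience' is a Trajectory matching agent.collect_data_spec.
processed = agent.preprocess_sequence(experience)
```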

train

Trains the agent.

Args
experience A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
**kwargs Any additional data to pass to the subclass.

Returns
A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.
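A minimal eager-mode training step, assuming the dataset sketched under train_sequence_length above, might look like:

```python
iterator = iter(dataset)

for _ in range(1000):  # number of train steps is an assumption
  experience, _ = next(iterator)       # Trajectory shaped [B, T, ...]
  loss_info = agent.train(experience)  # returns a LossInfo tuple
  step = agent.train_step_counter.numpy()
  if step % 100 == 0:
    print('step = {0}: loss = {1}'.format(step, loss_info.loss))
```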

Raises
RuntimeError If the class was not initialized properly (super.__init__ was not called).