Abstract base class for TF-based RL and Bandits agents.

The agent serves the following purposes:

  • Training by reading minibatches of experience, and updating some set of network weights (using the train method).

  • Exposing policy objects which can be used to interact with an environment: either to explore and collect new training data, or to maximize reward in the given task.

The agent's main training methods and properties are:

  • initialize: Perform any self-initialization before training.

  • train: This method reads minibatch experience from a replay buffer or logs on disk, and updates some internal networks.

  • preprocess_sequence: Some algorithms need to perform sequence preprocessing on logs containing "full episode" or "long subset" sequences, to create intermediate items that can then be used by train, even if train does not see the full sequences. In many cases this is just the identity: it passes experience through untouched. This function is typically supplied via the sequence_preprocess_fn argument:

    ReplayBuffer.as_dataset(..., sequence_preprocess_fn=...)

  • training_data_spec: Property that describes the structure expected of the experience argument passed to train.

  • train_sequence_length: Property that describes the second dimension of all tensors in the experience argument passed to train. All tensors passed to train must have the shape [batch_size, sequence_length, ...], and some Agents require this to be a fixed value. For example, in regular DQN, this second sequence_length dimension must be equal to 2 in all experience. In contrast, n-step DQN will have this equal to n + 1, and DQN agents constructed with RNN networks will have this equal to None, meaning sequences of any length are allowed.

    This value may be None, to mean minibatches containing subsequences of any length are allowed (so long as they're all the same length). This is typically the case with agents constructed with RNN networks.

    This value is typically passed as a ReplayBuffer's as_dataset(..., num_steps=...) argument; see the sketch after this list.

  • train_argspec: Property that contains a dict describing other arguments that must be passed as kwargs to train (typically empty).

  • collect_data_spec: Property that describes the structure expected of experience collected by agent.collect_policy. This is typically identical to training_data_spec, but may differ if the preprocess_sequence method is not the identity. In that case, preprocess_sequence is expected to read sequences matching collect_data_spec and emit sequences matching training_data_spec.
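For concreteness, here is a minimal sketch of how these training pieces typically fit together with a replay buffer. It assumes an already-constructed agent and TF environment tf_env; the TFUniformReplayBuffer, buffer size, batch size, and iteration count are illustrative choices, not requirements:

from tf_agents.replay_buffers import tf_uniform_replay_buffer

# The buffer stores trajectories matching what collect_policy emits.
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=tf_env.batch_size,
    max_length=100000)

# num_steps comes straight from the agent; sequence_preprocess_fn
# bridges collect_data_spec -> training_data_spec when they differ.
dataset = replay_buffer.as_dataset(
    sample_batch_size=64,
    num_steps=agent.train_sequence_length,
    sequence_preprocess_fn=agent.preprocess_sequence)

agent.initialize()
iterator = iter(dataset)
for _ in range(1000):
  experience, _ = next(iterator)
  loss_info = agent.train(experience)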

The agent exposes TFPolicy objects for interacting with environments:

  • policy: Property that returns a policy meant for "exploiting" the environment to its best ability. This tends to mean the "production" policy that doesn't collect additional info for training; it works best once the agent is fully trained. (See the sketch after this list.)

    Note: not all agents expose a proper "production" policy yet. This still needs cleanup; in particular, PPO's and SAC's policy objects need updating.

  • collect_policy: Property that returns a policy meant for "exploring" the environment to collect more data for training. This tends to mean a policy that involves some level of randomized behavior and additional info logging.

  • time_step_spec: Property describing the observation and reward signatures of the environment this agent's policies operate in.

  • action_spec: Property describing the action signatures of the environment this agent's policies operate in.
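As an illustrative sketch (the driver, environments, and replay buffer names here are assumptions, not part of this API): agent.collect_policy typically feeds a driver during data collection, while agent.policy is used greedily for evaluation:

from tf_agents.drivers import dynamic_step_driver

# Exploration: collect_policy gathers new training data into a buffer.
collect_driver = dynamic_step_driver.DynamicStepDriver(
    tf_env,
    agent.collect_policy,
    observers=[replay_buffer.add_batch],
    num_steps=1)
collect_driver.run()

# Exploitation: the "production" policy, run for one evaluation episode.
time_step = eval_env.reset()
while not time_step.is_last():
  action_step = agent.policy.action(time_step)
  time_step = eval_env.step(action_step.action)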

For researchers, and those developing new Agents and Policies, both the TFAgent and TFPolicy base class constructors also accept a validate_args parameter. If False, this disables all spec structure, dtype, and shape checks in the public methods of these classes. It allows algorithm developers to iterate and try different input and output structures without worrying about overly restrictive requirements, like experience being a Trajectory or input and output states being in a certain format. However, disabling argument validation can make it very hard to identify structural input or algorithmic errors, and should not be done for final, or production-ready, Agents. In addition to allowing implementations that may disagree with their specs, it means the resulting Agent will no longer interact well with other parts of TF-Agents. Examples include impedance mismatches with Actor/Learner APIs, replay buffers, and the model export functionality in PolicySaver.
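For example, a hypothetical in-development agent might be constructed with validation disabled (MyExperimentalAgent is a made-up name for illustration):

# Relax spec/dtype/shape checks while iterating on data structures;
# re-enable validation before shipping the agent.
agent = MyExperimentalAgent(
    time_step_spec,
    action_spec,
    validate_args=False)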

Args:

  • time_step_spec: A nest of tf.TypeSpec representing the time_steps. Provided by the user.

  • action_spec: A nest of BoundedTensorSpec representing the actions. Provided by the user.

  • policy: An instance of tf_policy.TFPolicy representing the Agent's current policy.

  • collect_policy: An instance of tf_policy.TFPolicy representing the Agent's current data collection policy (used to set self.step_spec).

  • train_sequence_length: A python integer or None, signifying the number of time steps required from tensors in experience as passed to train(). All tensors in experience will be shaped [B, T, ...], but for certain agents T must be fixed. For example, DQN requires transitions in the form of 2 time steps, so for a non-RNN DQN Agent, set this value to 2. For agents that don't care, or which can handle T unknown at graph build time (i.e., most RNN-based agents), set this argument to None.

  • num_outer_dims: The number of outer dimensions for the agent. Must be either 1 or 2. If 2, training will require both a batch_size and time dimension on every Tensor; if 1, training will require only a batch_size outer dimension.

  • training_data_spec: A nest of TensorSpec specifying the structure of data the train() function expects. If None, defaults to the trajectory_spec of the collect_policy.

  • train_argspec: (Optional) Describes additional supported arguments to the train call. This must be a dict mapping strings to nests of specs; the keys experience and weights are reserved (see Raises below).

    Some algorithms require additional arguments to the train() call. While TF-Agents encourages most of these to be provided in the policy_info / info field of experience, sometimes the extra information doesn't fit well there, e.g., when it doesn't come from the policy.

Below is an example:

class MyAgent(TFAgent):
  def __init__(self, counterfactual_training, ...):
    collect_policy = ...
    train_argspec = None
    if counterfactual_training:
      train_argspec = dict(
          counterfactual=collect_policy.trajectory_spec)
    super(MyAgent, self).__init__(
        ...,
        train_argspec=train_argspec)

my_agent = MyAgent(..., counterfactual_training=True)

for ...:
  experience, counterfactual = next(experience_and_counterfactual_iter)
  loss_info = my_agent.train(experience, counterfactual=counterfactual)

  • debug_summaries: A bool; if true, subclasses should gather debug summaries.

  • summarize_grads_and_vars: A bool; if true, subclasses should additionally collect gradient and variable summaries.

  • enable_summaries: A bool; if false, subclasses should not gather any summaries (debug or otherwise); subclasses should gate all summaries using either the summaries_enabled, debug_summaries, or summarize_grads_and_vars properties.

  • train_step_counter: An optional counter to increment every time the train op is run. Defaults to the global_step.

  • validate_args: Python bool. Whether to verify inputs to, and outputs of, functions like train and preprocess_sequence against spec structures, dtypes, and shapes.

    Research code may prefer to set this value to False to allow iterating on input and output structures without being hamstrung by overly rigid checking (at the cost of harder-to-debug errors).

    See also TFPolicy.validate_args.

Raises:

  • TypeError: If validate_args is True and train_argspec is not a dict.

  • ValueError: If validate_args is True and train_argspec has the keys experience or weights.

  • TypeError: If validate_args is True and any leaf nodes in train_argspec values are not subclasses of tf.TypeSpec.

  • ValueError: If validate_args is True and time_step_spec is not an instance of ts.TimeStep.

  • ValueError: If num_outer_dims is not in [1, 2].

Attributes:

  • action_spec: TensorSpec describing the action produced by the agent.

  • collect_data_spec: Returns a Trajectory spec, as expected by the collect_policy.

  • collect_policy: Return a policy that can be used to collect data from the environment.

  • policy: Return the current policy held by the agent.

  • time_step_spec: Describes the TimeStep tensors expected by the agent.

  • train_argspec: TensorSpec describing extra supported kwargs to train().

  • train_sequence_length: The number of time steps needed in experience tensors passed to train.

    train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This property describes the value of T required.

    For example, for non-RNN DQN training, T=2 because DQN trains on single transitions (each spanning two consecutive time steps).

    If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.


  • training_data_spec: Returns a trajectory spec, as expected by the train() function.

  • validate_args: Whether train and preprocess_sequence validate input and output args.
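These are plain read-only properties and can be inspected directly on a constructed agent, for example:

print(agent.time_step_spec)         # observation/reward signatures
print(agent.action_spec)            # action signatures
print(agent.collect_data_spec)      # what collect_policy records
print(agent.training_data_spec)     # what train() consumes
print(agent.train_sequence_length)  # required T, or None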



Methods:

initialize()

Initializes the agent.

Returns: An operation that can be used to initialize the agent.

Raises: RuntimeError: If the class was not initialized properly (super().__init__ was not called).


preprocess_sequence(experience)

Defines the preprocess_sequence function to be fed into replay buffers.

This defines how collected data is preprocessed before training. It defaults to a pass-through for most agents. The structure of experience must match that of self.collect_data_spec.

Args:

  • experience: A Trajectory shaped [batch, time, ...] or [time, ...] representing the collected experience data.

Returns: A post-processed Trajectory with the same shape as the input.

Raises:

  • TypeError: If experience does not match self.collect_data_spec structure types.
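In TF-Agents' usual public/private pattern, the public preprocess_sequence validates its inputs and delegates to a private _preprocess_sequence hook that subclasses override. A hedged sketch (compute_n_step_items is a hypothetical helper, not part of TF-Agents):

class MyNStepAgent(TFAgent):

  def _preprocess_sequence(self, experience):
    # Hypothetical: convert full-episode sequences into n-step
    # training items before they reach train(). The base class
    # default is the identity (experience is returned unchanged).
    return compute_n_step_items(experience)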


train(experience, weights=None, **kwargs)

Trains the agent.

Args:

  • experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...], where time must be equal to self.train_sequence_length if that property is not None.

  • weights: (Optional.) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.

  • **kwargs: Any additional data as declared by self.train_argspec.

Returns: A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises:

  • TypeError: If validate_args is True and experience is not of type Trajectory, or if experience does not match self.training_data_spec structure types.

  • ValueError: If validate_args is True and experience tensors' time axes are not compatible with self.train_sequence_length, or if experience does not match the self.training_data_spec structure.

  • ValueError: If validate_args is True and the user does not pass **kwargs matching self.train_argspec.

  • RuntimeError: If the class was not initialized properly (super().__init__ was not called).
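A typical eager-mode call, with optional per-example weighting (the batch size and uniform weights below are illustrative):

import tensorflow as tf

# Weights are typically multiplied elementwise against per-example loss.
weights = tf.ones([64])
loss_info = agent.train(experience, weights=weights)
print(loss_info.loss.numpy())  # pre-update loss; in eager mode the train step has already run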