tf_agents.agents.CqlSacAgent

A CQL-SAC Agent based on the SAC Agent.

Inherits From: SacAgent, TFAgent

Args

time_step_spec: A TimeStep spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
critic_network: A function critic_network((observations, actions)) that returns the q_values for each observation and action.
actor_network: A function actor_network(observation, action_spec) that returns an action distribution.
actor_optimizer: The optimizer to use for the actor network.
critic_optimizer: The default optimizer to use for the critic network.
alpha_optimizer: The default optimizer to use for the alpha variable.
cql_alpha: The weight on the CQL loss. This can be a tf.Variable.
num_cql_samples: Number of samples for importance sampling in CQL.
include_critic_entropy_term: Whether to include the entropy term in the target for the critic loss.
use_lagrange_cql_alpha: Whether to use a Lagrange threshold to tune cql_alpha during training.
cql_alpha_learning_rate: The learning rate used to tune cql_alpha.
cql_tau: The threshold for the expected difference in Q-values which determines the tuning of cql_alpha.
random_seed: Optional seed for tf.random.
reward_noise_variance: The noise variance to introduce to the rewards.
num_bc_steps: Number of behavioral cloning steps.
actor_loss_weight: The weight on the actor loss.
critic_loss_weight: The weight on the critic loss.
alpha_loss_weight: The weight on the alpha loss.
actor_policy_ctor: The policy class to use.
critic_network_2: (Optional.) A tf_agents.network.Network to be used as the second critic network during Q learning. The weights from critic_network are copied if this is not provided.
target_critic_network: (Optional.) A tf_agents.network.Network to be used as the target critic network during Q learning. Every target_update_period train steps, the weights from critic_network are copied (possibly with smoothing via target_update_tau) to target_critic_network. If target_critic_network is not provided, it is created by making a copy of critic_network, which initializes a new network with the same structure and its own layers and weights. Performing a Network.copy does not work when the network instance already has trainable parameters (e.g., has already been built, or when the network is sharing layers with another). In these cases, it is up to you to build a copy having weights that are not shared with the original critic_network, so that this can be used as a target network. If you provide a target_critic_network that shares any weights with critic_network, a warning will be logged but no exception is thrown.
target_critic_network_2: (Optional.) Similar network as target_critic_network but for critic_network_2. See documentation for target_critic_network. Will only be used if critic_network_2 is also specified.
target_update_tau: Factor for soft update of the target networks.
target_update_period: Period for soft update of the target networks.
td_errors_loss_fn: A function for computing the elementwise TD errors loss.
gamma: A discount factor for future rewards.
reward_scale_factor: Multiplicative scale for the reward.
initial_log_alpha: Initial value for log_alpha.
use_log_alpha_in_alpha_loss: A boolean, whether to use log_alpha or alpha in the alpha loss. Certain implementations of SAC use log_alpha, as log values are generally nicer to work with.
target_entropy: The target average policy entropy, for updating alpha. The default value is the negative of the total number of actions.
gradient_clipping: Norm length to clip gradients.
log_cql_alpha_clipping: (Minimum, maximum) values to clip log CQL alpha.
softmax_temperature: Temperature value which weights Q-values before the cql_loss logsumexp calculation.
bc_debug_mode: Whether to run a behavioral cloning mode where the critic loss only depends on the CQL loss. Useful when debugging and checking that the CQL loss can be driven down to zero.
debug_summaries: A bool to gather debug summaries.
summarize_grads_and_vars: If True, gradient and network variable summaries will be written during training.
train_step_counter: An optional counter to increment every time the train op is run. Defaults to the global_step.
name: The name of this agent. All variables in this module will fall under that name. Defaults to the class name.
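
As a usage illustration (not part of the original API reference), the following is a minimal construction sketch. The Pendulum-v1 environment, layer sizes, learning rates, and CQL hyperparameter values below are illustrative assumptions, not recommended settings.

    import tensorflow as tf

    from tf_agents.agents.cql import cql_sac_agent
    from tf_agents.agents.ddpg import critic_network
    from tf_agents.environments import suite_gym, tf_py_environment
    from tf_agents.networks import actor_distribution_network

    # Any continuous-control environment works; Pendulum-v1 is just an example.
    env = tf_py_environment.TFPyEnvironment(suite_gym.load('Pendulum-v1'))

    observation_spec = env.observation_spec()
    action_spec = env.action_spec()

    critic_net = critic_network.CriticNetwork(
        (observation_spec, action_spec),
        joint_fc_layer_params=(256, 256))

    actor_net = actor_distribution_network.ActorDistributionNetwork(
        observation_spec, action_spec, fc_layer_params=(256, 256))

    train_step = tf.Variable(0, dtype=tf.int64)

    agent = cql_sac_agent.CqlSacAgent(
        env.time_step_spec(),
        action_spec,
        critic_network=critic_net,
        actor_network=actor_net,
        actor_optimizer=tf.keras.optimizers.Adam(3e-4),
        critic_optimizer=tf.keras.optimizers.Adam(3e-4),
        alpha_optimizer=tf.keras.optimizers.Adam(3e-4),
        cql_alpha=5.0,                      # weight on the CQL loss (illustrative)
        num_cql_samples=10,                 # actions sampled per state for CQL
        include_critic_entropy_term=False,
        use_lagrange_cql_alpha=False,
        target_update_tau=0.005,
        gamma=0.99,
        train_step_counter=train_step)

    agent.initialize()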

Attributes

action_spec: TensorSpec describing the action produced by the agent.
collect_data_context
collect_data_spec: Returns a Trajectory spec, as expected by the collect_policy.
collect_policy: Returns a policy that can be used to collect data from the environment.
data_context
debug_summaries
policy: Returns the current policy held by the agent.
summaries_enabled
summarize_grads_and_vars
time_step_spec: Describes the TimeStep tensors expected by the agent.
train_sequence_length: The number of time steps needed in experience tensors passed to train.

Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required.

For example, for non-RNN DQN training, T=2 because DQN requires single transitions.

If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.

train_step_counter

training_data_spec: Returns a Trajectory spec, as expected by the train() function.
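
As a brief sketch of how these specs are typically consumed (assuming the agent and env from the construction example above, and the standard TFUniformReplayBuffer; buffer capacity and sample batch size are illustrative values), collect_data_spec feeds the replay buffer and train_sequence_length fixes the number of time steps per training sample:

    from tf_agents.replay_buffers import tf_uniform_replay_buffer

    replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
        data_spec=agent.collect_data_spec,   # Trajectory spec from the agent
        batch_size=env.batch_size,           # batch size of the TF environment
        max_length=100000)

    # train_sequence_length is typically 2 for this non-RNN agent, so each
    # sample contains a single transition (two adjacent time steps).
    dataset = replay_buffer.as_dataset(
        sample_batch_size=256,
        num_steps=agent.train_sequence_length).prefetch(3)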

Methods

actor_loss

Computes actor_loss equivalent to the SAC actor_loss.

Uses behavioral cloning for the first self._num_bc_steps training steps.

Args
time_steps: A batch of timesteps.
actions: A batch of actions.
weights: Optional scalar or elementwise (per-batch-entry) importance weights.
training: Whether training should be applied.

Returns
actor_loss: A scalar actor loss.

alpha_loss

Computes the alpha_loss for EC-SAC training.

Args
time_steps: A batch of timesteps.
weights: Optional scalar or elementwise (per-batch-entry) importance weights.
training: Whether this loss is being used during training.

Returns
alpha_loss: A scalar alpha loss.

critic_loss

Computes the critic loss for SAC training.

Args
time_steps: A batch of timesteps.
actions: A batch of actions.
next_time_steps: A batch of next timesteps.
td_errors_loss_fn: A function(td_targets, predictions) to compute elementwise (per-batch-entry) loss.
gamma: Discount for future rewards.
reward_scale_factor: Multiplicative factor to scale rewards.
weights: Optional scalar or elementwise (per-batch-entry) importance weights.
training: Whether this loss is being used for training.

Returns
critic_loss: A scalar critic loss.

initialize

Initializes the agent.

Returns
An operation that can be used to initialize the agent.

Raises
RuntimeError: If the class was not initialized properly (super.__init__ was not called).

loss

Gets loss from the agent.

If the user calls this from _train, it must be in a tf.GradientTape scope in order to apply gradients to trainable variables. If intermediate gradient steps are needed, _loss and _train will return different values since _loss only supports updating all gradients at once after all losses have been calculated.

Args
experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights: (Optional.) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
training: Explicit argument to pass to loss. This typically affects network computation paths like dropout and batch normalization.
**kwargs: Any additional data as args to loss.

Returns
A LossInfo loss tuple containing loss and info tensors.

Raises
RuntimeError: If the class was not initialized properly (super.__init__ was not called).
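
For example (a hedged sketch; the experience variable is assumed to be a Trajectory batch sampled elsewhere, e.g. from the replay-buffer dataset above), loss can evaluate the current losses without applying a gradient step:

    # experience: a Trajectory shaped [batch, time, ...] matching
    # agent.training_data_spec (hypothetical variable sampled from a dataset).
    loss_info = agent.loss(experience, training=False)
    print('total loss:', float(loss_info.loss))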

post_process_policy

Post process policies after training.

The policies of some agents require expensive post processing after training before they can be used; e.g., a recommender agent might require rebuilding an index of actions. For such agents, this method will return a post processed version of the policy. The post processing may either update the existing policies in place or create a new policy, depending on the agent. The default implementation for agents that do not want to override this method is to return agent.policy.

Returns
The post processed policy.

preprocess_sequence

Defines preprocess_sequence function to be fed into replay buffers.

This defines how we preprocess the collected data before training. Defaults to pass through for most agents. Structure of experience must match that of self.collect_data_spec.

Args
experience: A Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns
A post processed Trajectory with the same shape as the input.

train

Trains the agent.

Args
experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights: (Optional.) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
**kwargs: Any additional data to pass to the subclass.

Returns
A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises
RuntimeError: If the class was not initialized properly (super.__init__ was not called).
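
A minimal training-loop sketch, assuming the dataset built from the replay buffer in the attribute example above; the iteration count and logging interval are arbitrary illustrative values:

    iterator = iter(dataset)

    for _ in range(10000):
      # Each element is a (Trajectory, sample_info) pair from the replay buffer.
      experience, _ = next(iterator)
      loss_info = agent.train(experience)

      step = agent.train_step_counter.numpy()
      if step % 1000 == 0:
        print('step', step, 'loss', float(loss_info.loss))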