tf_agents.agents.ppo.ppo_kl_penalty_agent.PPOKLPenaltyAgent

A PPO Agent implementing the KL penalty loss.

Inherits From: PPOAgent

time_step_spec A TimeStep spec of the expected time_steps.
action_spec A nest of BoundedTensorSpec representing the actions.
actor_net A network.DistributionNetwork which maps observations to action distributions. Commonly, it is set to actor_distribution_network.ActorDistributionNetwork.
value_net A Network which returns the value prediction for input states, with call(observation, step_type, network_state). Commonly, it is set to value_network.ValueNetwork.
num_epochs Number of epochs for computing policy updates. (Schulman, 2017) sets this to 10 for Mujoco, 15 for Roboschool, and 3 for Atari.
initial_adaptive_kl_beta Initial value for beta coefficient of adaptive KL penalty. This initial value is not important in practice because the algorithm quickly adjusts to it. A common default is 1.0.
adaptive_kl_target Desired KL target for policy updates. If actual KL is far from this target, adaptive_kl_beta will be updated. You should tune this for your environment. 0.01 was found to perform well for Mujoco.
adaptive_kl_tolerance A tolerance for adaptive_kl_beta. Mean KL above (1 + tol) * adaptive_kl_target, or below (1 - tol) * adaptive_kl_target, will cause adaptive_kl_beta to be updated. 0.5 was chosen heuristically in the paper, but the algorithm is not very sensitive to it.
optimizer Optimizer to use for the agent; defaults to tf.compat.v1.train.AdamOptimizer.
use_gae If True, uses generalized advantage estimation for computing per-timestep advantage. Else, just subtracts value predictions from empirical return.
use_td_lambda_return If True, uses td_lambda_return for training the value function; here: td_lambda_return = gae_advantage + value_predictions. use_gae must also be set to True to enable TD-lambda returns. If use_td_lambda_return is set to True while use_gae is False, the empirical return will be used and a warning will be logged.
lambda_value Lambda parameter for TD-lambda computation. Defaults to 0.95, which is the value used for all environments in the paper.
discount_factor Discount factor for return computation. Defaults to 0.99, which is the value used for all environments in the paper.
value_pred_loss_coef Multiplier for value prediction loss to balance with policy gradient loss. Defaults to 0.5, which was used for all environments in the OpenAI baseline implementation. This parameter is irrelevant unless you are sharing part of actor_net and value_net. In that case, you would want to tune this coefficient, whose value depends on the network architecture of your choice.
entropy_regularization Coefficient for entropy regularization loss term. Defaults to 0.0 because no entropy bonus was applied in the PPO paper.
policy_l2_reg Coefficient for L2 regularization of unshared actor_net weights. Defaults to 0.0 because no L2 regularization was applied on the policy network weights in the PPO paper.
value_function_l2_reg Coefficient for L2 regularization of unshared value function weights. Defaults to 0.0 because no L2 regularization was applied on the value network weights in the PPO paper.
shared_vars_l2_reg Coefficient for L2 regularization of weights shared between actor_net and value_net. Defaults to 0.0 because no L2 regularization was applied on either network in the PPO paper.
normalize_observations If True (default False), keeps moving mean and variance of observations and normalizes incoming observations. Additional optimization proposed in (Ilyas et al., 2018). If True, and the observation spec is not tf.float32 (such as Atari), please manually convert the observation spec received from the environment to tf.float32 before creating the networks. Otherwise, the normalized input to the network (float32) will have a different dtype from what the network expects, resulting in a mismatch error.

Example usage:

observation_tensor_spec, action_spec, time_step_tensor_spec = (
    spec_utils.get_tensor_specs(env))
normalized_observation_tensor_spec = tf.nest.map_structure(
    lambda s: tf.TensorSpec(
        dtype=tf.float32, shape=s.shape, name=s.name
    ),
    observation_tensor_spec
)

actor_net = actor_distribution_network.ActorDistributionNetwork(
    normalized_observation_tensor_spec, ...)
value_net = value_network.ValueNetwork(
    normalized_observation_tensor_spec, ...)
# Note that the agent still uses the original time_step_tensor_spec
# from the environment.
agent = ppo_clip_agent.PPOClipAgent(
    time_step_tensor_spec, action_spec, actor_net, value_net, ...)

normalize_rewards If True, keeps moving variance of rewards and normalizes incoming rewards. While not mentioned directly in the PPO paper, reward normalization was implemented in OpenAI baselines and (Ilyas et al., 2018) pointed out that it largely improves performance. You may refer to Figure 1 of https://arxiv.org/pdf/1811.02553.pdf for a comparison with and without reward scaling.
reward_norm_clipping Value above and below which to clip the normalized reward. Additional optimization proposed in (Ilyas et al., 2018), where it is set to 5 or 10.
log_prob_clipping +/- value for clipping log probs to prevent inf / NaN values. Default: no clipping.
gradient_clipping Norm length to clip gradients. Default: no clipping.
value_clipping Differences between new and old value predictions are clipped to this threshold. Value clipping can be helpful when training very deep networks. Default: no clipping.
kl_cutoff_coef kl_cutoff_coef and kl_cutoff_factor are additional params if one wants to use a KL cutoff loss term in addition to the adaptive KL loss term. Defaults to 0.0 to disable the KL cutoff loss term, as this was not used in the paper. kl_cutoff_coef is the coefficient to multiply the KL cutoff loss term by before adding it to the total loss function.
kl_cutoff_factor Only meaningful when kl_cutoff_coef > 0.0. A multiplier used for calculating the KL cutoff ( = kl_cutoff_factor * adaptive_kl_target). If the policy KL, averaged across the batch, changes by more than the cutoff, a squared cutoff loss is added to the loss function.
check_numerics If true, adds tf.debugging.check_numerics to help find NaN / Inf values. For debugging only.
debug_summaries A bool to gather debug summaries.
compute_value_and_advantage_in_train A bool to indicate where value prediction and advantage calculation happen. If True, both happen in agent.train(). If False, value prediction is computed during data collection. This argument must be set to False if mini batch learning is enabled.
update_normalizers_in_train A bool to indicate whether normalizers are updated at the end of the train method. Set to False if mini batch learning is enabled, or if train is called on multiple iterations of the same trajectories. In that case, you would need to call the update_reward_normalizer and update_observation_normalizer methods after all iterations of the same trajectory are done. This ensures that normalizers are updated in the same way as (Schulman, 2017).
summarize_grads_and_vars If true, gradient summaries will be written.
train_step_counter An optional counter to increment every time the train op is run. Defaults to the global_step.
name The name of this agent. All variables in this module will fall under that name. Defaults to the class name.

ValueError If the actor_net is not a DistributionNetwork or value_net is not a Network.
ValueError If kl_cutoff_coef > 0.0 (indicating that a KL cutoff loss term will be added), but kl_cutoff_factor is None.
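
For reference, the following is a minimal construction sketch for this agent. It assumes an existing batched TF environment named env; the hyperparameter values are illustrative choices taken from the argument descriptions above, not tuned recommendations.

import tensorflow as tf
from tf_agents.agents.ppo import ppo_kl_penalty_agent
from tf_agents.networks import actor_distribution_network
from tf_agents.networks import value_network

# `env` is assumed to be an existing TFEnvironment (e.g. a TFPyEnvironment
# wrapping a continuous-control task).
actor_net = actor_distribution_network.ActorDistributionNetwork(
    env.observation_spec(), env.action_spec(), fc_layer_params=(64, 64))
value_net = value_network.ValueNetwork(
    env.observation_spec(), fc_layer_params=(64, 64))

agent = ppo_kl_penalty_agent.PPOKLPenaltyAgent(
    env.time_step_spec(),
    env.action_spec(),
    actor_net=actor_net,
    value_net=value_net,
    num_epochs=10,                 # (Schulman, 2017) uses 10 for Mujoco.
    initial_adaptive_kl_beta=1.0,  # Quickly adjusted by the algorithm.
    adaptive_kl_target=0.01,       # Found to work well for Mujoco; tune per env.
    adaptive_kl_tolerance=0.5,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=3e-4),
    use_gae=True,
    use_td_lambda_return=True)
agent.initialize()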

action_spec TensorSpec describing the action produced by the agent.
actor_net Returns actor_net TensorFlow template function.
collect_data_spec Returns a Trajectory spec, as expected by the collect_policy.
collect_policy Return a policy that can be used to collect data from the environment.
data_context

debug_summaries

policy Return the current policy held by the agent.
summaries_enabled

summarize_grads_and_vars

time_step_spec Describes the TimeStep tensors expected by the agent.
train_argspec TensorSpec describing extra supported kwargs to train().
train_sequence_length The number of time steps needed in experience tensors passed to train.

Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required.

For example, for non-RNN DQN training, T=2 because DQN requires single transitions.

If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.

train_step_counter

training_data_spec Returns a trajectory spec, as expected by the train() function.
validate_args Whether train & preprocess_sequence validate input & output args.

Methods

adaptive_kl_loss

View source

compute_advantages

View source

Compute advantages, optionally using GAE.

Based on baselines ppo1 implementation. Removes final timestep, as it needs to use this timestep for next-step value prediction for TD error computation.

Args
rewards Tensor of per-timestep rewards.
returns Tensor of per-timestep returns.
discounts Tensor of per-timestep discounts. Zero for terminal timesteps.
value_preds Cached value estimates from the data-collection policy.

Returns
advantages Tensor of length (len(rewards) - 1), because the final timestep is just used for next-step value prediction.
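
To make the GAE path concrete, here is a minimal NumPy sketch of the recursion applied when use_gae=True. It is illustrative only: the actual implementation is vectorized TensorFlow, but the shapes follow the description above (the final timestep is consumed for the next-step value prediction).

import numpy as np

def gae_advantages(rewards, discounts, value_preds, lambda_value=0.95):
  # TD errors; the final timestep only supplies the next-step value
  # prediction, so the result has length len(rewards) - 1.
  deltas = rewards[:-1] + discounts[:-1] * value_preds[1:] - value_preds[:-1]
  advantages = np.zeros_like(deltas)
  gae = 0.0
  for t in reversed(range(len(deltas))):
    gae = deltas[t] + discounts[t] * lambda_value * gae
    advantages[t] = gae
  return advantages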

compute_return_and_advantage

View source

Compute the Monte Carlo return and advantage.

Normalization will be applied to the computed returns and advantages if it is enabled.

Args
next_time_steps batched tensor of TimeStep tuples after action is taken.
value_preds Batched value prediction tensor. Should have one more entry in time index than time_steps, with the final value corresponding to the value prediction of the final state.

Returns
tuple of (return, normalized_advantage), both are batched tensors.
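
A hypothetical call sketch, just to illustrate the extra time entry expected in value_preds (tensor names here are placeholders):

# next_time_steps: TimeStep batch shaped [B, T]
# value_preds:     value predictions shaped [B, T + 1]; the final entry is the
#                  value estimate for the final state.
returns, normalized_advantages = agent.compute_return_and_advantage(
    next_time_steps, value_preds)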

entropy_regularization_loss

View source

Create regularization loss tensor based on agent parameters.

get_loss

View source

Compute the loss and create optimization op for one training epoch.

All tensors should have a single batch dimension.

Args
time_steps A minibatch of TimeStep tuples.
actions A minibatch of actions.
act_log_probs A minibatch of action log probabilities (log probability under the sampling policy).
returns A minibatch of per-timestep returns.
normalized_advantages A minibatch of normalized per-timestep advantages.
action_distribution_parameters Parameters of data-collecting action distribution. Needed for KL computation.
weights Optional scalar or element-wise (per-batch-entry) importance weights. Includes a mask for invalid timesteps.
train_step A train_step variable to increment for each train step. Typically the global_step.
debug_summaries True if debug summaries should be created.
old_value_predictions (Optional) The saved value predictions, used for calculating the value estimation loss when value clipping is performed.
training Whether this loss is being used for training.

Returns
A tf_agent.LossInfo named tuple with the total_loss and all intermediate losses in the extra field contained in a PPOLossInfo named tuple.

initialize

View source

Initializes the agent.

Returns
An operation that can be used to initialize the agent.

Raises
RuntimeError If the class was not initialized properly (super.__init__ was not called).

kl_cutoff_loss

View source

kl_penalty_loss

View source

Compute a loss that penalizes policy steps with high KL.

Based on KL divergence from old (data-collection) policy to new (updated) policy.

All tensors should have a single batch dimension.

Args
time_steps TimeStep tuples with observations for each timestep. Used for computing new action distributions.
action_distribution_parameters Action distribution params of the data collection policy, used for reconstructing old action distributions.
current_policy_distribution The policy distribution, evaluated on all time_steps.
weights Optional scalar or element-wise (per-batch-entry) importance weights. Includes a mask for invalid timesteps.
debug_summaries True if debug summaries should be created.

Returns
kl_penalty_loss The sum of a squared penalty for KL over a constant threshold, plus an adaptive penalty that encourages updates toward a target KL divergence.
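
The following is an illustrative sketch of how the two terms combine, assuming both are computed from the per-timestep KL between the old and new policies; reductions and masking are simplified relative to the actual implementation.

import tensorflow as tf

def kl_penalty_loss_sketch(kl_divergence, weights, adaptive_kl_beta,
                           adaptive_kl_target, kl_cutoff_factor,
                           kl_cutoff_coef):
  mean_kl = tf.reduce_mean(kl_divergence * weights)
  # Adaptive term: beta is adjusted between epochs by update_adaptive_kl_beta.
  adaptive_term = adaptive_kl_beta * mean_kl
  # Cutoff term: squared penalty on mean KL above kl_cutoff_factor * target;
  # disabled when kl_cutoff_coef == 0.0.
  kl_cutoff = kl_cutoff_factor * adaptive_kl_target
  cutoff_term = kl_cutoff_coef * tf.square(tf.maximum(mean_kl - kl_cutoff, 0.0))
  return adaptive_term + cutoff_term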

l2_regularization_loss

View source

policy_gradient_loss

View source

Create tensor for policy gradient loss.

All tensors should have a single batch dimension.

Args
time_steps TimeSteps with observations for each timestep.
actions Tensor of actions for timesteps, aligned on index.
sample_action_log_probs Tensor of the log probability of each sampled action under the data-collection policy.
advantages Tensor of advantage estimate for each timestep, aligned on index. Works better when advantage estimates are normalized.
current_policy_distribution The policy distribution, evaluated on all time_steps.
weights Optional scalar or element-wise (per-batch-entry) importance weights. Includes a mask for invalid timesteps.
debug_summaries True if debug summaries should be created.

Returns
policy_gradient_loss A tensor that will contain policy gradient loss for the on-policy experience.
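
As a rough sketch of the surrogate objective (the KL penalty variant does not clip the importance ratio), assuming log probabilities have already been gathered for the chosen actions:

import tensorflow as tf

def policy_gradient_loss_sketch(current_log_probs, sample_action_log_probs,
                                advantages, weights):
  # Ratio of new-policy to data-collection-policy action probabilities.
  importance_ratio = tf.exp(current_log_probs - sample_action_log_probs)
  # Maximizing ratio * advantage is equivalent to minimizing its negation.
  per_timestep_loss = -importance_ratio * advantages
  return tf.reduce_mean(per_timestep_loss * weights)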

preprocess_sequence

View source

Defines preprocess_sequence function to be fed into replay buffers.

This defines how we preprocess the collected data before training. Defaults to pass through for most agents. Structure of experience must match that of self.collect_data_spec.

Args
experience a Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns
A post processed Trajectory with the same shape as the input.

Raises
TypeError If experience does not match self.collect_data_spec structure types.

train

View source

Trains the agent.

Args
experience A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
**kwargs Any additional data as declared by self.train_argspec.

Returns
A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises
TypeError If validate_args is True and: Experience is not type Trajectory; or if experience does not match self.training_data_spec structure types.
ValueError If validate_args is True and: Experience tensors' time axes are not compatible with self.train_sequence_length; or if experience does not match self.training_data_spec structure.
ValueError If validate_args is True and the user does not pass **kwargs matching self.train_argspec.
RuntimeError If the class was not initialized properly (super.__init__ was not called).
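
A minimal collect-and-train loop sketch, assuming agent and a batched TF environment env already exist; the driver and buffer sizes are illustrative.

from tf_agents.drivers import dynamic_episode_driver
from tf_agents.replay_buffers import tf_uniform_replay_buffer

replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=env.batch_size,
    max_length=1000)
collect_driver = dynamic_episode_driver.DynamicEpisodeDriver(
    env, agent.collect_policy,
    observers=[replay_buffer.add_batch],
    num_episodes=10)

for _ in range(100):  # Number of train iterations; illustrative.
  collect_driver.run()
  # Trajectory shaped [batch, time, ...], matching training_data_spec.
  experience = replay_buffer.gather_all()
  loss_info = agent.train(experience)
  replay_buffer.clear()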

update_adaptive_kl_beta

View source

Create update op for adaptive KL penalty coefficient.

Args
kl_divergence KL divergence of old policy to new policy for all timesteps.

Returns
update_op An op which runs the update for the adaptive kl penalty term.
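
The rule can be pictured with the following Python sketch; the exact scaling factor is an implementation detail (the PPO paper adjusts beta by a factor of 2), so treat the numbers as illustrative.

def update_adaptive_kl_beta_sketch(beta, mean_kl, adaptive_kl_target,
                                   adaptive_kl_tolerance, factor=2.0):
  if mean_kl > (1.0 + adaptive_kl_tolerance) * adaptive_kl_target:
    beta *= factor  # Policy moved too far from the old one; penalize KL more.
  elif mean_kl < (1.0 - adaptive_kl_tolerance) * adaptive_kl_target:
    beta /= factor  # Updates are too conservative; relax the penalty.
  return beta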

update_observation_normalizer

View source

update_reward_normalizer

View source
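
When update_normalizers_in_train is False (see the constructor arguments above), these two methods are called manually once all train() iterations over the same trajectories are done. A usage sketch, assuming agent and a collected Trajectory named experience:

num_iterations_over_same_data = 3  # Hypothetical loop bound.
for _ in range(num_iterations_over_same_data):
  agent.train(experience)
# Update normalizers once per set of trajectories, as in (Schulman, 2017),
# passing the batched observations and rewards from the collected data.
agent.update_observation_normalizer(experience.observation)
agent.update_reward_normalizer(experience.reward)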

value_estimation_loss

View source

Computes the value estimation loss for actor-critic training.

All tensors should have a single batch dimension.

Args
time_steps A batch of timesteps.
returns Per-timestep returns for value function to predict. (Should come from TD-lambda computation.)
weights Optional scalar or element-wise (per-batch-entry) importance weights. Includes a mask for invalid timesteps.
old_value_predictions (Optional) The saved value predictions from policy_info, required when self._value_clipping > 0.
debug_summaries True if debug summaries should be created.
training Whether this loss is going to be used for training.

Returns
value_estimation_loss A scalar value estimation loss tensor.

Raises
ValueError If old_value_predictions was not passed in, but value clipping was performed.
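
An illustrative sketch of the clipped value loss path; masking and scaling details follow common PPO implementations and may differ from the exact agent code.

import tensorflow as tf

def value_estimation_loss_sketch(value_preds, returns, weights,
                                 old_value_predictions=None,
                                 value_clipping=0.0):
  value_error = tf.square(returns - value_preds)
  if value_clipping > 0.0:
    if old_value_predictions is None:
      raise ValueError('old_value_predictions is required for value clipping.')
    # Keep the new prediction within +/- value_clipping of the old one.
    clipped_preds = old_value_predictions + tf.clip_by_value(
        value_preds - old_value_predictions, -value_clipping, value_clipping)
    value_error = tf.maximum(value_error, tf.square(returns - clipped_preds))
  return tf.reduce_mean(value_error * weights)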