Class to build Bernoulli Thompson Sampling policies.
Inherits From: TFPolicy
tf_agents.bandits.policies.bernoulli_thompson_sampling_policy.BernoulliThompsonSamplingPolicy(
    time_step_spec: tf_agents.typing.types.TimeStep,
    action_spec: tf_agents.typing.types.NestedTensorSpec,
    alpha: Sequence[tf.Variable],
    beta: Sequence[tf.Variable],
    observation_and_action_constraint_splitter: Optional[types.Splitter] = None,
    emit_policy_info: Sequence[Text] = (),
    name: Optional[Text] = None
)
| Args | 
|---|
| time_step_spec | A TimeStepspec of the expected time_steps. | 
| action_spec | A nest of BoundedTensorSpec representing the actions. | 
| alpha | list or tuple of tf.Variable's. It holds the alpha parameter of
the beta distribution of each arm. | 
| beta | list or tuple of tf.Variable's. It holds the beta parameter of the
beta distribution of each arm. | 
| observation_and_action_constraint_splitter | A function used for masking
valid/invalid actions with each state of the environment. The function
takes in a full observation and returns a tuple consisting of 1) the
part of the observation intended as input to the network and 2) the
mask. The mask should be a 0-1 Tensor of shape [batch_size,
num_actions]. This function should also work with a TensorSpec as
input, and should output TensorSpec objects for the observation and
mask. | 
| emit_policy_info | (tuple of strings) what side information we want to get
as part of the policy info. Allowed values can be found in policy_utilities.PolicyInfo. | 
| name | The name of this policy. All variables in this module will fall
under that name. Defaults to the class name. | 
| Raises | 
|---|
| NotImplementedError | If action_spec contains more than one BoundedTensorSpec or the BoundedTensorSpec is not valid. | 
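A minimal construction sketch. The arm count, observation spec, and variable initializations below are illustrative assumptions, not part of the API:

```python
import tensorflow as tf
from tf_agents.bandits.policies import bernoulli_thompson_sampling_policy
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

num_arms = 3  # Illustrative arm count.

# Per-arm Beta(alpha, beta) parameters, kept as tf.Variables so a training
# loop or agent can update them from observed Bernoulli rewards.
alpha = [tf.Variable(1.0) for _ in range(num_arms)]
beta = [tf.Variable(1.0) for _ in range(num_arms)]

# The observation spec here is a placeholder; Bernoulli Thompson sampling
# draws arms from the per-arm beta posteriors rather than from observation
# features.
observation_spec = tf.TensorSpec(shape=(1,), dtype=tf.float32)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
    shape=(), dtype=tf.int32, minimum=0, maximum=num_arms - 1)

policy = bernoulli_thompson_sampling_policy.BernoulliThompsonSamplingPolicy(
    time_step_spec=time_step_spec,
    action_spec=action_spec,
    alpha=alpha,
    beta=beta)
```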
| Attributes | 
|---|
| action_spec | Describes the TensorSpecs of the Tensors expected by step(action). action can be a single Tensor, or a nested dict, list or tuple of Tensors. | 
| collect_data_spec | Describes the Tensors written when using this policy with an environment. | 
| emit_log_probability | Whether this policy instance emits log probabilities or not. | 
| info_spec | Describes the Tensors emitted as info by action and distribution. info can be an empty tuple, a single Tensor, or a nested dict, list or tuple of Tensors. | 
| observation_and_action_constraint_splitter |  | 
| policy_state_spec | Describes the Tensors expected by step(_, policy_state). policy_state can be an empty tuple, a single Tensor, or a nested dict, list or tuple of Tensors. | 
| policy_step_spec | Describes the output of action(). | 
| time_step_spec | Describes the TimeStep tensors returned by step(). | 
| trajectory_spec | Describes the Tensors written when using this policy with an environment. | 
| validate_args | Whether action & distribution validate input and output args. | 
Methods
action
View source
action(
    time_step: tf_agents.trajectories.TimeStep,
    policy_state: tf_agents.typing.types.NestedTensor = (),
    seed: Optional[types.Seed] = None
) -> tf_agents.trajectories.PolicyStep
Generates next action given the time_step and policy_state.
| Args | 
|---|
| time_step | A TimeStep tuple corresponding to time_step_spec(). | 
| policy_state | A Tensor, or a nested dict, list or tuple of Tensors
representing the previous policy_state. | 
| seed | Seed to use if action performs sampling (optional). | 
| Returns | 
|---|
| A PolicyStep named tuple containing: action: An action Tensor matching the action_spec. state: A policy state tensor to be fed into the next call to action. info: Optional side information such as action log probabilities. | 
| Raises | 
|---|
| RuntimeError | If subclass __init__ didn't call super().__init__.
ValueError or TypeError: If validate_args is True and inputs or
  outputs do not match time_step_spec, policy_state_spec,
  or policy_step_spec. | 
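Continuing the construction sketch above, a hedged usage example; the observation batch is an illustrative placeholder:

```python
# Build a batched initial TimeStep from a placeholder observation.
observation = tf.constant([[0.0]], dtype=tf.float32)  # batch_size = 1
time_step = ts.restart(observation, batch_size=1)

policy_step = policy.action(time_step)
print(policy_step.action)  # int32 arm index in [0, num_arms - 1] per batch element.
```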
distribution
View source
distribution(
    time_step: tf_agents.trajectories.TimeStep,
    policy_state: tf_agents.typing.types.NestedTensor = ()
) -> tf_agents.trajectories.PolicyStep
Generates the distribution over next actions given the time_step.
| Args | 
|---|
| time_step | A TimeStep tuple corresponding to time_step_spec(). | 
| policy_state | A Tensor, or a nested dict, list or tuple of Tensors
representing the previous policy_state. | 
| Returns | 
|---|
| A PolicyStep named tuple containing: action: A tf.distribution capturing the distribution of next actions. state: A policy state tensor for the next call to distribution. info: Optional side information such as action log probabilities. | 
| Raises | 
|---|
| ValueError or TypeError: If validate_args is True and inputs or
outputs do not match time_step_spec, policy_state_spec,
or policy_step_spec. | 
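Again continuing the sketch, distribution returns the same PolicyStep structure, but with a distribution in the action field instead of a sampled Tensor:

```python
dist_step = policy.distribution(time_step)
action_distribution = dist_step.action  # A tfp distribution over arm indices.
sampled_action = action_distribution.sample()
```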
get_initial_state
View source
get_initial_state(
    batch_size: Optional[types.Int]
) -> tf_agents.typing.types.NestedTensor
Returns an initial state usable by the policy.
| Args | 
|---|
| batch_size | Tensor or constant: size of the batch dimension. Can be None,
in which case no batch dimension is added. | 
| Returns | 
|---|
| A nested object of type policy_state containing properly
initialized Tensors. | 
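Continuing the sketch above: this policy carries no recurrent state, so its state spec is empty and the initial state is trivial; the call is shown only to illustrate the plumbing:

```python
initial_state = policy.get_initial_state(batch_size=1)  # Empty for this stateless policy.
policy_step = policy.action(time_step, policy_state=initial_state)
```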
update
View source
update(
    policy,
    tau: float = 1.0,
    tau_non_trainable: Optional[float] = None,
    sort_variables_by_name: bool = False
) -> tf.Operation
Update the current policy with another policy.
This would include copying the variables from the other policy.
| Args | 
|---|
| policy | Another policy that this policy can update from. | 
| tau | A float scalar in [0, 1]. When tau is 1.0 (the default), we do a hard
update. This is used for trainable variables. | 
| tau_non_trainable | A float scalar in [0, 1] for non_trainable variables.
If None, will copy from tau. | 
| sort_variables_by_name | A bool; when True, the variables will be sorted by name
before doing the update. | 
| Returns | 
|---|
| A TF op to do the update. |
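A brief sketch of update, assuming serving_policy and trained_policy are two BernoulliThompsonSamplingPolicy instances built with the same variable structure (both names are hypothetical):

```python
# Hard-copy (tau=1.0, the default) the trained policy's variables into the
# serving copy.
update_op = serving_policy.update(trained_policy, tau=1.0)
```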